EU's AI Act (Commission's proposal) | Digital Watch Observatory (2023)

Table of Contents
[CONSIDERATIONS]
SECTION I. GENERAL PROVISIONS
SECTION II. PROHIBITED ARTIFICIAL INTELLIGENCE PRACTICES
SECTION III. HIGH RISK AI SYSTEMS
  Chapter 1. CLASSIFICATION OF AI SYSTEMS AS HIGH RISK
  Chapter 2. REQUIREMENTS FOR HIGH-RISK AI SYSTEMS
  Chapter 3. OBLIGATIONS FOR PROVIDERS AND USERS OF HIGH-RISK AI SYSTEMS AND OTHER PARTIES
  Chapter 4. NOTIFYING AUTHORITIES AND NOTIFIED BODIES
  Chapter 5. STANDARDS, CONFORMITY ASSESSMENT, CERTIFICATES, REGISTRATION
SECTION IV. TRANSPARENCY OBLIGATIONS FOR CERTAIN AI SYSTEMS
SECTION V. MEASURES TO SUPPORT INNOVATION
SECTION VI. GOVERNANCE
  Chapter 1. European Artificial Intelligence Board
  Chapter 2. NATIONAL COMPETENT AUTHORITIES
SECTION VII. EU DATABASE FOR STAND-ALONE HIGH-RISK AI SYSTEMS
SECTION VIII. POST-MARKET MONITORING, INFORMATION SHARING, MARKET SURVEILLANCE
  Chapter 1. POST-MARKET MONITORING
  Chapter 2. SHARING OF INFORMATION ON INCIDENTS AND MALFUNCTIONING
  Chapter 3. ENFORCEMENT
SECTION IX. CODES OF CONDUCT
SECTION X. CONFIDENTIALITY AND PENALTIES
SECTION XI. DELEGATION OF POWER AND COMMITTEE PROCEDURE
SECTION XII. FINAL PROVISIONS
APPENDIX I. ARTIFICIAL INTELLIGENCE TECHNIQUES AND APPROACHES as referred to in Article 3, point 1
APPENDIX II. LIST OF UNION HARMONIZATION LEGISLATION
APPENDIX III. HIGH-RISK AI SYSTEMS REFERRED TO IN ARTICLE 6, PARAGRAPH 2
APPENDIX IV. TECHNICAL DOCUMENTATION as referred to in Article 11, paragraph 1
APPENDIX V. EU DECLARATION OF CONFORMITY
APPENDIX VI. CONFORMITY ASSESSMENT PROCEDURE BASED ON INTERNAL CONTROL
APPENDIX VII. CONFORMITY BASED ON ASSESSMENT OF QUALITY MANAGEMENT SYSTEM AND ASSESSMENT OF TECHNICAL DOCUMENTATION
APPENDIX VIII. INFORMATION TO BE SUBMITTED WHEN REGISTERING HIGH-RISK AI SYSTEMS IN ACCORDANCE WITH ARTICLE 51
APPENDIX IX. UNION LEGISLATION ON LARGE-SCALE IT SYSTEMS IN THE AREA OF FREEDOM, SECURITY AND JUSTICE


Proposal for a

REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL ESTABLISHING HARMONIZED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLATION

{SEC(2021) 167 final} – {SWD(2021) 84 final} – {SWD(2021) 85 final}

[CONSIDERATIONS]

THE EUROPEAN PARLIAMENT AND THE COUNCIL OF THE EUROPEAN UNION,

Having regard to the Treaty on the Functioning of the European Union, and in particular Articles 16 and 114 thereof,

Having regard to the proposal from the European Commission,

After transmission of the draft legislative act to the national parliaments,

Having regard to the opinion of the European Economic and Social Committee,

Having regard to the opinion of the Committee of the Regions,

Acting in accordance with the ordinary legislative procedure,

whereas:

(1) The purpose of this Regulation is to improve the functioning of the internal market by establishing a uniform legal framework, in particular for the development, marketing and use of artificial intelligence in accordance with the values of the Union. This Regulation pursues a number of compelling public interests, such as a high level of protection of health, safety and fundamental rights, and it ensures the free movement of AI-based goods and services across borders, preventing Member States from imposing restrictions on the development, marketing and use of artificial intelligence systems, unless expressly authorized in this Regulation.

(2) Artificial intelligence systems (AI systems) can be easily implemented in several sectors of the economy and society, including cross-border, and circulate throughout the Union. Certain Member States have already examined the adoption of national rules to ensure that artificial intelligence is safe and is developed and used in accordance with fundamental rights obligations. Different national rules can lead to fragmentation of the single market and reduce legal certainty for operators developing or using artificial intelligence. A consistent and high level of protection throughout the Union should therefore be ensured, while disparities hampering the free movement of AI systems and related products and services in the internal market should be prevented by setting uniform obligations for operators and guaranteeing uniform protection of overriding public interest and the rights of individuals throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data regarding restrictions on the use of AI systems for "real-time" remote biometric identification in publicly accessible spaces for the purposes of law enforcement, it is appropriate to base this Regulation, as far as these specific rules are concerned, on Article 16 of the TFEU. In light of these specific rules and the application of Article 16 of the TFEU, it is appropriate to consult the European Data Protection Board.

(3) Artificial intelligence is a rapidly evolving family of technologies that can contribute to a wide range of economic and societal benefits across the full range of industries and social activities. By improving prediction, optimizing operations and resource allocation, and personalizing digital solutions available to individuals and organizations, the use of artificial intelligence can provide companies with important competitive advantages and support socially and environmentally beneficial outcomes, for example in healthcare, agriculture, education and training, infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, and climate change mitigation and adaptation.

(4) At the same time, depending on the circumstances of its specific use and application, artificial intelligence may create risks and damage public interests and rights protected by EU law. Such damage may be material or immaterial.

(5) An EU legal framework laying down harmonized rules on artificial intelligence is therefore necessary to promote the development, use and deployment of artificial intelligence in the internal market, while meeting a high level of protection of public interests, such as health and safety and the protection of fundamental rights as recognized and protected by EU law. In order to achieve this objective, rules should be laid down for the marketing and deployment of certain artificial intelligence systems, so as to ensure the smooth functioning of the internal market and that these systems can benefit from the principle of free movement of goods and services. By laying down these rules, this Regulation supports the Union's objective of being a global leader in the development of safe, trustworthy and ethical artificial intelligence, as stated by the European Council, and it ensures the protection of ethical principles, as specifically requested by the European Parliament.

(6) The term AI system should be clearly defined to ensure legal certainty, while allowing flexibility to take into account future technological developments. The definition should be based on the key functional properties of the software, particularly the ability, for a given set of human-defined goals, to generate outputs such as content, predictions, recommendations or decisions that influence the environment with which the system interacts, be it in a physical or digital dimension. AI systems can be designed to operate with varying levels of autonomy and be used on a stand-alone basis or as a component of a product, whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated into it (non-embedded). The definition of AI system should be supplemented by a list of specific techniques and approaches used for its development, which should be kept up to date in the light of market and technological developments through the adoption by the Commission of delegated acts amending that list.
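
To make the functional definition concrete, the following minimal Python sketch shows software that, for a human-defined goal (here, a temperature threshold), generates a "recommendation" output capable of influencing a physical environment. All names and the threshold are illustrative assumptions, not terms of the Regulation:

```python
# Illustrative sketch of recital 6's functional definition: software that, for
# human-defined objectives, generates outputs (content, predictions,
# recommendations or decisions) influencing the environment it interacts with.
from dataclasses import dataclass
from typing import List

@dataclass
class Output:
    kind: str      # "content" | "prediction" | "recommendation" | "decision"
    value: object

class ExampleAISystem:
    """Toy system with a human-defined objective: flag readings above a limit."""
    def __init__(self, threshold: float):
        self.threshold = threshold  # the human-defined goal parameter

    def infer(self, sensor_readings: List[float]) -> Output:
        # Generates a "recommendation" that may influence a physical process.
        overheating = max(sensor_readings) > self.threshold
        return Output(kind="recommendation",
                      value="shut down" if overheating else "continue")

print(ExampleAISystem(threshold=90.0).infer([72.5, 95.1]).value)  # -> shut down
```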

(7) The concept of biometric data used in this Regulation is consistent with and should be interpreted in accordance with the concept of biometric data as defined in Article 4, paragraph 14, of Regulation (EU) 2016/679 of the European Parliament and of the Council, Article 3, paragraph 18, of Regulation (EU) 2018/1725 of the European Parliament and of the Council, and Article 3, paragraph 13, of Directive (EU) 2016/680 of the European Parliament and of the Council.

(8) The term remote biometric identification system as used in this Regulation should be defined functionally as an AI system intended to identify natural persons at a distance by comparing a person's biometric data with the biometric data contained in a reference database, and without prior knowledge of whether the targeted individual will be present and identifiable, regardless of the particular technology, processes or types of biometric data used. Given their different characteristics and the ways in which they are used, as well as the different risks involved, a distinction should be made between "real-time" and "post" remote biometric identification systems. In the case of "real-time" systems, the collection of the biometric data, the comparison and the identification all happen instantaneously, almost instantaneously, or in any case without significant delay. In this regard, it should not be possible to circumvent the rules of this Regulation on the "real-time" use of the AI systems in question by providing for minor delays. "Real-time" systems involve the use of "live" or "near-live" material, such as video footage, generated by a camera or other device with similar functionality. In the case of "post" systems, on the other hand, the biometric data has already been recorded and the comparison and identification take place only after a considerable delay. This concerns material, such as images or video recordings, generated by closed-circuit television cameras or private devices, which has been generated prior to the use of the system towards the natural persons concerned.
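
The functional idea can be illustrated with a short sketch: a probe template is compared against a reference database and a match is reported only above a similarity threshold, with no prior knowledge of who is present. The cosine metric, the 0.8 threshold and all names are illustrative assumptions, not requirements of the Regulation; "real-time" use would feed the same function from a live camera stream, "post" use from archived footage:

```python
# Hedged sketch of remote biometric identification as described in recital 8:
# compare a probe biometric template against a reference database and report
# the best match above a threshold, if any.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, reference_db: dict, threshold: float = 0.8):
    """Return the best-matching identity, or None (no prior knowledge of presence)."""
    best_id, best_score = None, threshold
    for person_id, template in reference_db.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id

# "Real-time" use would call identify() on templates extracted from a live
# camera feed; "post" use would call it on templates from archived footage.
rng = np.random.default_rng(0)
db = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
print(identify(db["person_a"] + 0.01 * rng.normal(size=128), db))  # -> person_a
```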

(9) For the purposes of this Regulation, the term publicly accessible space should be understood as referring to any physical place accessible to the public, regardless of whether that place is privately or publicly owned. Therefore, the term does not cover places that are of a private nature and are not normally freely accessible to third parties, including law enforcement authorities, unless those parties are specifically invited or authorized, such as residences, private clubs, offices, warehouses and factories. Online spaces are also not covered, as they are not physical spaces. However, the mere fact that certain conditions for access to a particular space may apply, such as admission tickets or age restrictions, does not mean that the space is not publicly accessible within the meaning of this Regulation. In addition to public spaces such as streets, relevant parts of public buildings and most of the transport infrastructure, spaces such as cinemas, theatres, shops and shopping centers are therefore usually also publicly accessible. However, whether a given space is accessible to the public should be decided on a case-by-case basis, taking into account the special circumstances of the individual situation at hand.

(10) In order to ensure a level playing field and an effective protection of the rights and freedoms of individuals throughout the Union, the rules of this Regulation should apply to providers of AI systems in a non-discriminatory manner, regardless of whether they are established in the Union or in a third country, and to users of AI systems established in the Union.

(11) In view of their digital nature, certain artificial intelligence systems should fall within the scope of this Regulation even when they are not placed on the market, put into service or used in the Union. This is, for example, the case of an operator established in the Union contracting certain services to an operator established outside the Union in connection with an activity to be performed by an AI system that can be qualified as high risk and the effects of which affect natural persons located in the Union. In these circumstances, the AI system used by the operator outside the Union could process data lawfully collected in and transferred from the Union and provide the contracting operator in the Union with the output of the AI system resulting from this processing, without this AI system being placed on the market, put into service or used in the Union. In order to prevent the circumvention of this Regulation and to ensure the effective protection of natural persons located in the Union, this Regulation should also apply to providers and users of AI systems established in a third country, to the extent that the output produced by these systems is used in the Union. In order to take into account existing arrangements and particular needs for cooperation with foreign partners with whom information and evidence are exchanged, this Regulation should nevertheless not apply to public authorities of a third country and international organizations when acting within the framework of international agreements concluded at national or European level for law enforcement and judicial cooperation with the Union or with its Member States. Such agreements are concluded bilaterally between Member States and third countries or between the European Union, Europol and other EU agencies and third countries and international organisations.

(12) This Regulation should also apply to Union institutions, offices, bodies and agencies when acting as a provider or user of an AI system. AI systems developed or used exclusively for military purposes should be excluded from the scope of this Regulation if such use falls within the exclusive competence of the Common Foreign and Security Policy regulated under Title V of the Treaty on European Union (TEU). This Regulation should not affect the provisions on the liability of intermediary service providers in Directive 2000/31/EC of the European Parliament and of the Council [as amended by the Digital Services Act].

(13) In order to ensure a consistent and high level of protection of public interests in terms of health, safety and fundamental rights, common normative standards should be established for all high-risk AI systems. These standards should be in line with the Charter of Fundamental Rights of the European Union (the Charter) and should be non-discriminatory and consistent with the Union's international trade obligations.

(14) In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk-based approach should be followed. This approach should tailor the type and content of such rules to the intensity and scale of the risks that AI systems can generate. It is therefore necessary to prohibit certain artificial intelligence practices, to set requirements for high-risk AI systems and obligations for the relevant operators, and to set transparency obligations for certain AI systems.

(15) Apart from the many beneficial uses of artificial intelligence, this technology can also be misused and provide new and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and should be prohibited because they are contrary to the Union's values of respect for human dignity, freedom, equality, democracy and the rule of law and the Union's fundamental rights, including the right to non-discrimination, data protection and privacy and the rights of the child.

(16) The marketing, deployment or use of certain artificial intelligence systems intended to distort human behavior in a way that is likely to cause physical or psychological harm should be prohibited. Such AI systems deploy subliminal components that individuals cannot perceive, or exploit vulnerabilities of children and other people due to their age or physical or mental disabilities. They do so with the intention of significantly distorting a person's behavior and in a way that causes or is likely to cause harm to that person or another person. The intent cannot be presumed if the distortion of human behavior is due to factors outside the AI system that are beyond the provider's or user's control. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition if such research does not amount to use of the AI system in human-machine relationships that exposes natural persons to harm, and such research is carried out in accordance with recognized ethical standards for scientific research.

(17) AI systems providing social scoring of natural persons for general purposes by public authorities or on their behalf may lead to discriminatory results and the exclusion of certain groups. They can violate the right to dignity and non-discrimination and the values of equality and justice. Such AI systems evaluate or classify the trustworthiness of natural persons based on their social behavior in multiple contexts or known or predicted personal or personality characteristics. The social score obtained from such AI systems may lead to harmful or unfavorable treatment of natural persons or entire groups thereof in social contexts that are not related to the context in which the data was originally generated or collected, or to harmful treatment that is disproportionate or unjustified in relation to the gravity of their social behavior. Such artificial intelligence systems should therefore be banned.

(18) The use of AI systems for "real-time" remote biometric identification of natural persons in publicly accessible spaces for the purposes of law enforcement is considered to be particularly intrusive to the rights and freedoms of the persons concerned, to the extent that it may affect private life for a large part of the population, induce a feeling of constant surveillance and indirectly discourage the exercise of freedom of assembly and other fundamental rights. Moreover, the immediate effect and the limited possibilities for further checks or corrections associated with the use of such systems operating in "real time" entail increased risks to the rights and freedoms of the persons affected by law enforcement activities.

(19) The use of these systems for law enforcement purposes should therefore be prohibited, except in three exhaustively listed and narrowly defined situations where the use is strictly necessary to achieve a substantial public interest whose importance outweighs the risks. These situations involve searching for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, location, identification or prosecution of perpetrators or suspects of the criminal offenses referred to in Council Framework Decision 2002/584/JHA, if those criminal offenses are punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years and as defined in the legislation of the Member State concerned. Such a threshold for the custodial sentence or detention order in accordance with national law helps to ensure that the offense should be serious enough to potentially justify the use of "real-time" remote biometric identification systems. Of the 32 criminal offenses listed in Council Framework Decision 2002/584/JHA, some are likely to be more relevant in practice than others, as the use of "real-time" biometric remote identification will predictably be necessary and proportionate to very varying degrees for the practical exercise of the detection, location, identification or prosecution of a perpetrator or suspect of the various criminal offenses listed, and taking into account the likely differences in the seriousness, likelihood and extent of harm or possible adverse consequences.

(20) In order to ensure that these systems are used in a responsible and proportionate way, it is also important to state that, in each of these three exhaustively listed and narrowly defined situations, certain elements should be taken into account, in particular the nature of the situation giving rise to the request, the consequences of the use for the rights and freedoms of all persons concerned, and the safeguards and conditions provided for with the use. In addition, the use of "real-time" remote biometric identification systems in publicly accessible spaces for law enforcement purposes should be subject to appropriate limitations in time and space, in particular taking into account the evidence or indications regarding the threats, the victims or the perpetrator. The reference database of people should be appropriate for each use case in each of the three situations mentioned above.

(21) Any use of a "real-time" remote biometric identification system in publicly accessible spaces for law enforcement purposes should be subject to an express and specific authorization by a judicial authority or by an independent administrative authority of a Member State. Such authorization should in principle be obtained prior to use, except in duly justified urgent situations, i.e. situations where the need to use the systems in question is such that it is realistically and objectively impossible to obtain an authorization before commencing the use. In such urgent situations, use should be limited to the absolute minimum and subject to appropriate safeguards and conditions, as set out in national law and specified by the law enforcement authority itself in each individual case of urgency. In addition, in such situations, the law enforcement authority should seek to obtain an authorization as soon as possible, while providing justification for not having been able to request it earlier.

(22) It is also appropriate, within the exhaustive framework laid down in this Regulation, to provide that such use in the territory of a Member State in accordance with this Regulation should only be possible where and to the extent that the Member State concerned has decided to expressly provide for the possibility of allowing such use in its detailed rules of national law. As a result, Member States are free under this Regulation not to allow such an option at all or to allow such an option only for some of the objectives that can justify authorized use identified in this Regulation.

(23) The use of artificial intelligence systems for "real-time" biometric remote identification of natural persons in publicly accessible spaces for the purposes of law enforcement necessarily involves the processing of biometric data. The rules of this Regulation, which with certain exceptions prohibit such use based on Article 16 of the TFEU, should apply as lex specialis with respect to the rules on the processing of biometric data contained in Article 10 of Directive (EU) 2016/680, and thus regulate such use and the processing of the biometric data involved in an exhaustive manner. Such use and processing should therefore only be possible to the extent that it is compatible with the framework laid down in this Regulation; outside that framework, it should not be possible for the competent authorities, where they act with a view to law enforcement, to use such systems and process such data in connection therewith on the grounds set out in Article 10 of Directive (EU) 2016/680. In this context, this Regulation is not intended to provide the legal basis for the processing of personal data under Article 8 of Directive 2016/680. However, the use of "real-time" biometric remote identification systems in publicly accessible spaces for purposes other than law enforcement, including by competent authorities, should not be covered by the specific framework regarding such use for law enforcement purposes laid down in this Regulation. Such use for purposes other than law enforcement should therefore not be subject to the requirement for an authorization under this Regulation or to the applicable detailed rules of national law that may give effect to it.

(24) Any processing of biometric data and other personal data involved in the use of AI systems for biometric identification, except in connection with the use of "real-time" remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement as regulated by this Regulation, including where these systems are used by competent authorities in publicly accessible spaces for purposes other than law enforcement, should continue to comply with all requirements resulting from Article 9, paragraph 1, of Regulation (EU) 2016/679, Article 10, paragraph 1, of Regulation (EU) 2018/1725 and Article 10 of Directive (EU) 2016/680, as applicable.

(25) In accordance with Article 6a of Protocol No 21 on the position of the United Kingdom and Ireland in respect of the area of freedom, security and justice, annexed to the TEU and TFEU, Ireland is not bound by the rules laid down in Article 5, paragraph 1, letter d), and paragraphs 2 and 3, of this Regulation adopted on the basis of Article 16 of the TFEU, which concern the processing of personal data by Member States when carrying out activities falling within the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU, where Ireland is not bound by the rules governing forms of judicial cooperation in criminal matters or police cooperation which require compliance with the provisions laid down on the basis of Article 16 of the TFEU.

(26) In accordance with Articles 2 and 2a of Protocol No 22 on the position of Denmark, annexed to the TEU and TFEU, Denmark is not bound by the rules in Article 5, paragraph 1, letter d), and paragraphs 2 and 3, of this Regulation adopted on the basis of Article 16 of the TFEU, or subject to their application, which concern the Member States' processing of personal data when carrying out activities falling within the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU.

(27) High-risk AI systems should only be placed on the EU market or put into service if they meet certain mandatory requirements. These requirements should ensure that high-risk AI systems that are available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests recognized and protected by Union law. AI systems identified as high-risk systems should be limited to those that have a significant adverse impact on the health, safety and fundamental rights of individuals in the Union, and such limitation minimizes any potential restriction on international trade, if any.

(28) AI systems may have negative consequences for the health and safety of individuals, in particular when such systems act as components of products. In line with the objectives of EU harmonization legislation to facilitate the free movement of products in the internal market and to ensure that only safe and otherwise compliant products find their way onto the market, it is important that the safety risks that may arise from a product as a whole, due to its digital components, including AI systems, are properly prevented and mitigated. For example, increasingly autonomous robots, whether related to manufacturing or personal assistance and care, should be able to safely operate and perform their functions in complex environments. Similarly, increasingly sophisticated diagnostic systems and systems that support human decisions should be reliable and accurate in the healthcare sector, where the stakes for life and health are particularly high. The extent of the adverse impact caused by the AI system on the fundamental rights protected by the Charter is of particular relevance when an AI system is classified as high risk. These rights include the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and association and non-discrimination, consumer protection, labor rights, rights of persons with disabilities, the right to an effective remedy and to a fair trial, the right of defense and the presumption of innocence, and the right to good administration. In addition to these rights, it is important to highlight that children have specific rights as enshrined in Article 24 of the EU Charter and in the UN Convention on the Rights of the Child (elaborated in UNCRC General Comment No. 25 regarding the digital environment), both of which require consideration of the vulnerability of children and the provision of such protection and care as is necessary for their well-being. The fundamental right to a high level of environmental protection, enshrined in the Charter and implemented in Union policies, should also be taken into account when assessing the seriousness of the damage that an AI system can cause, including in relation to health and personal safety.

(29) As regards high-risk AI systems that are safety components of products or systems, or that are themselves products or systems falling within the scope of Regulation (EC) No 300/2008 of the European Parliament and of the Council, Regulation (EU) No 167/2013 of the European Parliament and of the Council, Regulation (EU) No 168/2013 of the European Parliament and of the Council, Directive 2014/90/EU of the European Parliament and of the Council, Directive (EU) 2016/797 of the European Parliament and of the Council, Regulation (EU) 2018/858 of the European Parliament and of the Council, Regulation (EU) 2018/1139 of the European Parliament and of the Council and Regulation (EU) 2019/2144 of the European Parliament and of the Council, it is appropriate to amend those acts to ensure that the Commission takes into account, on the basis of the technical and legislative specificities of each sector and without interfering with existing governance, conformity assessment and enforcement mechanisms and authorities established therein, the mandatory requirements for high-risk AI systems set out in this Regulation when adopting relevant future delegated or implementing acts on the basis of those acts.

(30) As regards AI systems that are safety components of products, or that are themselves products, falling within the scope of certain Union harmonization legislation, it is appropriate to classify them as high risk under this Regulation if the product in question undergoes a conformity assessment procedure with a third-party conformity assessment body under the relevant EU harmonization legislation. Such products are in particular machinery, toys, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical devices and in vitro diagnostic medical devices.

(31) The classification of an AI system as high risk under this Regulation should not necessarily mean that the product whose safety component is the AI system, or the AI system itself as a product, is considered "high risk" under the criteria laid down in the relevant EU harmonization legislation applicable to the product. This is particularly the case for Regulation (EU) 2017/745 of the European Parliament and of the Council and Regulation (EU) 2017/746 of the European Parliament and of the Council, where a third-party conformity assessment is provided for medium-risk and high-risk products.

(32) As far as stand-alone AI systems are concerned, i.e. high-risk AI systems other than those that are safety components of products or that are themselves products, it is appropriate to classify them as high-risk systems if, in light of their intended purpose, they pose a high risk of harm to the health and safety or the fundamental rights of persons, taking into account both the seriousness of the possible harm and the probability of its occurrence, and they are used in a number of specifically predefined areas specified in the Regulation. The identification of these systems is based on the same methodology and criteria that are also foreseen for any future changes to the list of high-risk AI systems.

(33) Technical inaccuracies in AI systems intended for remote biometric identification of natural persons may lead to biased results and have discriminatory effects. This is particularly relevant when it comes to age, ethnicity, gender or disability. Therefore, "real-time" and "post" remote biometric identification systems should be classified as high risk. In light of the risks they pose, both types of remote biometric identification systems should be subject to specific requirements regarding logging capabilities and human oversight.

(34) With regard to the management and operation of critical infrastructure, it is appropriate to classify as high risk AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heat and electricity, as their failure or malfunction can endanger human life and health on a large scale and lead to noticeable disruptions in the normal conduct of social and economic activities.

(35) AI systems used in education or training, in particular to determine access to or assign people to education and training institutions or to evaluate people in tests as part of or as a prerequisite for their training, should be considered high risk, as they can determine the educational and professional course of a person's life and therefore affect the person's ability to secure his or her livelihood. When improperly designed and used, such systems can violate the right to education and training as well as the right not to be discriminated against and perpetuate historical patterns of discrimination.

(36) AI systems used in employment, labor management and access to self-employment, in particular for the recruitment and selection of persons, for making decisions on promotion and dismissal, and for the allocation of tasks and the monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high risk, as these systems can significantly affect the future career opportunities and livelihoods of these individuals. Relevant work-related contractual relationships should involve employees and persons providing services through platforms as referred to in the Commission's work program 2021. Such persons should in principle not be considered users within the meaning of this Regulation. Throughout the recruitment process and in the evaluation, promotion or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behavior of these individuals may also affect their rights to data protection and privacy.

(37) Another area where the use of artificial intelligence systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to participate fully in society or to improve their standard of living. In particular, AI systems used to assess the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, as they determine those persons' access to financial resources or essential services such as housing, electricity and telecommunications services. AI systems used for this purpose may lead to discrimination against individuals or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origin, disability, age or sexual orientation, or create new forms of discriminatory effects. Given the very limited scale of the impact and the alternatives available on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when deployed by small-scale providers for their own use. Natural persons who apply for or receive public assistance and services from public authorities are typically dependent on these benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used to determine whether such benefits and services should be denied, reduced, revoked or reclaimed by the authorities, they may have a significant impact on people's livelihoods and may violate their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. These systems should therefore be classified as high risk. Nevertheless, this Regulation should not impede the development and use of innovative approaches in public administration, which could benefit from a wider use of compliant and secure AI systems, provided that these systems do not pose a high risk to legal and natural persons. Finally, AI systems used to dispatch or establish priority in the dispatch of emergency services should also be classified as high risk, as they make decisions in very critical situations for the life and health of persons and their property.

(38) Actions by law enforcement authorities involving certain uses of AI systems are characterized by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of liberty of a natural person, as well as other negative impacts on fundamental rights guaranteed by the Charter. In particular, if the AI system is not trained with high-quality data, does not meet sufficient requirements regarding its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into use, it may single people out in a discriminatory or otherwise incorrect or unfair manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial, as well as the right of defense and the presumption of innocence, may be hampered, especially where such AI systems are not sufficiently transparent, explainable and documented. It is therefore appropriate to classify as high risk a number of AI systems intended to be used in law enforcement contexts where accuracy, reliability and transparency are particularly important to avoid negative impacts, maintain public trust and ensure accountability and effective redress. Given the nature of the activities in question and the risks associated with them, these high-risk AI systems should in particular include AI systems intended to be used by law enforcement agencies for individual risk assessments, as polygraphs and similar tools or to detect the emotional state of natural persons, to detect "deep fakes", to evaluate the reliability of evidence in criminal cases, to predict the occurrence or recurrence of an actual or potential criminal act based on the profiling of natural persons, or to assess personality traits and characteristics or past criminal behavior of natural persons or groups, for profiling in connection with the detection, investigation or prosecution of criminal offences, as well as for crime analysis relating to natural persons. AI systems specifically intended to be used for administrative procedures by tax and customs authorities should not be considered high-risk AI systems used by law enforcement agencies for the prevention, detection, investigation and prosecution of criminal offences.

(39) AI systems used for migration, asylum and border control affect people who are often in a particularly vulnerable position and who depend on the outcome of the actions of the competent public authorities. The accuracy, non-discriminatory nature and transparency of the AI systems used in these contexts are therefore particularly important to ensure respect for the fundamental rights of the persons concerned, in particular their rights to free movement, non-discrimination, protection of privacy and personal data, international protection and good administration. It is therefore appropriate to classify as high risk AI systems intended to be used by the competent public authorities with tasks in the field of migration, asylum and border control as polygraphs and similar tools or to detect the emotional state of a natural person; for the assessment of certain risks associated with natural persons entering the territory of a Member State or applying for a visa or asylum; for checking the authenticity of the relevant documents of natural persons; and for assisting competent public authorities in the processing of applications for asylum, visas and residence permits and associated appeals with regard to the objective of establishing the eligibility of the natural persons applying for a status. AI systems in the field of migration, asylum and border control management covered by this Regulation should comply with the relevant procedural requirements laid down in Directive 2013/32/EU of the European Parliament and of the Council, Regulation (EC) No 810/2009 of the European Parliament and of the Council and other relevant legislation.

(40) Certain artificial intelligence systems intended for administration of justice and democratic processes should be classified as high risk given their potentially significant impact on democracy, the rule of law, individual liberties and the right to an effective remedy and to a fair trial. In particular, to address the risks of potential bias, error and opacity, it is appropriate to qualify as high-risk AI systems intended to assist judicial authorities in examining and interpreting the facts and the law, and in applying the law to a specific set of facts. However, such a qualification should not include AI systems intended for purely ancillary administrative activities that do not affect the actual administration of justice in individual cases, such as anonymization or pseudonymisation of court decisions, documents or data, communication between staff, administrative tasks or allocation of resources.

(41) The fact that an AI system is classified as high risk under this Regulation should not be interpreted as an indication that the use of the system is necessarily lawful under other EU legislation or under national legislation compatible with EU law, such as on the protection of personal data or on the use of polygraphs and similar tools or other systems to detect the emotional state of natural persons. Any such use should continue to take place solely in accordance with the applicable requirements arising from the Charter and from applicable secondary EU and national law. This Regulation should not be understood as providing the legal basis for the processing of personal data, including special categories of personal data where relevant.

(42) In order to mitigate the risks that high-risk AI systems placed on the Union market or otherwise put into service pose for users and affected persons, certain mandatory requirements should apply, taking into account the intended purpose of the use of the system and according to the risk management system to be established by the provider.

(43) Requirements should apply to high-risk AI systems with regard to the quality of the datasets used, technical documentation and registration, transparency and provision of information to users, human supervision and robustness, accuracy and cyber security. These requirements are necessary to effectively mitigate risks to health, safety and fundamental rights, as appropriate in light of the intended purpose of the system, and no other less trade-restrictive measures are reasonably available, thereby avoiding unjustified trade restrictions.

(44) High data quality is critical to the performance of many AI systems, especially when techniques involving the training of models are used, to ensure that the high-risk AI system functions as intended and safely and does not become a source of discrimination prohibited under EU law. High-quality training, validation and test datasets require the implementation of appropriate data governance and management practices. Training, validation and test datasets should be sufficiently relevant, representative, error-free and complete in light of the intended purpose of the system. They should also have the relevant statistical characteristics, including with respect to the individuals or groups of individuals on whom the high-risk AI system is intended to be used. In particular, training, validation and test datasets should, to the extent required in light of their intended purpose, take into account the properties, characteristics or elements specific to the particular geographical, behavioral or functional setting or context within which the AI system is intended to be used. In order to protect the rights of others from the discrimination that may result from bias in AI systems, providers should also be able to process special categories of personal data, as a matter of substantial public interest, in order to ensure bias monitoring, detection and correction in relation to high-risk AI systems.
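
By way of illustration only, the kind of data-governance check this recital gestures at, completeness and subgroup representativeness of a dataset split, might look like the following sketch; the column names and thresholds are assumptions for illustration, not criteria taken from the Regulation:

```python
# Hedged sketch of basic data-governance checks in the spirit of recital 44:
# completeness (missing values) and representativeness (subgroup shares) of a
# training/validation/test split.
import pandas as pd

def check_split(df: pd.DataFrame, group_col: str, max_missing: float = 0.01) -> dict:
    report = {}
    # Completeness: no column should exceed the allowed share of missing values.
    report["missing_ok"] = bool(df.isna().mean().max() <= max_missing)
    # Representativeness: record each subgroup's share so near-absent groups stand out.
    group_shares = df[group_col].value_counts(normalize=True)
    report["group_shares"] = group_shares.to_dict()
    report["smallest_group_share"] = float(group_shares.min())
    return report

train = pd.DataFrame({"age_band": ["18-30", "31-50", "51+", "18-30"],
                      "label": [0, 1, 0, 1]})
print(check_split(train, group_col="age_band"))
```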

(45) For the development of high-risk AI systems, certain actors, such as providers, authorized bodies and other relevant entities, such as digital innovation hubs, test facilities and researchers, should be able to access and use high-quality datasets within their respective areas of activity related to this regulation. European common data spaces established by the Commission and facilitating data sharing between companies and with authorities in the public interest will be instrumental in providing trusted, responsible and non-discriminatory access to high-quality data for training, validation and testing of AI systems. For example, in health, the European Health Data Space will facilitate non-discriminatory access to health data and the training of artificial intelligence algorithms on these data sets in a privacy-preserving, secure, timely, transparent and trustworthy manner and with appropriate institutional governance. Relevant competent authorities, including sector-specific ones, that provide or support access to data can also support the provision of high-quality data for training, validation and testing of AI systems.

(46) It is important to have information on how high-risk AI systems have been developed and how they operate throughout their life cycle in order to verify compliance with the requirements of this Regulation. This requires record keeping and the availability of technical documentation containing the information necessary to assess the AI system's compliance with the relevant requirements. Such information should include the general characteristics, capabilities and limitations of the system, the algorithms, data, and training, testing and validation processes used, as well as documentation on the relevant risk management system. The technical documentation should be kept up to date.
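
As an illustrative sketch of the record-keeping idea, a system might append a timestamped, machine-readable record for each inference; the field names below are assumptions and do not reproduce the documentation items the Regulation requires:

```python
# Hedged sketch of traceability logging in the spirit of recital 46:
# append-only, timestamped event records for each inference.
import json, time, uuid

def log_event(logfile: str, model_version: str, input_digest: str, output) -> None:
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,  # ties the output to the assessed system
        "input_digest": input_digest,    # a hash, not raw data, to limit exposure
        "output": output,
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# "sha256:..." below is a placeholder digest for illustration.
log_event("ai_system_events.jsonl", "1.4.2", "sha256:...", {"decision": "refer"})
```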

(47) In order to address the opacity that may make certain AI systems incomprehensible or too complex for natural persons, a certain degree of transparency should be required for high-risk AI systems. Users should be able to interpret the system's output and use it correctly. High-risk AI systems should therefore be accompanied by relevant documentation and instructions for use and contain concise and clear information, including in relation to possible risks to fundamental rights and discrimination, where relevant.

(48) High-risk AI systems should be designed and developed in such a way that natural persons can oversee their functioning. For this purpose, appropriate human oversight measures should be identified by the provider of the system before it is placed on the market or put into service. In particular, such measures should guarantee, where appropriate, that the system is subject to built-in operational constraints that cannot be overridden by the system itself and that it is responsive to the human operator, and that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role.
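
A minimal sketch of one such oversight measure, assuming a confidence threshold below which the system defers to the human operator; the threshold and all names are illustrative, not prescribed by the Regulation:

```python
# Hedged sketch of a human oversight measure in the spirit of recital 48: the
# system automates only above a confidence threshold it cannot change itself;
# borderline cases are decided by the human operator.
from typing import Callable

def supervised_decision(score: float,
                        ask_human: Callable[[float], bool],
                        threshold: float = 0.9) -> bool:
    """Automate only above the confidence threshold; otherwise defer to a human."""
    if score >= threshold:
        return True  # within the predefined operational constraint
    # Below the threshold the system does not decide; the operator does.
    return ask_human(score)

print(supervised_decision(0.72, ask_human=lambda s: False))
# -> False: the human operator rejected the borderline case
```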

(49) High-risk AI systems should perform consistently throughout their life cycle and meet an appropriate level of accuracy, robustness and cybersecurity in accordance with the generally accepted state of the art. The level of accuracy and the accuracy metrics should be communicated to users.

(50) Technical robustness is a key requirement for high-risk AI systems. They should be resilient to risks associated with the limitations of the system (e.g. bugs, errors, inconsistencies, unexpected situations) as well as to malicious actions that could compromise the security of the AI system and result in harmful or otherwise undesirable behavior. Failure to protect against these risks may lead to safety impacts or adversely affect fundamental rights, for example due to erroneous decisions or incorrect or biased outputs generated by the AI system.

(51) Cybersecurity plays a critical role in ensuring that AI systems are resilient to attempts by malicious third parties exploiting system vulnerabilities to change their use, behavior or performance or to compromise their security features. Cyber-attacks against AI systems can exploit AI-specific assets, such as training datasets (e.g. data poisoning) or trained models (e.g. adversarial attacks), or exploit vulnerabilities in the AI system's digital assets or the underlying ICT infrastructure. Therefore, to ensure a level of cybersecurity appropriate to the risks, appropriate measures should be taken by the providers of high-risk AI systems, also taking into account the underlying ICT infrastructure.
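
As one hedged illustration of a mitigation against training-data poisoning, a provider might screen incoming training examples for gross statistical outliers before training; the z-score rule and cutoff below are simplistic assumptions, and real defenses are considerably more involved:

```python
# Hedged sketch of a crude training-data sanitization step in the spirit of
# recital 51: drop rows whose features are extreme outliers before training.
import numpy as np

def filter_poisoning_candidates(X: np.ndarray, cutoff: float = 4.0) -> np.ndarray:
    """Keep rows whose features all lie within `cutoff` standard deviations."""
    z = np.abs((X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12))
    return X[(z < cutoff).all(axis=1)]

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 8))
X[0] = 50.0  # a crude injected outlier standing in for a poisoned sample
print(filter_poisoning_candidates(X).shape)  # -> (999, 8): the outlier is dropped
```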

(52) As part of EU harmonization legislation, rules for the marketing, deployment and use of high-risk AI systems should be laid down in accordance with Regulation (EC) No 765/2008 of the European Parliament and of the Council setting out the requirements for accreditation and market surveillance of products, Decision No 768/2008/EC of the European Parliament and of the Council on a common framework for the marketing of products, and Regulation (EU) 2019/1020 of the European Parliament and of the Council on market surveillance and compliance of products (the 'New legislative framework for the marketing of products').

(53) It is appropriate that a specific natural or legal person, defined as the provider, assumes responsibility for the marketing or deployment of a high-risk AI system, regardless of whether the natural or legal person is the person who designed or developed the system.

(54) The provider should establish a sound quality management system, ensure the implementation of the required conformity assessment procedure, prepare the relevant documentation and establish a robust post-market surveillance system. Public authorities adopting high-risk AI systems for their own use may adopt and implement the quality management system rules as part of the quality management system adopted at the national or regional level, as appropriate, taking into account the particularities of the sector and the competences and organization of the relevant public authority.

(55) If a high-risk AI system that is a safety component of a product covered by relevant sectoral new regulatory framework legislation is not placed on the market or put into service independently of the product, the manufacturer of the final product, as defined in the relevant new regulatory framework legislation, should comply with the provider's obligations set out in this Regulation and in particular ensure that the AI system embedded in the final product complies with the requirements of this Regulation.

(56) In order to enable the enforcement of this Regulation and to create a level playing field for operators, and taking into account the different ways in which digital products are made available, it is important to ensure that, under all circumstances, a person established in the Union can provide the authorities with all the necessary information about an AI system's compliance. Therefore, before making their AI systems available in the Union, where an importer cannot be identified, providers established outside the Union must appoint, by written power of attorney, an authorized representative established in the Union.

(57) In line with the principles of the new regulatory framework, specific obligations should be laid down for relevant economic operators, such as importers and distributors, to ensure legal certainty and facilitate the relevant operators' compliance with the legislation.

(58) Considering the nature of AI systems and the risks to security and fundamental rights that may be associated with their use, including with regard to the need to ensure proper monitoring of the performance of an AI system in a real-life setting, it is appropriate to determine specific areas of responsibility for users. In particular, users should use high-risk AI systems in accordance with the instructions for use, and certain other obligations should be laid down in terms of monitoring the operation of AI systems and in terms of record keeping, as applicable.

(59) It is appropriate to envisage that the user of the AI system should be the natural or legal person, public authority, agency or other body under whose authority the AI system is operated, unless the use is made in the course of a personal, non-professional activity.

(60) In view of the complexity of the artificial intelligence value chain, relevant third parties, in particular those involved in the sale and supply of software, software tools and components, pre-trained models and data, or providers of network services, should cooperate, where relevant, with providers and users to enable their compliance with the obligations under this Regulation and with the competent authorities established under this Regulation.

(61) Standardization should play a key role in providing technical solutions to providers to ensure compliance with this Regulation. Compliance with harmonized standards as defined in Regulation (EU) No 1025/2012 of the European Parliament and of the Council should be a means for providers to demonstrate compliance with the requirements of this Regulation. However, the Commission could adopt common technical specifications in areas where harmonized standards do not exist or where they are insufficient.

(62) In order to ensure a high level of credibility of high-risk AI systems, those systems should be subject to a conformity assessment before being placed on the market or put into use.

(63) In order to minimize the burden on operators and avoid any possible overlap, it is appropriate that, for high-risk AI systems related to products covered by existing Union harmonization legislation following the new regulatory framework approach, the compliance of those AI systems with the requirements of this Regulation should be assessed as part of the conformity assessment already provided for in that legislation. Thus, the applicability of the requirements of this Regulation should not affect the specific logic, methodology or general structure of conformity assessment under the relevant specific new regulatory framework legislation. This approach is fully reflected in the interaction between this Regulation and the [Machinery Regulation]. While safety risks of AI systems ensuring safety functions in machinery are covered by the requirements of this Regulation, certain specific requirements of the [Machinery Regulation] will ensure the safe integration of the AI system into the overall machinery, so as not to compromise the safety of the machinery as a whole. The [Machinery Regulation] applies the same definition of AI system as this Regulation.

(64) Considering the more extensive experience of professional certification companies in the field of product safety and the different nature of the risks involved, it is appropriate to limit, at least in an initial phase of the application of this Regulation, the scope of third-party conformity assessment to high-risk AI systems other than those related to products. Therefore, the conformity assessment of such systems should, as a general rule, be carried out by the provider under its own responsibility, with the only exception of AI systems intended to be used for the remote biometric identification of persons, for which the involvement of a notified body in the conformity assessment should be foreseen, to the extent that they are not prohibited.

(65) In order to carry out third-party conformity assessment for AI systems intended to be used for biometric remote identification of persons, notified bodies should be designated under this Regulation by the national competent authorities, provided that they comply with a set of requirements, in particular on independence, competence and absence of conflicts of interest.

(66) In line with the generally established concept of substantial change for products regulated by EU harmonization legislation, it is appropriate for an AI system to undergo a new conformity assessment whenever a change occurs which may affect the system's compliance with this Regulation or when the intended purpose of the system changes. For AI systems which continue to 'learn' after being placed on the market or put into use (i.e. they automatically adapt how functions are performed), it is necessary to provide rules establishing that changes to the algorithm and its performance that have been predetermined by the provider and assessed at the moment of the conformity assessment should not constitute a substantial change.

(67) High-risk AI systems should be CE marked to indicate their compliance with this Regulation, so that they can move freely in the internal market. Member States should not create unjustified obstacles to the placing on the market or putting into service of high-risk AI systems that comply with the requirements of this Regulation and are CE marked.

(68) Under certain conditions, rapid availability of innovative technologies can be essential for the health and safety of persons and for society as a whole. It is therefore appropriate that, for exceptional reasons of public safety or the protection of the life and health of natural persons and the protection of industrial and commercial property, Member States may allow the marketing or putting into service of AI systems which have not undergone a conformity assessment.

(69) In order to facilitate the work of the Commission and the Member States in the field of artificial intelligence, as well as to increase transparency towards the public, providers of high-risk AI systems other than those related to products falling within the scope of relevant existing Union harmonization legislation should be required to register their high-risk AI system in an EU database to be set up and managed by the Commission. The Commission should be the controller of this database in accordance with Regulation (EU) 2018/1725 of the European Parliament and of the Council. In order to ensure the full functionality of the database when deployed, the procedure for establishing the database should include the Commission's preparation of functional specifications and an independent audit report.

(70) Certain artificial intelligence systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception, regardless of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations, without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and context of use. Furthermore, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorization system. Such information and notices should be made available in formats accessible to persons with disabilities. Additionally, users who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and that would falsely appear to a person to be authentic should disclose that the content has been artificially created or manipulated by labeling the AI output accordingly and revealing its artificial origin.
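By way of illustration only, the disclosure duty described above could be met by attaching a machine-readable label to every generated artefact. The sketch below is a minimal example of such labeling; all names and the label format are assumptions, as the Regulation fixes the obligation but not any particular technical format.

    # Illustrative only: the Regulation does not prescribe a labeling format.
    # Wraps generated media together with a disclosure of its artificial origin.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class LabeledOutput:
        content: bytes      # the generated image/audio/video payload
        media_type: str     # e.g. "image/png"
        disclosure: str = "This content has been artificially generated or manipulated."
        generated_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def label_generated_content(content: bytes, media_type: str) -> LabeledOutput:
        """Attach the artificial-origin disclosure to a generated artefact."""
        return LabeledOutput(content=content, media_type=media_type)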

(71) Artificial intelligence is a rapidly evolving family of technologies that requires new forms of regulatory oversight and a safe space for experimentation, while ensuring responsible innovation and the integration of appropriate safeguards and risk mitigation measures. In order to ensure an innovation-friendly, future-proof and disruption-proof legal framework, national competent authorities from one or more Member States should be encouraged to establish artificial intelligence regulatory sandboxes to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are brought to market or otherwise put into use.

(72) The objectives of the regulatory sandboxes should be to promote AI innovation by establishing a controlled experimentation and testing environment in the development and pre-marketing phases to ensure that innovative AI systems comply with this Regulation and other relevant EU and Member State legislation; to increase legal certainty for innovators and the competent authorities' oversight and understanding of the opportunities, emerging risks and impacts of AI use; and to accelerate access to markets, including by removing barriers for small and medium-sized enterprises (SMEs) and start-ups. In order to ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the implementation of the regulatory sandboxes and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. This Regulation should provide the legal basis for the use of personal data collected for other purposes for the development of certain AI systems in the public interest within the AI regulatory sandbox, in accordance with Article 6(4) of Regulation (EU) 2016/679 and Article 6 of Regulation (EU) 2018/1725, and without prejudice to Article 4(2) of Directive (EU) 2016/680. Participants in the sandbox should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting promptly and in good faith to mitigate any high risks to safety and fundamental rights that may arise during the development and experimentation in the sandbox. The conduct of the participants in the sandbox should be taken into account when the competent authorities decide whether to impose an administrative fine under Article 83(2) of Regulation (EU) 2016/679 and Article 57 of Directive (EU) 2016/680.

(73) In order to promote and protect innovation, it is important that the interests of small providers and users of AI systems are given special consideration. To this end, Member States should develop initiatives targeted at those operators, including on awareness raising and information communication. Moreover, the specific interests and needs of small providers must be taken into account when notified bodies set conformity assessment fees. Translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, notably those of a smaller scale. Member States should, where appropriate, ensure that one of the languages they determine and accept for relevant providers' documentation and for communication with operators is one broadly understood by the largest possible number of cross-border users.

(74) In order to minimize the risks to implementation resulting from a lack of knowledge and expertise in the market, as well as to facilitate compliance of providers and notified bodies with their obligations under this Regulation, the AI-on-demand platform, the European Digital Innovation Hubs and the testing and experimentation facilities established by the Commission and the Member States at national or EU level should possibly contribute to the implementation of this Regulation. Within their respective mission and fields of competence, they may in particular provide technical and scientific support to providers and notified bodies.

(75) It is appropriate for the Commission to facilitate, as far as possible, access to testing and trial facilities for bodies, groups or laboratories established or accredited under any relevant Union harmonization legislation and carrying out tasks related to conformity assessment of products or equipment covered by the relevant EU harmonization legislation. This is especially the case for expert panels, expert laboratories and reference laboratories in the field of medical devices according to Regulation (EU) 2017/745 and Regulation (EU) 2017/746.

(76) In order to facilitate a smooth, effective and harmonized implementation of this Regulation, a European Artificial Intelligence Board should be established. The Board should be responsible for a number of advisory tasks, including issuing opinions, recommendations, advice or guidance on matters related to the implementation of this Regulation, including on technical specifications or existing standards regarding the requirements established in this Regulation, and providing advice to and assisting the Commission on specific questions related to artificial intelligence.

(77) Member States play a key role in the application and enforcement of this Regulation. In this respect, each Member State should designate one or more national competent authorities to supervise the application and implementation of this Regulation. In order to increase organizational efficiency on the part of Member States and to establish an official point of contact for the public and other counterparts at Member State and EU level, one national authority should be designated in each Member State as the national supervisory authority.

(78) In order to ensure that providers of high-risk AI systems can take into account the experience gained from the use of high-risk AI systems to improve their systems and the design and development process, or can take any possible corrective action in a timely manner, all providers should have a post-market monitoring system in place. This system is also key to ensuring that the potential risks arising from AI systems which continue to 'learn' after being placed on the market or put into use can be addressed more efficiently and in a timely manner. In this context, providers should also be required to have a system in place to report to the relevant authorities any serious incidents or breaches of national and Union law protecting fundamental rights resulting from the use of their AI systems.

(79) In order to ensure the appropriate and effective enforcement of the requirements and obligations set out in this Regulation, which is Union harmonization legislation, the market surveillance and product compliance system established by Regulation (EU) 2019/1020 should apply in its entirety. Where necessary for their mandate, national public authorities or bodies overseeing the application of EU law for the protection of fundamental rights, including equality bodies, should also have access to all documentation created under this Regulation.

(80) EU financial services legislation includes internal governance and risk management rules and requirements that apply to regulated financial institutions in relation to the provision of those services, including when they make use of AI systems. In order to ensure the consistent application and enforcement of the obligations under this Regulation and the relevant rules and requirements of EU financial services legislation, the authorities responsible for the supervision and enforcement of financial services legislation, including, where applicable, the European Central Bank, should be designated as competent authorities for the purpose of supervising the implementation of this Regulation, including for market surveillance activities, as regards AI systems provided or used by regulated and supervised financial institutions. In order to further increase the consistency between this Regulation and the rules applicable to credit institutions regulated under Directive 2013/36/EU of the European Parliament and of the Council, it is also appropriate to integrate the conformity assessment procedure and some of the providers' procedural obligations in relation to risk management, post-market surveillance and documentation into the existing obligations and procedures under Directive 2013/36/EU. In order to avoid overlaps, limited exceptions should also be considered in relation to the providers' quality management system and the monitoring obligation imposed on users of high-risk AI systems to the extent that these apply to credit institutions regulated by Directive 2013/36/EU.

(81) The development of AI systems other than high-risk AI systems in accordance with the requirements of this Regulation may lead to a greater uptake of trustworthy artificial intelligence in the Union. Providers of non-high-risk AI systems should be encouraged to develop codes of conduct to promote the voluntary application of the mandatory requirements applicable to high-risk AI systems. Providers should also be encouraged to apply, on a voluntary basis, additional requirements related to, for example, environmental sustainability, accessibility for people with disabilities, stakeholder participation in the design and development of AI systems, and diversity of development teams. The Commission may develop initiatives, including of a sectoral nature, to facilitate the lowering of technical barriers hindering the cross-border exchange of data for AI development, including on data access infrastructure and on semantic and technical interoperability between different types of data.

(82) It is important that AI systems related to products that are not high-risk in accordance with this Regulation and are therefore not required to comply with its requirements are nevertheless safe when placed on the market or put into use. To contribute to this objective, Directive 2001/95/EC of the European Parliament and of the Council would act as a safety net.

(83) In order to ensure trustful and constructive cooperation between competent authorities at Union and national level, all parties involved in the application of this Regulation should respect the confidentiality of information and data obtained during the implementation of their tasks.

(84) Member States should take all necessary measures to ensure that the provisions of this Regulation are implemented, including by providing for effective, proportionate and dissuasive penalties for infringements thereof. For certain specific infringements, Member States should take into account the margins and criteria set out in this Regulation. The European Data Protection Supervisor should have the power to impose fines on EU institutions, agencies and bodies falling within the scope of this Regulation.

(85) In order to ensure that the regulatory framework can be adapted where necessary, the power to adopt acts in accordance with Article 290 TFEU should be delegated to the Commission to amend the techniques and approaches referred to in Annex I to define AI systems, the Union harmonization legislation listed in Annex II, the high-risk AI systems listed in Annex III, the provisions on technical documentation listed in Annex IV, the content of the EU declaration of conformity in Annex V, the provisions on the conformity assessment procedures in Annexes VI and VII, and the provisions establishing the high-risk AI systems to which the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation should apply. It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making. In particular, to ensure equal participation in the preparation of delegated acts, the European Parliament and the Council should receive all documents at the same time as Member States' experts, and their experts should have systematic access to meetings of the Commission's expert groups dealing with the preparation of delegated acts.

(86) In order to ensure uniform conditions for the implementation of this Regulation, implementing powers should be conferred on the Commission. These powers should be exercised in accordance with Regulation (EU) No 182/2011 of the European Parliament and of the Council.

(87) Since the objective of this Regulation cannot be sufficiently achieved by the Member States and can rather, by reason of the scale or effects of the action, be better achieved at Union level, the Union may adopt measures in accordance with the principle of subsidiarity as set out in Article 5 TEU. In accordance with the principle of proportionality as set out in that Article, this Regulation does not go beyond what is necessary to achieve that objective.

(88) This Regulation should apply from … [OP – please insert the date in Art. 85]. However, the infrastructure related to the governance and conformity assessment system should be operational before that date and therefore the provisions on notified bodies and the governance structure should apply from … [OP – please insert the date – three months after the entry into force of this Regulation]. In addition, Member States should lay down and notify to the Commission the rules on penalties, including administrative fines, and ensure that they are properly and effectively implemented by the date of application of this Regulation. Therefore, the provisions on penalties should apply from … [OP – please insert the date – twelve months after the entry into force of this Regulation].

(89) The European Data Protection Supervisor and the European Data Protection Board were consulted in accordance with Article 42(2) of Regulation (EU) 2018/1725 and delivered an opinion on [...].

HAVE ADOPTED THIS REGULATION:

SECTION I. GENERAL PROVISIONS

Article 1. Subject matter

This Regulation lays down:

(a) harmonized rules for the marketing, deployment and use of artificial intelligence systems ("AI systems") in the Union;

(b) prohibitions of certain artificial intelligence practices;

(c) specific requirements for high-risk AI systems and obligations for operators of such systems;

(d) harmonized transparency rules for AI systems intended to interact with natural persons, emotion recognition systems and biometric categorization systems, and AI systems used to generate or manipulate image, audio or video content;

(e) rules on market surveillance and monitoring.

Article 2. Scope

1. This regulation applies to:

(a) providers marketing or deploying AI systems in the Union, regardless of whether those providers are established in the Union or in a third country;

(b) users of artificial intelligence systems located in the Union;

(c) providers and users of artificial intelligence systems located in a third country where the output produced by the system is used in the Union.

2. For high-risk AI systems that are safety components of products or systems, or that are themselves products or systems falling within the scope of the following legal acts, only Article 84 of this Regulation shall apply:

(a) Regulation (EC) 300/2008;

(b) Regulation (EU) No 167/2013;

(c) Regulation (EU) No 168/2013;

(d) Directive 2014/90/EU;

(e) Directive (EU) 2016/797;

(f) Regulation (EU) 2018/858;

(g) Regulation (EU) 2018/1139;

(h) Regulation (EU) 2019/2144.

3. This Regulation shall not apply to AI systems developed or used exclusively for military purposes.

4. This Regulation does not apply to public authorities in a third country or to international organizations falling within the scope of this Regulation pursuant to paragraph 1, where those authorities or organizations use AI systems within the framework of international agreements on law enforcement and judicial cooperation with the Union or with one or more Member States.

5. This Regulation does not affect the application of the provisions on the liability of intermediary service providers in Chapter II, Title IV, of Directive 2000/31/EC of the European Parliament and of the Council [which are replaced by the corresponding provisions of the Digital Services Act].

Article 3. Definitions

In this regulation, the following definitions apply:

1) "artificial intelligence system" (AI system) means software developed using one or more of the techniques and approaches listed in Annex I which, for a given set of human-defined objectives, can generate outputs such as content, predictions , recommendations or decisions affecting the environments they interact with;

(1) "provider": a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into use in its own name or trademark, whether paid or free;

3) "small provider": a provider that is a micro or small enterprise within the meaning of Commission Recommendation 2003/361/EC61;

(4) "user" means any natural or legal person, public authority, agency or other body that uses an AI system under its authority, except where the AI ​​system is used in connection with a personal non-professional activity

5) "authorised representative" means any natural or legal person established in the Union who has received a written authorization from a provider of an AI system to carry out and carry out on its behalf, respectively, the obligations and procedures set out in this Regulation;

6) "importer" means any natural or legal person established in the Union who markets or puts into use an AI system bearing the name or trademark of a natural or legal person established outside the Union;

7) "distributor": any natural or legal person in the supply chain, other than the provider or importer, who makes an AI system available on the EU market without affecting its characteristics;

(8) "operator": the provider, the user, the authorized representative, the importer and the distributor

9) "marketing": the first making available of an AI system on the EU market

10)"making available on the market" means any supply of an AI system for distribution or use on the Union market in connection with a commercial activity, whether for payment or free of charge;

11) "putting into service" means the supply of an artificial intelligence system for first use directly to the user or for own use on the EU market for the intended purpose;

(12) "intended purpose": the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information provided by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation;

(13) "reasonably foreseeable misuse" means the use of an AI system in a manner inconsistent with its intended purpose, but which may result from reasonably foreseeable human behavior or interaction with other systems;

(14)"safety component of a product or system": a component of a product or system which performs a safety function for that product or system, or whose failure or malfunction endangers the health and safety of persons or property

(15) "instructions for use": information provided by the provider to inform the user in particular of an AI system's intended purpose and proper use, including the specific geographic, behavioral or functional setting within which the high-risk AI system is intended to be used;

16) "revocation of an AI system": any measure aimed at obtaining the return to the provider of an AI system made available to users

17)"withdrawal of an AI system" means any measure aimed at preventing the distribution, display and offering of an AI system;

(18) "performance of an AI system" means the ability of an AI system to achieve its intended purpose

(19)"authorizing authority": the national authority responsible for establishing and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring;

20) "conformity assessment" means the process of verifying whether the requirements of Title III, Chapter 2 of this Regulation relating to an AI system have been met;

(21) "conformity assessment body" means a body that carries out third-party conformity assessment activities, including testing, certification and inspection;

(22) "notified body" means a conformity assessment body designated in accordance with this Regulation and other relevant Union harmonization legislation;

(23) "substantial change" means a change to the AI ​​system after it has been placed on the market or put into service, which affects the compliance of the AI ​​system with the requirements of Title III, Chapter 2 of this Regulation, or results in a change of the intended purpose for which the AI ​​system has been assessed to;

(24) "CE conformity marking" (CE marking) means a marking by which a provider indicates that an AI system complies with the requirements of Title III, Chapter 2 of this Regulation and other applicable EU legislation harmonizing the conditions for the marketing of products ("Union Harmonization Legislation") that ensures their application;

25) "post-marketing monitoring": all activities carried out by providers of AI systems to proactively collect and review experience from the use of AI systems they market or deploy with the aim of identifying a need for immediate apply any necessary corrective or preventive actions;

26)"market surveillance authority" means the national authority that carries out the activities and takes the measures under Regulation (EU) 2019/1020

27) "harmonised standard": a European standard as defined in Article 2(2); 1, letter c) of Regulation (EU) No. 1025/2012

(28)"common specifications": a document, other than a standard, which contains technical solutions allowing compliance with certain requirements and obligations set out in this Regulation

(29) "training data" means data used to train an artificial intelligence system by adapting its learnable parameters, including the weights of a neural network;

30) "validation data" means data used to provide an evaluation of the trained artificial intelligence system and to adjust its non-teachable parameters and its learning process, including to prevent overfitting, referring to the fact that the validation dataset may be a separate data set or part of the training data set, either as a fixed or variable partition;

31) "test data" means data used to provide an independent evaluation of the trained and validated AI system to confirm the expected performance of that system before it is placed on the market or put into use;

(32)"input data" means data provided to or directly acquired by an AI system on the basis of which the system produces an output;

(33)"biometric data": personal data resulting from specific technical processing relating to the physical, physiological or behavioral characteristics of a natural person which enable or confirm the unique identification of the natural person, such as facial images or fingerprint data;

(34)"emotion recognition system" means an artificial intelligence system for the purpose of identifying or inferring the emotions or intentions of natural persons on the basis of their biometric data;

(35)"biometric categorization system" means an artificial intelligence system intended to assign natural persons to specific categories, such as gender, age, hair colour, eye colour, tattoos, ethnic origin or sexual or political orientation, on the basis of their biometric data;

36)"remote biometric identification system": an AI system with the purpose of identifying natural persons at a distance by comparing a person's biometric data with the biometric data contained in a reference database and without prior knowledge of the user of the AI ​​system whether the person will be present and identifiable;

(37) "real-time" remote biometric identification system: a remote biometric identification system in which the registration of biometric data, the comparison and the identification all occur without significant delay. This includes not only immediate identification, but also limited short delays to avoid circumvention.

38) "post" biometric remote identification system": a biometric remote identification system, where there is no "real-time" biometric remote identification system;

(39)"publicly accessible space" means any physical place accessible to the public, whether or not certain conditions of access may apply;

(40) "law enforcement authority" means:

a) any public authority competent to prevent, investigate, detect or prosecute criminal offenses or enforce criminal sanctions, including safeguarding against and prevention of threats to public safety; or

b) any other body or entity entrusted under the law of the Member State with the exercise of public authority and public powers for the purposes of the prevention, investigation, detection or prosecution of criminal offenses or the enforcement of criminal sanctions, including protection against and prevention of threats to public safety;

41) "law enforcement" means activities carried out by law enforcement authorities for the prevention, investigation, detection or prosecution of criminal offenses or the enforcement of criminal sanctions, including protection against and prevention of threats to public safety;

42) "national regulatory authority": the authority to which a Member State assigns responsibility for the implementation and application of this Regulation, for coordinating the activities entrusted to that Member State, for acting as the common contact point for the Commission, and for to represent the Member State in the European Artificial Intelligence Board;

(43)"national competent authority" means the national regulatory authority, the authorizing authority and the market surveillance authority;

(44) "serious incident" means any incident that directly or indirectly leads to, may have led to, or may lead to any of the following:

a) death of a person or serious damage to a person's health, property or the environment

b) a serious and irreversible disruption of the management and operation of critical infrastructure.

Article 4. Amendments to Annex I

The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend the list of techniques and approaches listed in Annex I to update the list for market and technological developments based on characteristics similar to the techniques and approaches listed therein.

SECTION II. PROHIBITED ARTIFICIAL INTELLIGENCE PRACTICES

Article 5

1. The following artificial intelligence practices are prohibited:

(a) the marketing, deployment or use of an artificial intelligence system that uses subliminal techniques beyond a person's consciousness to significantly distort a person's behavior in a way that causes or is likely to cause that person or another person physical or psychological harm;

(b) the marketing, deployment or use of an artificial intelligence system that exploits the vulnerabilities of a specific group of persons due to their age or physical or mental disability in order to materially distort the behavior of a person belonging to that group in a way that causes or is likely to cause that person or another person physical or psychological harm;

c) the marketing, deployment or use of AI systems by public authorities or on their behalf to evaluate or classify the trustworthiness of natural persons over a certain period of time based on their social behavior or known or predicted personal or personality characteristics, where the social score leads to either or both of the following:

(i) harmful or unfavorable treatment of certain natural persons or entire groups thereof in social contexts that are not related to the contexts in which the data were originally generated or collected;

(ii) harmful or unfavorable treatment of certain natural persons or whole groups thereof, which is unjustified or disproportionate to their social behavior or its seriousness;

d) the use of "real-time" biometric remote identification systems in publicly accessible spaces for law enforcement purposes, unless and to the extent that such use is strictly necessary for one of the following purposes:

(i) the targeted search for specific potential victims of crime, including missing children

ii) prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack;

iii) the detection, localization, identification or prosecution of a perpetrator or suspect of a criminal offense referred to in Article 2(2) of Council Framework Decision 2002/584/JHA and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, as determined by the law of that Member State.

2. The use of "real-time" biometric remote identification systems in publicly accessible spaces for the purpose of law enforcement for any of the purposes mentioned in paragraph 1, letter d), must take into account the following elements:

a) the nature of the situation that gives rise to the possible use, in particular the seriousness, the probability and the extent of the damage caused if the system was not used

b) the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, the likelihood and the extent of these consequences.

In addition, the use of "real-time" remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the purposes referred to in paragraph 1(d) shall comply with necessary and proportionate safeguards and conditions in relation to the use, in particular as regards temporal, geographic and personal limitations.

3. As regards paragraphs 1(d) and 2, each individual use for law enforcement purposes of a "real-time" remote biometric identification system in publicly accessible spaces shall be subject to a prior authorization granted by a judicial authority or by an independent administrative authority of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 4. In a duly justified situation of urgency, the use of the system may be commenced without an authorization, and the authorization may be requested only during or after the use.

The competent judicial or administrative authority shall grant the authorization only if it is satisfied, on the basis of objective evidence or clear indications presented to it, that the use of the "real-time" remote biometric identification system in question is necessary and proportionate to achieve one of the objectives specified in paragraph 1(d), as identified in the request. When deciding on the request, the competent judicial or administrative authority shall take into account the elements referred to in paragraph 2.

4. A Member State may decide to provide for the possibility to fully or partially authorize the use of "real-time" remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement within the limits and under the conditions listed in paragraphs 1(d), 2 and 3. That Member State shall lay down in its national law the necessary detailed rules for the request, issuance and exercise of, as well as the supervision relating to, the authorizations referred to in paragraph 3. Those rules shall also specify for which of the objectives in paragraph 1(d), including for which of the criminal offenses referred to in point iii) thereof, the competent authorities may be authorized to use those systems for law enforcement purposes.

SECTION III. HIGH RISK AI SYSTEMS

Chapter 1. CLASSIFICATION OF AI SYSTEMS AS HIGH RISK

Article 6. Classification rules for high-risk AI systems

1. Regardless of whether an AI system is placed on the market or put into use independently of the products referred to in points a) and b), this AI system must be considered high risk if both of the following conditions are met:

a) the AI ​​system is intended to be used as a safety component in a product, or is itself a product covered by EU harmonization legislation listed in Annex II

b) the product whose safety component is the AI ​​system or the AI ​​system itself as a product must undergo a third-party conformity assessment for the purpose of placing on the market or putting into service that product in accordance with the Union harmonization legislation listed in Annex II.

2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk.
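Read together, paragraphs 1 and 2 amount to a simple decision rule. The sketch below merely restates that rule in code for clarity; the three boolean inputs are assumptions standing in for the legal assessments that must be made case by case, not an implementation of them.

    # Restates the Article 6 classification rule; each boolean input stands in
    # for a legal assessment that must be made case by case.
    def is_high_risk(
        is_safety_component_of_annex_ii_product: bool,    # Art. 6(1)(a)
        requires_third_party_conformity_assessment: bool, # Art. 6(1)(b)
        listed_in_annex_iii: bool,                        # Art. 6(2)
    ) -> bool:
        via_product_route = (is_safety_component_of_annex_ii_product
                             and requires_third_party_conformity_assessment)
        return via_product_route or listed_in_annex_iii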

Article 7. Amendments to Annex III

1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to update the list in Annex III by adding high-risk AI systems where both of the following conditions are met:

(a) the AI ​​systems are intended to be used in any of the areas listed in points 1 to 8 of Annex III;

(b) the AI ​​systems pose a risk of harm to health and safety or a risk of adverse impact on fundamental rights, that is, in terms of its severity and likelihood of occurrence, equal to or greater than the risk of harm or of the adverse impact posed by the high-risk AI systems already mentioned in Annex III.

2. When assessing, for the purposes of paragraph 1, whether an AI system poses a risk of harm to health and safety or a risk of an adverse impact on fundamental rights that is equal to or greater than the risk of harm posed by the high-risk AI systems already referred to in Annex III, the Commission shall take into account the following criteria:

(a) the intended purpose of the AI ​​System;

(b) the extent to which an AI system has been used or is likely to be used

(c) the extent to which the use of an AI system has already caused harm to health and safety or an adverse impact on fundamental rights, or has given rise to significant concerns regarding the materialization of such harm or adverse impact, as demonstrated by reports or documented allegations submitted to national competent authorities;

d) the potential extent of such harm or adverse impact, in particular in terms of its intensity and its ability to affect a plurality of persons;

(e) the extent to which potentially harmed or adversely affected persons are dependent on the result produced by an AI system, in particular because it is not reasonably possible for practical or legal reasons to opt out of that result;

(f) the extent to which potentially harmed or adversely affected persons are in a vulnerable position in relation to the user of an AI system, in particular due to an imbalance of power, knowledge, economic or social circumstances or age

(g) the extent to which the result produced by an AI system is readily reversible, whereby results that have an impact on the health or safety of persons shall not be considered readily reversible;

h) to what extent existing EU legislation allows for:

(i) effective compensation measures in relation to the risks posed by an AI system, excluding claims for damages;

(ii) effective measures to prevent or substantially minimize those risks.

Chapter 2. REQUIREMENTS FOR HIGH-RISK AI SYSTEMS

Article 8. Compliance with the requirements

1. High-risk AI systems must meet the requirements of this chapter.

2. The intended purpose of the high-risk AI system and the risk management system referred to in Article 9 shall be taken into account when ensuring compliance with these requirements.

Article 9. Risk management system

1. A risk management system must be established, implemented, documented and maintained in relation to high-risk AI systems.

2. The risk management system must consist of a continuous iterative process that runs throughout the life cycle of a high-risk AI system and that requires regular systematic updating. It must include the following steps:

a)identification and analysis of the known and foreseeable risks associated with each high-risk AI system

b) estimating and evaluating the risks that may arise when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse;

c) evaluation of other possible risks based on the analysis of data collected from the post-marketing surveillance system referred to in Article 61

d)adopting appropriate risk management measures in accordance with the provisions of the following paragraphs.

3. The risk management measures referred to in paragraph 2(d) shall give due consideration to the effects and possible interactions resulting from the combined application of the requirements set out in this Chapter 2. They shall take into account the generally acknowledged state of the art, including as reflected in relevant harmonized standards or common specifications.

4. The risk management measures referred to in paragraph 2(d) shall be such that any residual risk associated with each hazard, as well as the overall residual risk of the high-risk AI systems, is judged acceptable, provided that the high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse. Those residual risks shall be communicated to the user.

In identifying the most appropriate risk management measures, the following must be ensured:

a)elimination or reduction of risks as far as possible through appropriate design and development

(b) where appropriate, the implementation of appropriate mitigation and control measures in relation to risks that cannot be eliminated;

c) provision of adequate information in accordance with Article 13, in particular as regards the risks referred to in paragraph 2(b) of this Article, and, where appropriate, training of users.

In eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education and training that can be expected of the user, and the environment in which the system is intended to be used.

5. High-risk AI systems must be tested with the aim of identifying the most appropriate risk management measures. Testing shall ensure that high-risk AI systems function consistently for their intended purpose and that they comply with the requirements of this chapter.

6. Testing procedures must be suitable for achieving the intended purpose of the AI system and need not go beyond what is necessary to achieve that purpose.

7. Testing of high-risk AI systems shall be carried out, as appropriate, at any time throughout the development process and in any case before marketing or deployment. Testing must be performed against pre-defined metrics and probability thresholds appropriate for the intended purpose of the high-risk AI system.

8. When implementing the risk management system described in paragraphs 1 to 7, specific consideration must be given to whether the high-risk AI system is likely to be accessed by or have an impact on children.

9. For credit institutions regulated by Directive 2013/36/EU, the aspects described in paragraphs 1 to 8 shall be part of the risk management procedures established by those institutions pursuant to Article 74 of that Directive.
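The iterative process described in paragraph 2 can be pictured as a register of identified risks that is estimated, evaluated and, where needed, mitigated, then revisited throughout the life cycle. The sketch below is one possible shape for such a register under assumed names and a deliberately crude scoring rule; Article 9 requires a case-by-case judgement, not this formula.

    # One possible shape for an Article 9-style risk register; illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Risk:
        description: str
        severity: int        # e.g. 1 (negligible) .. 5 (critical)
        likelihood: float    # estimated probability of occurrence
        mitigations: list    # risk management measures already adopted

    def residual_risk_acceptable(risk: Risk, threshold: float) -> bool:
        """Crude severity-times-likelihood test standing in for the
        case-by-case judgement that paragraph 4 actually requires."""
        return risk.severity * risk.likelihood <= threshold

    def risk_management_cycle(risks: list, threshold: float) -> list:
        """Return risks whose residual level is still unacceptable and which
        therefore need further design changes, safeguards or user information
        (cf. paragraph 4(a)-(c))."""
        return [r for r in risks if not residual_risk_acceptable(r, threshold)]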

Article 10. Data and data management

1. High-risk AI systems that make use of techniques involving the training of models with data must be developed on the basis of training, validation and test datasets that meet the quality criteria in paragraphs 2 to 5.

2. Training, validation and test datasets must be subject to appropriate data governance and management practices. Those practices concern in particular:

(a) the relevant design choices;

(b) data collection;

(c) relevant data preparation processing operations, such as annotation, labeling, cleaning, enrichment and aggregation;

(d) the formulation of relevant assumptions, particularly with respect to the information that the data are intended to measure and represent;

(e) a prior assessment of the availability, quantity and suitability of the datasets that are needed;

(f) examination in light of possible biases;

(g) identification of any data gaps or deficiencies, and how those gaps and deficiencies can be remedied.

3. Training, validation and test datasets must be relevant, representative, error-free and complete. They must have the relevant statistical characteristics, including, where relevant, with respect to the individuals or groups of individuals on whom the high-risk AI system is intended to be used. These characteristics of the datasets can be met at the level of individual datasets or a combination thereof.

4. Training, validation, and testing datasets shall, to the extent required by the intended purpose, take into account the characteristics or elements specific to the specific geographic, behavioral, or functional setting within which the high-risk AI system is intended to be used.

5. To the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to high-risk AI systems, the providers of such systems may process special categories of personal data as referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons, including technical limitations on re-use and the use of state-of-the-art security and privacy-preserving measures, such as pseudonymisation or encryption where anonymisation may significantly affect the intended purpose.

6. Appropriate data governance and management practices shall apply to the development of high-risk AI systems other than those that make use of techniques involving the training of models, in order to ensure that those high-risk AI systems comply with paragraph 2.
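For orientation, the sketch below shows a conventional train/validation/test partition of the kind to which the quality criteria in paragraphs 2 to 4 are applied, including the "fixed or variable" validation partition mentioned in the definition of validation data (Article 3(30)). It is an ordinary machine-learning idiom with illustrative proportions, not a procedure prescribed by the Regulation.

    # A conventional dataset partition; proportions and names are illustrative.
    import random

    def split_dataset(records: list, val_fraction=0.15, test_fraction=0.15,
                      seed=None):
        """Partition records into training, validation and test sets.
        Passing a seed gives a fixed (reproducible) split; omitting it gives
        a variable one, mirroring the wording of Article 3(30)."""
        shuffled = records[:]
        random.Random(seed).shuffle(shuffled)
        n = len(shuffled)
        n_test = int(n * test_fraction)
        n_val = int(n * val_fraction)
        test = shuffled[:n_test]
        val = shuffled[n_test:n_test + n_val]
        train = shuffled[n_test + n_val:]
        return train, val, test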

Article 11. Technical documentation

1. The technical documentation for a high-risk AI system must be prepared before the system is marketed or put into use, and it must be kept up to date.

The technical documentation must be prepared in such a way as to demonstrate that the high-risk AI system meets the requirements of this chapter and to provide the national competent authorities and notified bodies with all the information necessary to assess the AI system's compliance with those requirements. It must contain at least the elements listed in Annex IV.

2. If a high-risk AI system related to a product to which the legal acts listed in Annex II, Section A apply is placed on the market or put into service, a single set of technical documentation must be drawn up containing all the information listed in Annex IV as well as the information required under those legal acts.

3. The Commission shall be empowered to adopt delegated acts in accordance with Article 73 to amend Annex IV where necessary to ensure that, in the light of technical developments, the technical documentation provides all the information necessary to assess the system's conformity with the requirements listed in this chapter.

Article 12. Record keeping

1. High-risk AI systems must be designed and developed with features that enable automatic recording of events ("logs") while the high-risk AI systems are in operation. These logging functions must conform to recognized standards or common specifications.

2. The logging capabilities must ensure a level of traceability of the operation of the AI ​​system throughout its life cycle that is appropriate for the intended purpose of the system.

3. In particular, logging capabilities must enable the monitoring of the operation of the high-risk AI system with regard to the occurrence of situations that may result in the AI system presenting a risk within the meaning of Article 65(1) or lead to a substantial change, and must facilitate the post-market monitoring referred to in Article 61.

4. For high-risk AI systems as referred to in point 1(a) of Annex III, the logging capabilities shall provide at a minimum:

(a) recording of the period of each use of the system (start date and time and end date and time of each use);

b) the reference database against which the input data has been checked by the system

(c) the input data for which the search has resulted in a match;

d) the identification of the natural persons involved in the verification of the results, as referred to in Article 14(5).
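The four logging items required by paragraph 4 translate naturally into one structured record per use of the system. The sketch below is one possible record layout under assumed field names; the Regulation fixes the content of the log, not its format.

    # One possible record layout for the logging items in Article 12(4).
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class BiometricUseLog:
        start: datetime          # (a) start date and time of the use
        end: datetime            # (a) end date and time of the use
        reference_database: str  # (b) database the input data was checked against
        matched_inputs: list     # (c) input data for which the search matched
        verifying_persons: list  # (d) natural persons verifying the results
                                 #     (cf. Article 14(5))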

Article 13. Transparency and provision of information to users

1. High-risk AI systems must be designed and developed in such a way as to ensure that their operation is sufficiently transparent for users to interpret the system's output and use it properly. An appropriate type and degree of transparency must be ensured in order to achieve compliance with the relevant obligations of the user and the provider as described in Chapter 3 of this section.

2. High-risk AI systems must be accompanied by instructions for use in an appropriate digital format or otherwise, which include concise, complete, correct and clear information that is relevant, accessible and understandable to users.

3. The information referred to in paragraph 2 shall specify:

a) the identity and contact details of the provider and, where relevant, its authorized representative

(b) characteristics, capabilities and limitations of the performance of the high-risk AI system, including:

(i) its intended purpose;

ii) the level of accuracy, robustness and cybersecurity, as referred to in Article 15, against which the high-risk AI system has been tested and validated and which can be expected, and any known and foreseeable circumstances that may have an impact on the expected level of accuracy, robustness and cyber security;

(iii) any known or foreseeable circumstance related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse which may lead to risks to health and safety or fundamental rights;

iv) its performance in relation to the persons or groups of persons on whom the system is intended to be used

(v) where applicable, input data specifications or any other relevant information regarding the training, validation and test datasets used, taking into account the intended purpose of the AI system.

(c) any changes to the high-risk AI system and its performance that have been predetermined by the provider at the time of the initial conformity assessment;

d) the human oversight measures referred to in Article 14, including the technical measures put in place to facilitate users' interpretation of the output of AI systems;

e) the expected lifetime of the high-risk AI system and any necessary maintenance and care measures to ensure that this AI system functions properly, including with respect to software updates.
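Because paragraph 3 enumerates concrete information items, providers sometimes keep them in a machine-readable form alongside the human-readable instructions. The structure below is one hypothetical way of doing so; none of the field names come from the Regulation, only the items they mirror do.

    # Hypothetical machine-readable mirror of the Article 13(3) items.
    from dataclasses import dataclass, field

    @dataclass
    class InstructionsForUse:
        provider_identity: str                       # (a)
        intended_purpose: str                        # (b)(i)
        accuracy_robustness_cybersecurity: str       # (b)(ii) tested/expected levels
        known_risk_circumstances: list = field(default_factory=list)   # (b)(iii)
        performance_on_target_groups: str = ""       # (b)(iv)
        input_data_specifications: str = ""          # (b)(v)
        predetermined_changes: list = field(default_factory=list)      # (c)
        human_oversight_measures: list = field(default_factory=list)   # (d)
        expected_lifetime_and_maintenance: str = ""  # (e)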

Article 14. Human supervision

1. High-risk AI systems must be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively monitored by natural persons during the period the AI ​​system is in use.

2. Human supervision shall aim to prevent or minimize the risks to health, safety or fundamental rights that may arise when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular when such risks persist notwithstanding the application of the other requirements set out in this chapter.

3. Human supervision must be ensured through either one or all of the following measures:

a) identified and incorporated, when technically possible, into the high-risk AI system by the provider before it is marketed or put into use

b) identified by the provider before the high-risk AI system is marketed or put into use, and which is suitable to be implemented by the user.

4. The measures referred to in paragraph 3 shall enable the persons to whom the human supervision is assigned to do the following, as appropriate to the circumstances:

(a) fully understand the capabilities and limitations of the high-risk AI system and be able to properly monitor its operation so that signs of anomalies, dysfunctions and unexpected performance can be detected and addressed as soon as possible;

b) remain aware of the possible tendency to automatically rely or over-rely on the output produced by a high-risk AI system ("automation bias"), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons;

(c) be able to correctly interpret the output of the high-risk AI system, taking into account in particular the characteristics of the system and the available interpretation tools and methods

(d) be able to decide, in any particular situation, not to use the high-risk AI system or otherwise disregard, override or reverse the output of the high-risk AI system;

(e) be able to interfere with the operation of the high-risk AI system or interrupt the system through a "stop" button or similar procedure.

5. For high-risk AI systems as referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 shall be such as to ensure that, in addition, no action or decision is taken by the user on the basis of the identification resulting from the system unless this has been verified and confirmed by at least two natural persons.
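Paragraph 5 is effectively a two-person rule: no action may follow from a biometric identification until two distinct natural persons have confirmed it. A minimal sketch of such a gate follows; the function and identifier names are assumptions made for illustration.

    # Minimal two-person confirmation gate in the spirit of Article 14(5).
    def match_confirmed(confirmations: set) -> bool:
        """confirmations: identifiers of the natural persons who have
        independently verified the identification. The match may only be
        acted upon once at least two distinct persons have confirmed it."""
        return len(confirmations) >= 2

    # Usage: act on the identification only if, e.g.,
    # match_confirmed({"officer_a", "officer_b"}) returns True.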

Article 15. Accuracy, robustness and cyber security

1. High-risk AI systems must be designed and developed in such a way that, in light of their intended purpose, they achieve an appropriate level of accuracy, robustness and cyber security and perform consistently in those respects throughout their life cycle.

2. The accuracy levels and relevant accuracy metrics for high-risk AI systems shall be specified in the accompanying instructions for use.

3. High-risk AI systems must be resilient to errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems.

The robustness of high-risk AI systems can be achieved through technical redundancy solutions, which may include backup or fail-safe plans.

High-risk AI systems that continue to learn after being placed on the market or put into service must be developed in such a way as to ensure that possibly biased outputs resulting from outputs being used as input for future operations ("feedback loops") are duly addressed with appropriate mitigation measures.

4. High-risk AI systems must be resistant to attempts by unauthorized third parties to alter their use or performance by exploiting system vulnerabilities.

The technical solutions aimed at ensuring the cyber security of high-risk AI systems must be appropriate to the relevant circumstances and risks.

The technical solutions to address AI-specific vulnerabilities must include, where appropriate, measures to prevent and control attacks trying to manipulate the training dataset ("data poisoning"), inputs designed to cause the model to make a mistake ("adversarial examples"), or model flaws.
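By way of illustration, one common precaution against the "adversarial examples" mentioned above is to probe a trained classifier's stability under small input perturbations before deployment. The sketch below uses random perturbations as a weak stand-in for a real attack; the names, tolerance and trial count are assumptions, and passing this probe is by no means a guarantee of robustness.

    # Crude robustness probe: checks that predictions are stable under small
    # input perturbations (a weak stand-in for a real adversarial attack).
    import random

    def is_locally_robust(predict, x: list, epsilon: float = 0.01,
                          trials: int = 100) -> bool:
        """predict: callable mapping a feature vector to a label.
        Returns False if any sampled perturbation within +/- epsilon
        flips the predicted label."""
        baseline = predict(x)
        rng = random.Random(0)  # fixed seed for a reproducible probe
        for _ in range(trials):
            perturbed = [v + rng.uniform(-epsilon, epsilon) for v in x]
            if predict(perturbed) != baseline:
                return False
        return True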

Chapter 3. OBLIGATIONS FOR PROVIDERS AND USERS OF HIGH-RISK AI SYSTEMS AND OTHER PARTIES

Article 16. Obligations of providers of high-risk AI systems

Providers of high-risk AI systems must:

a) ensure that their high-risk AI systems comply with the requirements of Chapter 2 of this section

(b) have a quality management system which complies with Article 17

c) prepare the technical documentation for the high-risk AI system

(d) when under their control, retain the logs automatically generated by their high-risk AI systems;

e) ensure that the high-risk AI system undergoes the relevant conformity assessment procedure before it is placed on the market or put into service;

f) meet the registration obligations in Article 51;

g) take the necessary corrective measures if the high-risk AI system does not comply with the requirements of Chapter 2 of this section

h) inform the national competent authorities of the Member States in which they have made available or put into service the AI system and, where applicable, the notified body, of the non-compliance and of any corrective measures taken;

(i) to affix the CE marking to their high-risk AI systems to indicate compliance with this Regulation in accordance with Article 49

j) upon request by a national competent authority, demonstrate that the high-risk AI system complies with the requirements of Chapter 2 of this section.

Article 17. Quality management system

1. Providers of high-risk AI systems must implement a quality management system that ensures compliance with this Regulation. This system must be documented in a systematic and transparent manner in the form of written policies, procedures and instructions and must include at least the following aspects:

a) a strategy for legal compliance, including compliance with conformity assessment procedures and procedures for managing changes to the high-risk AI system;

b) techniques, procedures and systematic actions to be used for the design, design control and design verification of the high-risk AI system;

c) techniques, procedures and systematic actions to be used for development, quality control and quality assurance of the high-risk AI system

d) examination, testing and validation procedures to be performed before, during and after the development of the high-risk AI system and the frequency with which they are to be performed

e) technical specifications, including standards to be applied and, where the relevant harmonized standards are not fully applied, the means to be used to ensure that the high-risk AI system meets the requirements of Chapter 2 of this Title;

(f) data management systems and procedures, including data collection, data analysis, data labelling, data storage, data filtering, data mining, data aggregation, data retention and any other operation relating to the data that is performed prior to and for the purposes of placing on the market or putting into service high-risk AI systems;

g) the risk management system referred to in Article 9

(h) establishing, implementing and maintaining a post-market surveillance system in accordance with Article 61

(i) procedures for reporting serious incidents and malfunctions in accordance with Article 62

j) the handling of communication with national competent authorities, competent authorities (including sector-specific ones) providing or supporting access to data, notified bodies, other operators, customers or other interested parties

k) systems and procedures for recording all relevant documentation and information

l) resource management, including measures related to security of supply

(m) an accountability framework setting out the responsibilities of management and other staff in respect of all aspects set out in this paragraph.

2. The implementation of aspects referred to in paragraph 1 must be proportionate to the size of the provider's organization.

3. For providers that are credit institutions regulated by Directive 2013/36/EU, the obligation to introduce a quality management system must be considered fulfilled by complying with the rules on internal management arrangements, processes and mechanisms in accordance with Article 74 of the said Directive. In this context, all harmonized standards referred to in Article 40 of this Regulation shall be taken into account.
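
The aspects listed in paragraph 1 lend themselves to machine-readable tracking. Purely as an assumed illustration (neither the structure nor the file names are mandated by the Regulation), a provider's internal checklist might look like this:

```python
# Hypothetical internal checklist mapping a selection of the Article 17(1)
# points to evidence documents; structure and file names are invented.
QMS_ASPECTS = {
    "(a) regulatory compliance strategy": "docs/compliance-strategy.md",
    "(b) design control and verification": "docs/design-verification.md",
    "(g) risk management system (Article 9)": "docs/risk-management.md",
    "(h) post-market monitoring (Article 61)": "docs/post-market-plan.md",
    "(m) accountability framework": None,  # evidence still missing
}

def missing_aspects(aspects: dict) -> list:
    """Return the points for which no documented evidence is recorded."""
    return [point for point, doc in aspects.items() if doc is None]

print(missing_aspects(QMS_ASPECTS))  # ['(m) accountability framework']
```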

Article 18. Duty to prepare technical documentation

1. Providers of high-risk AI systems shall prepare the technical documentation referred to in Article 11 in accordance with Annex IV.

2. Providers that are credit institutions regulated by Directive 2013/36/EU must keep the technical documentation as part of the documentation regarding internal management, arrangements, processes and mechanisms in accordance with Article 74 of the said Directive.

Article 19. Conformity assessment

1. Providers of high-risk AI systems shall ensure that their systems undergo the relevant conformity assessment procedure in accordance with Article 43 before they are placed on the market or put into service. Where the compliance of the AI systems with the requirements of Chapter 2 of this section has been demonstrated following that conformity assessment, providers shall draw up an EU declaration of conformity in accordance with Article 48 and affix the CE conformity marking in accordance with Article 49.

2. For high-risk AI systems as referred to in point 5(b) of Annex III, which are placed on the market or put into service by providers that are credit institutions regulated by Directive 2013/36/EU, the conformity assessment must be carried out as part of the procedure referred to in Articles 97 to 101 of that Directive.

Article 20. Automatically Generated Log Files

1. Providers of high-risk AI systems must retain the logs automatically generated by their high-risk AI systems to the extent that such logs are under their control by virtue of a contractual agreement with the user or otherwise by law. The logs must be kept for a period of time that is appropriate in light of the intended purpose of the high-risk AI system and applicable legal obligations under EU or national law.

2. Providers that are credit institutions regulated by Directive 2013/36/EU must maintain the logs automatically generated by their high-risk AI systems as part of the documentation pursuant to Article 74 of the said Directive.
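
Article 20 leaves the retention period to the intended purpose of the system and to applicable law. Purely as a sketch under an assumed retention rule (the 365-day figure below is invented, not taken from the Regulation), a provider might compute deletion dates like this:

```python
from datetime import date, timedelta

# Assumed retention period: the Regulation fixes no number of days, so in
# practice this value would follow from the system's intended purpose and
# from applicable EU or national law.
ASSUMED_RETENTION = timedelta(days=365)

def deletion_date(log_created: date, retention: timedelta = ASSUMED_RETENTION) -> date:
    """Earliest date on which an automatically generated log may be deleted."""
    return log_created + retention

print(deletion_date(date(2023, 1, 15)))  # 2024-01-15
```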

Article 21. Corrective Actions

Providers of high-risk AI systems that believe or have reason to believe that a high-risk AI system that they have placed on the market or put into service does not comply with this Regulation shall immediately take the necessary corrective measures to bring that system into compliance, to withdraw it or to recall it, as appropriate. They shall inform the distributors of the high-risk AI system concerned and, where applicable, the authorized representative and the importers accordingly.

Article 22. Obligation to provide information

If the high-risk AI system poses a risk according to Article 65, para. 1, and this risk is known to the provider of the system, that provider shall immediately notify the national competent authorities of the Member States where it has made the system available and, where applicable, the notified body that has issued a certificate for the high-risk AI system, in particular of the non-compliance and of any corrective measures taken.

Article 23. Cooperation with competent authorities

Providers of high-risk AI systems shall, at the request of a national competent authority, provide that authority with all the information and documentation necessary to demonstrate the compliance of the high-risk AI system with the requirements of Chapter 2 of this section, in an official EU language determined by the Member State concerned. Upon a reasoned request from a national competent authority, providers must also provide that authority with access to the logs automatically generated by the high-risk AI system, to the extent that such logs are under their control by virtue of a contractual agreement with the user or otherwise by law.

Article 24. Obligations of product manufacturers

If a high-risk AI system related to products to which the legal acts listed in Annex II, Section A apply is placed on the market or put into service together with the product manufactured in accordance with those legal acts and under the name of the product manufacturer, the manufacturer of the product assumes responsibility for the AI system's compliance with this Regulation and, as far as the AI system is concerned, has the same obligations as this Regulation imposes on the provider.

Article 25. Authorized representatives

1. Before providers established outside the Union make their systems available on the EU market, where an importer cannot be identified, they must, by written power of attorney, appoint an authorized representative established in the Union.

2. The authorized representative must perform the tasks specified in the mandate received from the provider. The mandate authorizes the authorized representative to perform the following tasks:

a) keep a copy of the EU declaration of conformity and the technical documentation available to the national competent authorities and the national authorities referred to in Article 63(7);

b) provide a national competent authority, upon a reasoned request, with all the information and documentation necessary to demonstrate that a high-risk AI system complies with the requirements of Chapter 2 of this section, including access to the logs automatically generated by the high-risk AI system, to the extent such logs are under the control of the provider by virtue of a contractual agreement with the user or otherwise by law;

c) cooperate with competent national authorities following a reasoned request on any action taken by the latter in relation to the high-risk AI system.

Article 26. Obligations of importers

1. Before placing a high-risk AI system on the market, importers of such a system must ensure that:

a) the appropriate conformity assessment procedure has been carried out by the provider of the AI system in question

b) the provider has prepared the technical documentation in accordance with Annex IV

c) the system bears the required conformity marking and is accompanied by the required documentation and instructions for use.

2. If an importer finds or has reason to believe that a high-risk AI system is not in compliance with this Regulation, it must not place the system on the market until that AI system has been brought into compliance. If the high-risk AI system poses a risk according to Article 65, para. 1, the importer shall notify the provider of the AI system and the market surveillance authorities thereof.

3. Importers must indicate their name, registered business name or registered trademark and the address at which they can be contacted on the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, as appropriate.

4. Importers shall ensure that, while a high-risk AI system is under their responsibility, where applicable, the storage or transport conditions do not jeopardize its compliance with the requirements of Chapter 2 of this section.

5. Importers shall, upon reasoned request, provide the national competent authorities with all necessary information and documentation to demonstrate that a high-risk AI system complies with the requirements of Chapter 2 of this section, in a language that can be easily understood by the national competent authority, including access to the logs automatically generated by the high-risk AI system, to the extent that such logs are under the control of the provider by virtue of a contractual agreement with the user or otherwise by law. They must also cooperate with these authorities on any action taken by the national competent authority in relation to this system.

Article 27. Obligations of distributors

1. Before making a high-risk AI system available on the market, distributors must verify that the high-risk AI system bears the required CE conformity marking, that it is accompanied by the required documentation and instructions for use, and that the provider and importer of the system, as applicable, has fulfilled the obligations of this Regulation.

2. If a distributor finds or has reason to believe that a high-risk AI system does not comply with the requirements of Chapter 2 of this section, it must not make the high-risk AI system available on the market until the system has been brought into compliance with these requirements. If the system poses a risk according to Article 65, subsection 1, the distributor must also inform the provider or importer of the system, as applicable, about this.

3. Distributors must ensure that while a high-risk AI system is under their responsibility, where applicable, the storage or transport conditions do not jeopardize the system's compliance with the requirements of Chapter 2 of this section.

4. A distributor that believes or has reason to believe that a high-risk AI system that it has made available on the market does not comply with the requirements of Chapter 2 of this section shall take the necessary corrective measures to bring that system into compliance with these requirements, to withdraw it or recall it, or ensure that the provider, the importer or any relevant operator, as appropriate, takes these corrective measures. If the high-risk AI system poses a risk according to Article 65, para. 1, the distributor shall immediately inform the national competent authorities of the Member States in which it has made the system available, giving details, in particular, of the non-compliance and of any corrective measures taken.

5. Upon a reasoned request from a national competent authority, distributors of high-risk AI systems shall provide that authority with all the information and documentation necessary to demonstrate that a high-risk system complies with the requirements of Chapter 2 of this section. Distributors must also cooperate with the national competent authority on any action taken by that authority.

Article 28. Obligations of distributors, importers, users or any other third party

1. Any distributor, importer, user or other third party shall be considered a provider for the purposes of this Regulation and shall be subject to the obligations of the provider under Article 16 in any of the following circumstances:

a) they place on the market or put into service a high-risk AI system under their name or trademark

(b) they change the intended purpose of a high-risk AI system that has already been marketed or put into use;

(c) they make a material change to the high-risk AI system.

2. If the circumstances mentioned in subsection 1, letter b) or c), occur, the provider that originally placed the high-risk AI system on the market or put it into service shall no longer be considered a provider for the purposes of this Regulation.

Article 29. Obligations for users of high-risk AI systems

1. Users of high-risk AI systems must use such systems in accordance with the instructions for use provided with the systems, pursuant to subsection 2 and 5.

2. The obligations in subsection 1 do not affect other user obligations under EU or national law or the discretion of the user in organizing its own resources and activities for the purpose of implementing the human oversight measures indicated by the provider.

3. Without prejudice to subsection 1, the user must, to the extent that the user exercises control over the input data, ensure that the input data is relevant in light of the intended purpose of the high-risk AI system.

4. Users must monitor the operation of the high-risk AI system on the basis of the instructions for use. When they have reason to believe that use in accordance with the instructions for use may result in the AI system posing a risk in accordance with Article 65, para. 1, they must inform the provider or distributor and suspend the use of the system. They must also inform the provider or distributor when they have identified a serious incident or any malfunction in accordance with Article 62 and suspend the use of the AI system. In the event that the user is unable to reach the provider, Article 62 applies accordingly.

For users that are credit institutions regulated by Directive 2013/36/EU, the monitoring obligation in the first paragraph is deemed to be fulfilled by complying with the rules on internal management arrangements, processes and mechanisms pursuant to Article 74 of the said Directive.

5. Users of high-risk AI systems must retain the logs automatically generated by that high-risk AI system to the extent such logs are under their control. The logs must be kept for a period of time that is appropriate in light of the intended purpose of the high-risk AI system and applicable legal obligations under EU or national law.

Users who are credit institutions regulated by Directive 2013/36/EU must keep the logs as part of the documentation relating to internal management arrangements, processes and mechanisms pursuant to Article 74 of said Directive.

6. Users of high-risk AI systems must use the information in Article 13 to comply with their obligation to carry out a data protection impact assessment under Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, where it is relevant.

Chapter 4. NOTIFYING AUTHORITIES AND NOTIFYING BODIES

Article 30. Notifying authorities

1. Each Member State shall designate or establish a notifying authority responsible for establishing and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring.

2. Member States may designate a national accreditation body as referred to in Regulation (EC) No. 765/2008 as the notifying authority.

3. Notifying authorities must be established, organized and operated in such a way that no conflicts of interest with conformity assessment bodies arise and that the objectivity and impartiality of their activities are ensured.

4. Notifying authorities must be organized in such a way that decisions regarding the notification of conformity assessment bodies are made by competent persons other than those who have carried out the assessment of these bodies.

5. Notifying authorities may not offer or provide activities carried out by conformity assessment bodies or any consultancy services on a commercial or competitive basis.

6. Notifying authorities must ensure the confidentiality of the information they obtain.

7. Notifying authorities must have a sufficient number of competent personnel at their disposal to be able to carry out their tasks correctly.

8. Notifying authorities must ensure that conformity assessments are carried out in a proportionate manner, so as to avoid unnecessary burdens on providers, and that notified bodies carry out their activities with due regard to the size of an enterprise, the sector in which it operates, its structure and the degree of complexity of the AI system in question.

Article 31. Application by a conformity assessment body for notification

1. Conformity assessment bodies submit an application for notification to the notifying authority of the Member State in which they are established.

2. The application for notification must be accompanied by a description of the conformity assessment activities, the conformity assessment module or modules and the artificial intelligence technologies for which the conformity assessment body claims to be competent, as well as by an accreditation certificate, where one exists, issued by a national accreditation body certifying that the conformity assessment body meets the requirements of Article 33. Any valid document relating to existing designations of the applicant notified body under any other EU harmonization legislation must be added.

3. If the conformity assessment body concerned cannot produce an accreditation certificate, it shall provide the notifying authority with the necessary documentation for verification, recognition and regular monitoring of its compliance with the requirements of Article 33. For notified bodies designated under any other EU harmonization legislation, all documents and certificates associated with those designations may be used to support their designation procedure under this Regulation, as appropriate.

Article 32. Notification procedure

1. Notifying authorities can only notify conformity assessment bodies that have met the requirements of Article 33.

2. Notifying authorities notify the Commission and the other Member States using the electronic notification tool developed and managed by the Commission.

3. The notification shall include all details of the conformity assessment activities, the conformity assessment module or modules and the artificial intelligence technologies concerned.

4. The conformity assessment body concerned may only carry out the activities of a notified body if the Commission or the other Member States do not object within one month of notification.

5. Notifying authorities inform the Commission and the other Member States of any subsequent relevant changes to the notification.

Article 33. Authorized bodies

1. Authorized bodies shall verify the conformity of high-risk AI systems in accordance with the conformity assessment procedures in Article 43.

2. Authorized bodies must meet the requirements for organization, quality management, resources and process necessary to carry out their tasks.

3. Notified bodies' organizational structure, division of responsibilities, reporting lines and operation must ensure that there is confidence in the performance and results of the conformity assessment activities that they carry out.

4. Notified bodies must be independent from the provider of a high-risk AI system in relation to which it performs conformity assessment activities. Notified bodies must also be independent from any other operator that has a financial interest in the high-risk AI system being assessed, as well as from any competitors of the provider.

5. Notified bodies must be organized and run in such a way that the independence, objectivity and impartiality of their activities are ensured. Notified bodies must document and implement a structure and procedures to ensure impartiality and to promote and apply the principles of impartiality throughout their organization, staff and assessment activities.

6. Notified bodies must have documented procedures in place ensuring that their staff, committees, subsidiaries, subcontractors and any associated bodies or staff from external bodies respect the confidentiality of the information they hold during the performance of conformity assessment activities, except when disclosure is required by law. The staff of notified bodies are obliged to observe the duty of confidentiality with respect to all information obtained in the performance of their tasks under this Regulation, except in relation to the notifying authorities of the Member State in which their activities are carried out.

7. Notified bodies must have procedures for carrying out activities that take due account of a company's size, the sector it operates in, its structure and the complexity of the AI ​​system in question.

8. Notified bodies must take out appropriate liability insurance for their conformity assessment activities, unless the Member State concerned assumes responsibility in accordance with national law or the Member State concerned is directly responsible for the conformity assessment.

9. Notified bodies must be able to carry out all the tasks assigned to them under this Regulation with the highest degree of professional integrity and the necessary competence in the specific field, regardless of whether these tasks are carried out by the notified bodies themselves or on their behalf and under their responsibility.

10. Notified bodies must have sufficient internal competence to be able to effectively evaluate the tasks carried out by external parties on their behalf. To this end, the notified body must at all times, and for each conformity assessment procedure and each type of high-risk AI system for which it is designated, have permanent access to sufficient administrative, technical and scientific staff who possess experience and knowledge of the relevant artificial intelligence technologies, data and data processing and of the requirements set out in Chapter 2 of this section.

11. Notified bodies shall participate in coordination activities referred to in Article 38. They shall also participate directly or be represented in European standardization organizations or ensure that they are aware of and up-to-date with respect to relevant standards.

12. Notified bodies shall make available and, upon request, submit all relevant documentation, including the providers' documentation, to the notifying authority referred to in Article 30 to enable it to carry out its assessment, designation, notification, monitoring and surveillance activities and to facilitate the assessment outlined in this chapter.

Article 34. Subsidiaries of and subcontracting from authorized bodies

1. If an authorized body subcontracts specific tasks in connection with the conformity assessment or uses a subsidiary, it ensures that the subcontractor or subsidiary meets the requirements of Article 33 and informs the authorizing authority accordingly.

2. Authorized bodies assume full responsibility for the tasks carried out by subcontractors or subsidiaries, regardless of where these are established.

3. Activities may only be subcontracted or carried out by a subsidiary in agreement with the provider.

4. Authorized bodies must keep the relevant documents relating to the assessment of the subcontractor's or subsidiary's qualifications and the work they carry out under this Regulation available to the authorizing authority.

Article 35. Identification numbers and lists of authorized bodies appointed under this regulation

1. The Commission assigns authorized bodies an identification number. It assigns a single number even when a body is notified under several EU acts.

2. The Commission shall publish the list of the bodies notified under this Regulation, including the identification numbers they have been assigned and the activities for which they have been notified. The Commission ensures that the list is kept up to date.

Article 36. Changes to notifications

1. If a notifying authority suspects or has been informed that a notified body no longer meets the requirements of Article 33 or that it is failing to fulfil its obligations, that authority shall immediately investigate the matter with the utmost care. In this connection, it shall inform the notified body concerned of the objections raised and give it the opportunity to express its views. If the notifying authority concludes that the notified body no longer meets the requirements of Article 33 or that it is failing to fulfil its obligations, it must restrict, suspend or withdraw the notification, depending on the seriousness of the failure. It shall also immediately inform the Commission and the other Member States thereof.

2. In the event of restriction, suspension or withdrawal of a notification, or if the notified body has ceased its activity, the notifying authority shall take appropriate measures to ensure that the files of the notified body concerned are either taken over by another notified body or kept accessible to the responsible notifying authorities at their request.

Article 37. Challenging the competence of authorized bodies

1. The Commission investigates, if necessary, all cases where there is reason to doubt whether an authorized body meets the requirements of Article 33.

2. The authorizing authority shall, upon request, provide the Commission with all relevant information regarding the notification of the relevant authorized body.

3. The Commission shall ensure that all confidential information obtained in the course of its investigations under this Article is treated confidentially.

4. If the Commission finds that an authorized body does not meet or no longer meets the requirements of Article 33, it shall adopt a reasoned decision requesting the notifying Member State to take the necessary corrective measures, including, if necessary, withdrawal of the notification. This implementing act is adopted following the examination procedure in Article 74(2).

Article 38. Coordination of authorized bodies

1. The Commission shall ensure, as regards the areas covered by this Regulation, that appropriate coordination and cooperation between notified bodies active in the conformity assessment procedures for AI systems under this Regulation are established and properly operated in the form of a sectoral group of authorized bodies.

2. Member States shall ensure that the bodies they have notified participate in the work of this group, directly or by means of designated representatives.

Article 39. Conformity assessment bodies of third countries

Conformity assessment bodies established under the law of a third country with which the Union has concluded an agreement may be authorized to carry out the activities of notified bodies under this Regulation.

Chapter 5. STANDARDS, CONFORMITY ASSESSMENT, CERTIFICATES, REGISTRATION

Article 40. Harmonized standards

High-risk AI systems which comply with harmonized standards or parts thereof, the references of which have been published in the Official Journal of the European Union, are presumed to comply with the requirements of Chapter 2 of this section, to the extent that those standards cover those requirements.

Article 41. Common specifications

1. Where there are no harmonized standards as referred to in Article 40, or where the Commission considers that the relevant harmonized standards are insufficient or that there is a need to address specific safety or fundamental rights concerns, the Commission may, by means of implementing acts, adopt common specifications with regard to the requirements of Chapter 2 of this section. These implementing acts are adopted following the examination procedure in Article 74(2).

2. When the Commission prepares the common specifications referred to in subsection 1, it collects the views of relevant bodies or expert groups established under relevant sector-specific EU legislation.

3. High-risk AI systems that comply with the common specifications referred to in paragraph 1 are presumed to be in accordance with the requirements of Chapter 2 of this section, to the extent that these common specifications cover those requirements.

4. If providers do not comply with the common specifications referred to in paragraph 1, they must duly justify that they have adopted technical solutions that are at least equivalent thereto.

Article 42. Presumption of compliance with certain requirements

1. Taking into account their intended purpose, high-risk AI systems that have been trained and tested on data relating to the specific geographical, behavioral and functional contexts within which they are intended to be used shall be presumed to comply with the requirement set out in Article 10(4).

2. High-risk AI systems that are certified or for which a declaration of conformity has been issued under a cybersecurity scheme pursuant to Regulation (EU) 2019/881 of the European Parliament and of the Council, and whose references have been published in the Official Journal of the European Union, are presumed to comply with the cybersecurity requirements of Article 15 of this Regulation, to the extent that the cybersecurity certificate or declaration of conformity, or parts thereof, cover those requirements.

Article 43. Conformity assessment

1. For high-risk AI systems listed in point 1 of Annex III, where the provider, in demonstrating that a high-risk AI system complies with the requirements of Chapter 2 of this section, has applied harmonized standards referred to in Article 40 or, where applicable, common specifications referred to in Article 41, the provider must follow one of the following procedures:

(a) the conformity assessment procedure based on internal control as referred to in Annex VI

b) the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation with the involvement of a notified body referred to in Annex VII.

Where the provider, in demonstrating the compliance of a high-risk AI system with the requirements of Chapter 2 of this section, has not applied or has applied only in part harmonized standards as referred to in Article 40, or where such harmonized standards do not exist and common specifications referred to in Article 41 are not available, the provider must follow the conformity assessment procedure in Annex VII.

For the purpose of the conformity assessment procedure referred to in Annex VII, the provider may choose any of the notified bodies. However, when the system is intended to be used by law enforcement authorities, immigration or asylum authorities and EU institutions, bodies or agencies, the market surveillance authority referred to in Article 63(1) shall act as a notified body.

2. For high-risk AI systems as referred to in points 2-8 of Annex III, providers shall follow the conformity assessment procedure based on internal control as referred to in Annex VI, which does not prescribe the involvement of a notified body. For high-risk AI systems as referred to in point 5(b) of Annex III, placed on the market or put into service by credit institutions regulated by Directive 2013/36/EU, the conformity assessment shall be carried out as part of the procedure referred to in Articles 97 to 101 of that Directive.

3. For high-risk AI systems to which the legal acts listed in Annex II, Section A apply, the provider shall follow the relevant conformity assessment as required by those legal acts. The requirements set out in Chapter 2 of this section apply to these high-risk AI systems and must be part of that assessment. Points 4.3, 4.4, 4.5 and the fifth paragraph of point 4.6 of Annex VII also apply.

For the purposes of this assessment, notified bodies notified under those legal acts shall be entitled to check whether the high-risk AI systems comply with the requirements of Chapter 2 of this section, provided that the compliance of those notified bodies with the requirements laid down in Article 33(4), (9) and (10) has been assessed in connection with the notification procedure under those acts.

If the legal acts listed in Annex II, Section A allow the manufacturer of the product to opt out of a third-party conformity assessment, provided that this manufacturer has applied all harmonized standards covering all the relevant requirements, that manufacturer may make use of this option only if it has also applied harmonized standards or, where applicable, common specifications referred to in Article 41 covering the requirements of Chapter 2 of this section.

4. High-risk AI systems must undergo a new conformity assessment procedure when they are significantly modified, regardless of whether the modified system is intended to be further distributed or continues to be used by the current user.

For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to the high-risk AI system and its performance that have been predetermined by the provider at the time of the initial conformity assessment and are part of the information in the technical documentation referred to in point 2(f) of Annex IV do not constitute a significant change.

5. The Commission is empowered to adopt delegated acts in accordance with Article 73 in order to update Annex VI and Annex VII to introduce elements of the conformity assessment procedures which become necessary in the light of technical developments.

6. The Commission is empowered to adopt delegated acts to amend paragraphs 1 and 2 in order to subject high-risk AI systems referred to in points 2 to 8 of Annex III to the conformity assessment procedure referred to in Annex VII, or parts thereof. The Commission shall adopt such delegated acts taking into account the effectiveness of the conformity assessment procedure based on internal control referred to in Annex VI in preventing or minimizing the risks to health and safety and the protection of fundamental rights posed by such systems, as well as the availability of sufficient capacities and resources among notified bodies.

Article 44. Certificates

1. Certificates issued by notified bodies in accordance with Annex VII must be drawn up in an official EU language determined by the Member State in which the notified body is established, or in an official EU language otherwise acceptable to the notified body.

2. Certificates are valid for the period they indicate, which may not exceed five years. At the request of the provider, the validity of a certificate may be extended for additional periods, each not exceeding five years, based on a reassessment in accordance with the applicable conformity assessment procedures.
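
Paragraph 2 imposes two conditions at once: the certificate must be unexpired, and each validity period may not exceed five years from issuance. A minimal sketch of that rule, with all dates invented:

```python
from datetime import date

MAX_VALIDITY_YEARS = 5  # each validity period may not exceed five years

def add_years(d: date, years: int) -> date:
    try:
        return d.replace(year=d.year + years)
    except ValueError:  # 29 February in a non-leap target year
        return d.replace(year=d.year + years, day=28)

def certificate_valid(issued: date, expires: date, today: date) -> bool:
    """Valid only if unexpired and within the five-year ceiling from issuance."""
    return today <= expires and expires <= add_years(issued, MAX_VALIDITY_YEARS)

print(certificate_valid(date(2022, 6, 1), date(2027, 6, 1), date(2023, 1, 1)))  # True
print(certificate_valid(date(2022, 6, 1), date(2028, 6, 1), date(2023, 1, 1)))  # False: period too long
```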

3. If an authorized body finds that an AI system no longer meets the requirements of Chapter 2 of this section, it shall, taking into account the principle of proportionality, suspend or revoke the certificate issued or impose any restrictions on it, unless compliance with these requirements is ensured by appropriate corrective measures taken by the provider of the system within an appropriate time limit set by the authorized body. The authorized body must justify its decision.

Article 45. Appeal against decisions made by authorized bodies

Member States shall ensure that an appeal procedure against decisions taken by the notified bodies is available to parties with a legitimate interest in that decision.

Article 46. Information obligations for authorized bodies

1. Authorized bodies notify the authorizing authority of the following:

a) all EU technical documentation assessment certificates, any supplements to these certificates, quality management system approvals issued in accordance with the requirements of Annex VII

(b) any refusal, restriction, suspension or withdrawal of an EU technical documentation assessment certificate or a quality management system approval issued in accordance with the requirements of Annex VII;

(c) any circumstances affecting the scope or terms of notification;

d)any request for information that they have received from market surveillance authorities regarding conformity assessment activities

e) upon request, conformity assessment activities carried out within the framework of their notification and any other activity carried out, including cross-border activities and subcontractors.

2. Each authorized body must inform the other authorized bodies of:

(a)quality management system approvals which it has refused, suspended or withdrawn and, on request, of quality management system approvals which it has issued;

b) EU technical documentation assessment certificates or any supplements thereto which it has refused, withdrawn, suspended or otherwise restricted and, upon request, of the certificates and/or supplements thereto which it has issued.

3. Each notified body shall provide the other notified bodies performing similar conformity assessment activities covering the same artificial intelligence technologies with relevant information on issues related to negative and, upon request, positive conformity assessment results.

Article 47. Exemption from the conformity assessment procedure

1. By way of derogation from Article 43, any market surveillance authority may authorize the placing on the market or putting into service of specific high-risk AI systems on the territory of the Member State concerned, for exceptional reasons of public safety or the protection of the life and health of persons, environmental protection and the protection of key industrial and infrastructural assets. This authorization is valid for a limited period while the necessary conformity assessment procedures are carried out and ceases when those procedures have been completed. The completion of those procedures must take place without undue delay.

2. The authorization referred to in paragraph 1 is only issued if the market surveillance authority concludes that the high-risk AI system meets the requirements of Chapter 2 of this section. The market surveillance authority shall notify the Commission and the other Member States of any authorization issued pursuant to paragraph 1.

3. If, within 15 calendar days of receipt of the information referred to in paragraph 2, no objection has been raised by either a Member State or the Commission in respect of an authorization issued by a market surveillance authority of a Member State in accordance with paragraph 1, that authorization is considered justified.

4. If a Member State, within 15 calendar days of receipt of the notification referred to in paragraph 2, objects to an authorization issued by a market surveillance authority of another Member State, or if the Commission considers the authorization to be contrary to EU law or the Member States' conclusion regarding the compliance of the system as referred to in paragraph 2 to be unfounded, the Commission shall immediately initiate consultations with the relevant Member State; the operator(s) concerned shall be consulted and have the opportunity to present their views. In light of this, the Commission decides whether the authorization is justified or not. The Commission addresses its decision to the Member State concerned and the relevant operator(s).

5. If the approval is considered unjustified, it will be withdrawn by the market surveillance authority of the Member State concerned.

6. By way of derogation from paragraphs 1 to 5, for high-risk AI systems intended to be used as safety components of devices, or which are themselves devices, covered by Regulation (EU) 2017/745 and Regulation (EU) 2017/746, Article 59 of Regulation (EU) 2017/745 and Article 54 of Regulation (EU) 2017/746 also apply with regard to the exemption from the conformity assessment of compliance with the requirements of Chapter 2 of this section.

Article 48. EU Declaration of Conformity

1. The provider must prepare a written EU declaration of conformity for each AI system and keep it available to the national competent authorities for 10 years after the AI ​​system has been placed on the market or put into use. The EU declaration of conformity must identify the AI ​​system for which it is drawn up. A copy of the EU declaration of conformity is provided to the relevant national competent authorities upon request.

2. The EU declaration of conformity must state that the high-risk AI system in question meets the requirements of Chapter 2 of this section. The EU declaration of conformity shall contain the information in Annex V and shall be translated into one or more official EU languages ​​required by the Member State(s) where the high-risk AI system is made available.

3. If high-risk AI systems are subject to other EU harmonization legislation that also requires an EU declaration of conformity, a single EU declaration of conformity must be drawn up covering all EU legislation applicable to the high-risk AI system. The declaration must contain all the information required to identify the EU harmonization legislation to which the declaration relates.

4. By preparing the EU declaration of conformity, the provider assumes responsibility for compliance with the requirements in chapter 2 of this section. The provider must keep the EU declaration of conformity up to date, if applicable.

5. The Commission is empowered to adopt delegated acts in accordance with Article 73 in order to update the content of the EU declaration of conformity in Annex V in order to introduce elements that become necessary in the light of technical developments.
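
The concrete content of the declaration is listed in Annex V, which is not reproduced in this section; the record below is therefore only an assumed, simplified shape illustrating how a provider might keep the declaration identifiable and available for the ten-year period of paragraph 1. All field names and values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class EUDeclarationOfConformity:
    # Simplified, hypothetical fields; the authoritative list is Annex V.
    ai_system_name: str
    ai_system_version: str
    provider_name: str
    standards_applied: list = field(default_factory=list)
    place_and_date: str = ""
    signatory: str = ""

decl = EUDeclarationOfConformity(
    ai_system_name="ExampleRiskScorer",  # invented system name
    ai_system_version="2.1.0",
    provider_name="Example Provider GmbH",
    standards_applied=["EN ISO/IEC 0000:2021 (placeholder)"],
    place_and_date="Brussels, 2023-01-15",
    signatory="Jane Doe, Head of Compliance",
)
print(decl.ai_system_name)  # identifies the system the declaration covers
```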

Article 49. CE conformity marking

1. The CE marking must be affixed visibly, legibly and indelibly for high-risk AI systems. Where this is not possible or not justified due to the nature of the high-risk AI system, it must be placed on the packaging or on the accompanying documentation, as appropriate.

2. The CE marking referred to in subsection 1 of this article is subject to the general principles of Article 30 of Regulation (EC) No. 765/2008.

3. If applicable, the CE marking is followed by the identification number of the notified body responsible for the conformity assessment procedures in Article 43. The identification number must also be indicated in any advertising material mentioning that the high-risk AI system meets the requirements for CE marking.

Article 50. Storage of documents

The provider must, for a period of 10 years after the AI ​​system has been marketed or taken into use, make the following available to the national competent authorities:

a) the technical documentation referred to in Article 11

b) the documentation regarding the quality management system, cf. Article 17

c) the documentation relating to the changes approved by authorized bodies, where relevant

d) the decisions and other documents issued by the authorized bodies, where relevant

e) the EU declaration of conformity referred to in Article 48.

Article 51. Registration

Before placing on the market or putting into service a high-risk AI system as referred to in Article 6(2), the provider or, where applicable, the authorized representative shall register that system in the EU database referred to in Article 60.

SECTION IV. TRANSPARENCY OBLIGATIONS FOR CERTAIN AI SYSTEMS

Article 52. Transparency Obligations for Certain AI Systems

1. Providers must ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and context of use. This obligation does not apply to AI systems authorized by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available to the public to report a criminal offence.

2. Users of an emotion recognition system or a biometric categorization system must inform the natural persons exposed to the system about the operation of the system. This obligation does not apply to AI systems used for biometric categorization and which are permitted by law to detect, prevent and investigate criminal offences.

3. Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful ("deep fake") must disclose that the content has been artificially generated or manipulated.

However, the first paragraph does not apply where the use is authorized by law to detect, prevent, investigate and prosecute criminal offences, or where it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the EU Charter of Fundamental Rights, and subject to appropriate safeguards for the rights and freedoms of third parties.

4. Paragraphs 1, 2 and 3 do not affect the requirements and obligations in Section III of this Regulation.

Section V. MEASURES TO SUPPORT INNOVATION

Article 53. AI regulatory sandboxes

1. AI regulatory sandboxes established by the competent authorities of one or more Member States or the European Data Protection Supervisor shall provide a controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited period, before they are placed on the market or put into use according to a specific plan. This shall take place under the direct supervision and guidance of the competent authorities in order to ensure compliance with the requirements of this Regulation and, where applicable, other EU and Member State legislation monitored in the sandbox.

2. Member States shall ensure that, to the extent the innovative AI systems involve the processing of personal data or otherwise fall under the supervisory powers of other national authorities or competent authorities that provide or support access to data, the national data protection authorities and those other national authorities are associated with the operation of the AI regulatory sandbox.

3. AI regulatory sandboxes must not affect the supervisory and corrective powers of the competent authorities. Any significant risks to health and safety and fundamental rights identified during the development and testing of such systems shall result in immediate mitigation and, failing that, in the suspension of the development and testing process until such mitigation takes place.

4. Participants in the AI regulatory sandbox remain liable under applicable EU and Member State liability laws for any damage caused to third parties as a result of the experiments taking place in the sandbox.

5. The competent authorities of the Member States that have established AI regulatory sandboxes must coordinate their activities and cooperate within the framework of the European Artificial Intelligence Board. They shall submit annual reports to the Management Board and the Commission on the results of the implementation of these schemes, including good practices, experiences and recommendations on their construction and, where relevant, on the application of this Regulation and other EU legislation being monitored within the framework of the sandbox.

6. The modalities and conditions for the operation of AI regulatory sandboxes, including the eligibility criteria and the procedure for application, selection, participation and withdrawal from the sandbox, and the rights and obligations of the participants, shall be set out in implementing acts. These implementing acts are adopted following the examination procedure in Article 74(2).

Article 54. Further processing of personal data for the development of certain AI systems in the public interest in the AI regulatory sandbox

1. In the AI regulatory sandbox, personal data lawfully collected for other purposes shall be processed for the purpose of developing and testing certain innovative AI systems in the sandbox under the following conditions:

a) the innovative AI systems must be developed to serve significant public interest in one or more of the following areas:

i) prevention, investigation, detection or prosecution of criminal offenses or enforcement of criminal sanctions, including protection against and prevention of threats to public safety, under the control and responsibility of the competent authorities. The processing must be based on Member State or EU law;

ii) public safety and public health, including disease prevention, control and treatment;

(iii) a high level of protection and improvement of environmental quality;

b) the data processed is necessary to meet one or more of the requirements of Title III, Chapter 2, where those requirements cannot be effectively met by processing anonymized, synthetic or other non-personal data

(c) there are effective monitoring mechanisms to identify whether high risks to data subjects' fundamental rights may arise during the sandbox experiment, as well as a response mechanism to promptly mitigate those risks and, where necessary, stop processing;

(d) all personal data to be processed in connection with the sandbox is in a functionally separate, isolated and protected data processing environment under the control of the participants and only authorized persons have access to that data;

(e) any processed personal data is not transmitted, transferred or otherwise accessed by other parties;

(f) any processing of personal data in connection with the sandbox does not lead to measures or decisions affecting data subjects;

(g) all personal data processed in connection with the sandbox will be deleted when participation in the sandbox has ended or the personal data has reached the end of its retention period;

(h) the logs of the processing of personal data in connection with the sandbox are kept for the duration of the participation in the sandbox and for 1 year after its termination, solely for the purpose of, and only as long as necessary for, fulfilling the accountability and documentation obligations under this Article or other applicable EU or Member State law;

(i) a full and detailed description of the process and rationale behind the training, testing and validation of the AI ​​system is kept together with the test results as part of the technical documentation in Annex IV

j) a brief summary of the AI project developed in the sandbox, its objectives and expected results is published on the website of the competent authorities.

2. Paragraph 1 does not affect EU or Member State legislation that excludes processing for purposes other than those expressly mentioned in the legislation in question.
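
Points (g) and (h) of paragraph 1 impose two distinct clocks: personal data is deleted once sandbox participation ends (or its retention period expires), while processing logs are kept for one further year for accountability. A sketch of those two rules, with the dates invented and the year approximated as 365 days:

```python
from datetime import date, timedelta

LOG_GRACE = timedelta(days=365)  # point (h): logs kept 1 year after exit

def personal_data_must_be_deleted(participation_end: date, today: date) -> bool:
    """Point (g): personal data is deleted once sandbox participation ends."""
    return today >= participation_end

def logs_must_be_deleted(participation_end: date, today: date) -> bool:
    """Point (h): processing logs survive one further year for accountability."""
    return today >= participation_end + LOG_GRACE

end = date(2023, 6, 30)
print(personal_data_must_be_deleted(end, date(2023, 7, 1)))  # True
print(logs_must_be_deleted(end, date(2023, 7, 1)))           # False: within the grace year
```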

Article 55. Measures for small providers and users

1. Member States shall take the following measures:

a) give small providers and start-ups priority access to the AI regulatory sandboxes to the extent that they meet the eligibility conditions

b) organize specific awareness-raising activities on the application of this Regulation tailored to the needs of the smaller providers and users

c) where appropriate, establish a dedicated channel for communication with small providers and users and other innovators to provide guidance and answer queries on the implementation of this Regulation.

2. The specific interests and needs of small providers shall be taken into account when determining the fees for conformity assessment pursuant to Article 43, these fees being reduced in proportion to their size and market size.

SECTION VI. CONTROL

Chapter 1. European Artificial Intelligence Board

Article 56. Establishment of the European Artificial Intelligence Board

1. A 'European Artificial Intelligence Board' ('Board') is established.

2. The board provides advice and assistance to the Commission with a view to:

(a) contribute to effective cooperation between the national supervisory authorities and the Commission with regard to matters covered by this Regulation;

(b) coordinate and contribute to guidance and analysis by the Commission and the national supervisory authorities and other competent authorities on emerging issues across the internal market with regard to matters covered by this Regulation;

(c) assist the national supervisory authorities and the Commission in ensuring the consistent application of this Regulation.

Article 57. Structure of the Board

1. The Board consists of the national supervisory authorities, which are represented by the head or an equivalent senior official of that authority, and the European Data Protection Supervisor. Other national authorities may be invited to the meetings where the issues discussed are relevant to them.

2. The board adopts its rules of procedure by a simple majority among its members after approval by the Commission. The rules of procedure shall also contain the operational aspects related to the performance of the Board's duties as stated in Article 58. The Board may set up sub-groups as necessary to examine specific issues.

3. The chairmanship of the board is handled by the Commission. The Commission convenes the meetings and prepares the agenda in accordance with the board's tasks under this regulation and its rules of procedure. The Commission shall provide administrative and analytical support to the Board's activities pursuant to this Regulation.

4. The board may invite external experts and observers to participate in its meetings and may hold exchanges with interested third parties to inform about its activities to an appropriate extent. To this end, the Commission may facilitate exchanges between the Board and other EU bodies, offices, agencies and advisory groups.

Article 58. Tasks of the Board

When the board provides advice and assistance to the Commission in connection with Article 56, subsection 2, it must in particular:

a) gather and share expertise and best practices among Member States

(b) contribute to uniform administrative practices in the Member States, including the operation of regulatory sandboxes as referred to in Article 53;

c) issue opinions, recommendations or written contributions on matters related to the implementation of this Regulation, in particular:

i) on technical specifications or existing standards relating to the requirements of Title III, Chapter 2,

ii) on the use of harmonized standards or common specifications as referred to in Articles 40 and 41

iii) on the preparation of guidance documents, including the guidelines for imposing administrative fines as referred to in Article 71.

Chapter 2. NATIONAL COMPETENT AUTHORITIES

Article 59. Designation of national competent authorities

1. National competent authorities are established or designated by each Member State for the purpose of ensuring the application and implementation of this Regulation. The national competent authorities must be organized in such a way as to ensure the objectivity and impartiality of their activities and tasks.

2. Each Member State appoints a national supervisory authority from among the national competent authorities. The national supervisory authority acts as the notifying authority and the market surveillance authority, unless a Member State has organizational and administrative reasons to designate more than one authority.

3. Member States shall notify the Commission of their designation or designations and, where relevant, the reasons for designating more than one authority.

4. Member States shall ensure that the national competent authorities are provided with sufficient financial and human resources to carry out their tasks under this Regulation. In particular, national competent authorities must have a sufficient number of staff permanently available whose skills and expertise include an in-depth understanding of artificial intelligence technologies, data and data processing, fundamental rights, health and safety risks, and knowledge of existing standards and legal requirements.

5. The Member States submit an annual report to the Commission on the status of the national competent authorities' financial and human resources, with an assessment of their adequacy. The Commission sends this information to the Board for discussion and possible recommendations.

6. The Commission facilitates the exchange of experience between national competent authorities.

7. National competent authorities may provide guidance and advice on the implementation of this Regulation, including to small providers. Where national competent authorities intend to provide guidance and advice regarding an AI system in areas covered by other EU legislation, the national competent authorities under that EU legislation shall be consulted, as appropriate. Member States may also establish one central contact point for communication with operators.

8. Where EU institutions, agencies and bodies fall within the scope of this Regulation, the European Data Protection Supervisor acts as the competent authority for their supervision.

SECTION VII. EU DATABASE FOR STAND-ALONE HIGH-RISK AI SYSTEMS

Article 60. EU database for stand-alone high-risk AI systems

1. The Commission shall, in cooperation with the Member States, set up and maintain an EU database containing the information referred to in paragraph 2 on high-risk AI systems referred to in Article 6(2) which are registered in accordance with Article 51.

2. The data listed in Annex VIII must be entered into the EU database by the providers. The Commission provides them with technical and administrative support.

3. Information in the EU database must be available to the public.

4. The EU database must only contain personal data to the extent that it is necessary for the collection and processing of information in accordance with this regulation. This information must include the names and contact details of natural persons who are responsible for registering the system and have the legal authority to represent the provider.

5. The Commission is the data controller for the EU database. It must also ensure adequate technical and administrative support for the providers.
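
To illustrate the registration duty in paragraphs 2 and 4, the following minimal Python sketch models a hypothetical database record. The field names are illustrative assumptions only; the authoritative list of data items is the one laid down in Annex VIII.

```python
from dataclasses import dataclass, asdict

@dataclass
class HighRiskSystemRegistration:
    # Hypothetical record for the EU database (Article 60); the real set of
    # data items is defined in Annex VIII, not here.
    provider_name: str
    provider_address: str
    system_trade_name: str
    intended_purpose: str
    # Paragraph 4: names and contact details of the natural persons
    # responsible for registering the system on behalf of the provider.
    registrant_name: str
    registrant_email: str

    def validate(self) -> None:
        # The provider enters the data itself (paragraph 2), so reject
        # submissions with empty fields.
        for field_name, value in asdict(self).items():
            if not value:
                raise ValueError(f"missing registration field: {field_name}")

record = HighRiskSystemRegistration(
    provider_name="Example Provider Ltd",
    provider_address="1 Example Street, Brussels",
    system_trade_name="ExampleScreen 1.0",
    intended_purpose="CV screening for recruitment (Annex III, point 4(a))",
    registrant_name="Jane Doe",
    registrant_email="jane.doe@example.com",
)
record.validate()  # raises ValueError if any required field is empty
```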

SECTION VIII. POST-MARKET MONITORING, INFORMATION SHARING, MARKET MONITORING

Chapter 1. POST-MARKET SURVEILLANCE

Article 61. Providers' post-market surveillance and post-market surveillance plan for high-risk AI systems

1. Providers must establish and document a post-market surveillance system in a manner commensurate with the nature of artificial intelligence technologies and the risks of the high-risk AI system.

2. The post-market surveillance system shall actively and systematically collect, document and analyze relevant data provided by users or collected through other sources on the performance of high-risk AI systems throughout their lifetime and allow the provider to evaluate the continuous compliance of AI systems with the requirements specified in Section III, Chapter 2.

3. The post-market surveillance system must be based on a post-market surveillance plan. The post-market surveillance plan must be part of the technical documentation referred to in Annex IV. The Commission shall adopt an implementing act with detailed provisions on a template for the post-market surveillance plan and the list of elements to be included in the plan.

4. For high-risk AI systems covered by the legal acts referred to in Annex II, where a post-market surveillance system and plan have already been established under that legislation, the elements described in paragraphs 1, 2 and 3 shall be integrated into that system and plan as appropriate.

The first paragraph also applies to high-risk AI systems as referred to in point 5, letter b) of Annex III, which are marketed or taken into use by credit institutions regulated by Directive 2013/36/EU.

Chapter 2. SHARING INFORMATION ABOUT EVENTS AND ERRORS

Article 62. Reporting of serious incidents and malfunctions

1. Providers of high-risk AI systems placed on the EU market must report any serious incident or any malfunction of those systems which constitutes a breach of obligations under EU law aimed at protecting fundamental rights, to the market surveillance authorities of the Member States where the incident or breach occurred.

Such notification shall be made immediately after the provider has established a causal link between the AI system and the incident or malfunction, or the reasonable likelihood of such a link, and in any event no later than 15 days after the provider becomes aware of the serious incident or malfunction.

2. Upon receiving a notification related to a breach of obligations under EU law intended to protect fundamental rights, the market surveillance authority shall inform the national public authorities or bodies referred to in Article 64(3). The Commission shall develop dedicated guidance to facilitate compliance with the obligations set out in paragraph 1. This guidance shall be issued no later than 12 months after this Regulation comes into force.

3. For high-risk AI systems referred to in point 5(b) of Annex III which are marketed or put into service by providers that are credit institutions regulated by Directive 2013/36/EU, and for high-risk AI systems which are safety components of devices, or are themselves devices, covered by Regulation (EU) 2017/745 and Regulation (EU) 2017/746, the notification of serious incidents or malfunctions shall be limited to those that constitute a breach of obligations under EU law intended to protect fundamental rights.

Chapter 3. ENFORCEMENT

Article 63. Market surveillance and control of artificial intelligence systems on the EU market

1. Regulation (EU) 2019/1020 applies to AI systems covered by this regulation. For the purpose of effective enforcement of this Regulation:

a) any reference to an economic operator under Regulation (EU) 2019/1020 shall be understood as including all operators identified in Title III, Chapter 3 of this Regulation

b) any reference to a product under Regulation (EU) 2019/1020 shall be understood as including all AI systems falling within the scope of this Regulation.

2. The national supervisory authority shall regularly report to the Commission on the results of relevant market surveillance activities. The national supervisory authority shall promptly report to the Commission and the relevant national competition authorities any information identified in the course of market surveillance activities that may be of potential interest for the application of EU competition law.

3. For high-risk AI systems related to products to which the legal acts listed in Annex II, Section A apply, the market surveillance authority for the purposes of this Regulation is the authority responsible for market surveillance activities designated pursuant to those legal acts.

4. For AI systems that are marketed, deployed or used by financial institutions regulated by EU financial services legislation, the market surveillance authority for the purposes of this Regulation is the relevant authority responsible for the financial supervision of these institutions according to that legislation.

5. For AI systems listed in point 1(a), in so far as the systems are used for law enforcement purposes, and in points 6 and 7 of Annex III, Member States shall designate as market surveillance authorities for the purposes of this Regulation either the competent data protection supervisory authorities under Directive (EU) 2016/680 or Regulation (EU) 2016/679, or the national competent authorities supervising the activities of the law enforcement, immigration or asylum authorities deploying or using these systems.

6. If EU institutions, agencies and bodies fall within the scope of this Regulation, the European Data Protection Supervisor shall act as their market surveillance authority.

7. Member States shall facilitate coordination between market surveillance authorities designated under this Regulation and other relevant national authorities or bodies overseeing the application of EU harmonization legislation listed in Annex II or other EU legislation that may be relevant for the high-risk AI systems referred to in Annex III.

Article 64. Access to data and documentation

1. In the context of their activities, the market surveillance authorities shall be granted full access to the training, validation and test datasets used by the provider, including through application programming interfaces ("API") or other appropriate technical means and tools that enable remote access.

2. If necessary to assess the compliance of the high-risk AI system with the requirements of Title III, Chapter 2, and upon a reasoned request, the market surveillance authorities shall have access to the source code of the AI system.

3. National public authorities or bodies supervising or enforcing compliance with obligations under Union law for the protection of fundamental rights in relation to the use of high-risk AI systems as referred to in Annex III shall be empowered to request and obtain access to any records created or maintained pursuant to this Regulation when access to those records is necessary to carry out the powers under their mandate within the limits of their jurisdiction. The relevant public authority or public body shall notify the market surveillance authority of the Member State concerned of any such request.

4. No later than 3 months after the entry into force of this Regulation, each Member State shall identify the public authorities or bodies referred to in paragraph 3 and make a list publicly available on the website of the national supervisory authority. Member States shall notify the Commission and all other Member States of the list and shall keep the list up to date.

5. If the documentation referred to in paragraph 3 is insufficient to establish whether a breach of obligations under EU law intended to protect fundamental rights has occurred, the public authority or body referred to in paragraph 3 may make a reasoned request to the market surveillance authority to organize testing of the high-risk AI system through technical means. The market surveillance authority shall organize the testing with the close involvement of the requesting public authority or body within a reasonable time following the request.

6. Any information and documentation obtained by the national public authorities or bodies referred to in paragraph 3, pursuant to the provisions of this Article, shall be treated in accordance with the confidentiality obligations of Article 70.

Article 65. Procedure for handling AI systems that pose a risk at national level

1. AI systems that pose a risk shall be understood as a product that poses a risk as defined in Article 3, point 19, of Regulation (EU) 2019/1020, as regards risks to health or safety or to the protection of the fundamental rights of persons.

2. If the market surveillance authority in a Member State has sufficient reason to believe that an AI system poses a risk as referred to in paragraph 1, it shall carry out an evaluation of the AI system concerned with regard to its compliance with all the requirements and obligations laid down in this Regulation. Where risks to the protection of fundamental rights are present, the market surveillance authority shall also inform the relevant national public authorities or bodies referred to in Article 64(3). The relevant operators shall cooperate as necessary with the market surveillance authorities and the other national public authorities or bodies referred to in Article 64(3).

If, in the course of this evaluation, the market surveillance authority finds that the AI system does not comply with the requirements and obligations of this Regulation, it shall without delay require the relevant operator to take all appropriate corrective measures to bring the AI system into compliance, to withdraw the AI system from the market, or to recall it within a reasonable period, commensurate with the nature of the risk, as it may prescribe.

The market surveillance authority shall inform the relevant authorized body accordingly. Article 18 of Regulation (EU) 2019/1020 applies to the measures referred to in the second paragraph.

3. If the market surveillance authority finds that non-compliance is not limited to its national territory, it shall inform the Commission and the other Member States of the results of the evaluation and of the measures it has ordered the operator to take.

4. The operator shall ensure that all appropriate corrective measures are taken in respect of all affected AI systems that it has made available on the market throughout the Union.

5. If the operator of an AI system does not take appropriate corrective measures within the period mentioned in paragraph 2, the market surveillance authority shall take all appropriate interim measures to prohibit or restrict the AI system from being made available on its national market, to withdraw the product from that market or to recall it. That authority shall immediately inform the Commission and the other Member States of these measures.

6. The information referred to in paragraph 5 shall include all available details, in particular the data necessary to identify the non-compliant AI system, the origin of the AI system, the nature of the alleged non-compliance and of the risk involved, the nature and duration of the national measures taken and the arguments put forward by the relevant operator. In particular, market surveillance authorities must indicate whether the non-compliance is due to one or more of the following:

a) the failure of the AI system to meet the requirements of Title III, Chapter 2;

b) absence of the harmonized standards or common specifications referred to in Articles 40 and 41 giving a presumption of conformity.

7. Market surveillance authorities of Member States other than the market surveillance authority of the Member State initiating the procedure shall immediately inform the Commission and the other Member States of any measures taken and of any additional information at their disposal regarding the non-compliance of the AI system concerned and, in case of disagreement with the notified national measure, of their objections.

8. If neither a Member State nor the Commission has, within three months of receiving the information referred to in paragraph 5, objected to a provisional measure taken by a Member State, this measure is considered justified. This is without prejudice to the procedural rights of the operator concerned in accordance with Article 18 of Regulation (EU) 2019/1020.

9. The market surveillance authorities of all Member States shall ensure that appropriate restrictive measures are taken immediately in respect of the product concerned, such as withdrawal of the product from their market.

Article 66. Union protection procedure

1. If, within three months of receiving the notification referred to in Article 65(5), a Member State raises objections against a measure taken by another Member State, or if the Commission considers the measure to be contrary to EU law, the Commission shall immediately initiate consultations with the relevant Member State and operator(s) and evaluate the national measure. On the basis of the results of this evaluation, the Commission decides whether the national measure is justified or not, no later than 9 months after the notification referred to in Article 65(5), and informs the Member State concerned of such a decision.

2. If the national measure is considered justified, all Member States shall take the necessary measures to ensure that the non-compliant AI system is withdrawn from their market and shall notify the Commission thereof. If the national measure is considered unjustified, the Member State concerned withdraws the measure.

3. If the national measure is considered justified and the non-compliance of the AI system is attributed to deficiencies in the harmonized standards or common specifications referred to in Articles 40 and 41 of this Regulation, the Commission shall apply the procedure provided for in Article 11 of Regulation (EU) No. 1025/2012.

Article 67. Compliant AI systems that pose a risk

1. If the market surveillance authority of a Member State, after carrying out an evaluation in accordance with Article 65, finds that, although an AI system complies with this Regulation, it poses a risk to the health or safety of persons, to compliance with obligations under EU law or national law aimed at protecting fundamental rights or to other aspects of the protection of the public interest, it shall require the relevant operator to take all appropriate measures to ensure that the AI system in question, when placed on the market or put into use, no longer presents this risk, to withdraw the AI system from the market or to recall it within a reasonable period commensurate with the nature of the risk, as it may prescribe.

2. The provider or other relevant operators shall ensure that corrective action is taken in respect of all the affected AI systems that they have made available on the market throughout the Union within the time limit prescribed by the market surveillance authority of the Member State referred to in paragraph 1.

3. The Member State immediately informs the Commission and the other Member States thereof. This information shall include all available details, in particular the data necessary to identify the AI system in question, the origin and supply chain of the AI system, the nature of the risk involved and the nature and duration of the national measures taken.

4. The Commission immediately initiates consultations with the Member States and the relevant operator and assesses the national measures taken. Based on the results of this evaluation, the Commission decides whether the measure is justified or not and, if necessary, proposes appropriate measures.

5. The Commission addresses its decision to the Member States.

Article 68. Formal non-compliance

1. If the market surveillance authority of a Member State makes one of the following conclusions, it shall require the relevant provider to put an end to the non-compliance in question:

a) the conformity marking has been affixed in breach of Article 49

b) the conformity marking has not been affixed;

c) the EU declaration of conformity has not been drawn up;

d) the EU declaration of conformity is not drawn up correctly;

e) the identification number of the notified body involved in the conformity assessment procedure, where applicable, has not been affixed.

2. If the non-compliance referred to in paragraph 1 continues, the Member State concerned shall take all appropriate measures to restrict or prohibit the high-risk AI system from being made available on the market or ensure that it is recalled or withdrawn from the market.

SECTION IX. CODE OF CONDUCT

Article 69. Codes of conduct

1. The Commission and the Member States shall encourage and facilitate the development of codes of conduct to promote the voluntary application to AI systems, other than high-risk AI systems, of the requirements of Title III, Chapter 2 on the basis of technical specifications and solutions that are appropriate means of ensuring compliance with such requirements in light of the intended purpose of the systems.

2. The Commission and the Board must encourage and facilitate the preparation of codes of conduct that will promote the voluntary application to AI systems of requirements related to, for example, environmental sustainability, accessibility for people with disabilities, stakeholder participation in the design and development of the AI systems and the diversity of development teams, on the basis of clear objectives and key performance indicators to measure the achievement of these objectives.

3. Codes of conduct may be prepared by individual providers of AI systems or by organizations representing them or by both, including with the involvement of users and any interested stakeholders and their representative organizations. Codes of conduct may cover one or more AI systems, taking into account the similarity of the intended purpose of the relevant systems.

4. The Commission and the Board shall take into account the specific interests and needs of small providers and start-ups when encouraging and facilitating the development of codes of conduct.

SECTION X. CONFIDENTIALITY AND PENALTIES

Article 70. Confidentiality

1. National competent authorities and authorized bodies involved in the application of this Regulation shall respect the confidentiality of information and data obtained during the performance of their tasks and activities in such a way as to protect, in particular:

a) intellectual property rights and confidential business information or trade secrets of a natural or legal person, including source code, except in the cases referred to in Article 5 of Directive 2016/943 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure;

b) the effective implementation of this Regulation, in particular for the purposes of inspections, investigations or audits;

c) public and national security interests;

d) the integrity of criminal or administrative proceedings.

2. Without prejudice to paragraph 1, information exchanged on a confidential basis between national competent authorities and between national competent authorities and the Commission may not be disclosed without prior consultation of the originating national competent authority and the user when high-risk AI systems referred to in points 1, 6 and 7 of Annex III are used by law enforcement, immigration or asylum authorities and such disclosure would endanger public and national security interests.

Where the law enforcement, immigration or asylum authorities are providers of high-risk AI systems as referred to in points 1, 6 and 7 of Annex III, the technical documentation referred to in Annex IV shall remain within the premises of those authorities. Those authorities shall ensure that the market surveillance authorities referred to in Article 63(5) and (6) can, upon request, immediately access the documentation or obtain a copy thereof. Only market surveillance authority personnel holding the appropriate level of security clearance may access this documentation or any copy thereof.

3. Paragraphs 1 and 2 do not affect the rights and obligations of the Commission, the Member States and the authorized bodies with regard to the exchange of information and the dissemination of warnings, or the obligations of the parties concerned to provide information under the criminal law of the Member States.

4. The Commission and the Member States may, if necessary, exchange confidential information with regulatory authorities in third countries with which they have concluded bilateral or multilateral confidentiality agreements guaranteeing an appropriate level of confidentiality.

Article 71. Sanctions

1. In accordance with the terms and conditions laid down in this Regulation, Member States shall lay down the rules on sanctions, including administrative fines, for infringements of this Regulation and shall take all necessary measures to ensure that they are properly and effectively implemented. The sanctions provided for must be effective, proportionate and dissuasive. They shall take into particular account the interests of small providers and start-ups and their economic viability.

2. Member States shall notify the Commission of these rules and measures and shall notify it without delay of any subsequent amendment affecting them.

3. The following infringements are subject to administrative fines of up to EUR 30 000 000 or, if the offender is a company, up to 6% of its total annual worldwide turnover for the previous financial year, whichever is higher:

a) failure to comply with the prohibition of the artificial intelligence practices referred to in Article 5

b) non-compliance of the AI system with the requirements of Article 10.

4. Non-compliance of the AI system with any requirements or obligations under this Regulation, other than those set out in Articles 5 and 10, shall be subject to administrative fines of up to EUR 20 000 000 or, if the offender is a company, up to 4% of its total worldwide annual turnover for the previous financial year, whichever is higher.

5. Providing incorrect, incomplete or misleading information to authorized bodies and national competent authorities in response to a request is subject to administrative fines of up to EUR 10 000 000 or, if the offender is a company, up to 2% of its total worldwide annual turnover for the previous financial year, whichever is higher.
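
The caps in paragraphs 3 to 5 share one arithmetic pattern: the applicable maximum is the higher of a fixed amount and a percentage of total worldwide annual turnover. A minimal Python sketch, using a hypothetical turnover figure:

```python
def fine_cap(fixed_cap_eur: int, turnover_pct: int, turnover_eur: int) -> int:
    # "Whichever is higher": the fixed cap or the turnover-based cap.
    return max(fixed_cap_eur, turnover_eur * turnover_pct // 100)

turnover = 1_000_000_000  # hypothetical total worldwide annual turnover in EUR

print(fine_cap(30_000_000, 6, turnover))  # paragraph 3 cap: 60000000
print(fine_cap(20_000_000, 4, turnover))  # paragraph 4 cap: 40000000
print(fine_cap(10_000_000, 2, turnover))  # paragraph 5 cap: 20000000

# For a smaller undertaking the fixed amount dominates:
print(fine_cap(30_000_000, 6, 100_000_000))  # paragraph 3 cap: 30000000
```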

6. When determining the amount of the administrative fine in each case, all relevant circumstances of the specific situation must be taken into account, and due consideration must be given to the following:

a) the nature, seriousness and duration of the violation and its consequences

b) whether administrative fines have already been imposed by other market surveillance authorities on the same operator for the same infringement.

c) the size and market share of the operator who committed the infringement

7. Each Member State lays down rules on whether and to what extent administrative fines can be imposed on public authorities and bodies established in the Member State in question.

8. Depending on the legal system of the Member States, the rules on administrative fines may be applied in such a way that the fines are imposed by competent national courts or other bodies, as applicable in those Member States. The application of such rules in these Member States shall have an equivalent effect.

Article 72. Administrative fines on EU institutions, agencies and bodies

1. The European Data Protection Supervisor may impose administrative fines on EU institutions, agencies and bodies that fall within the scope of this Regulation. When deciding whether to impose an administrative fine and when deciding on the amount of the administrative fine in each individual case, all relevant circumstances of the specific situation shall be taken into account, and due consideration shall be given to the following:

a) the nature, seriousness and duration of the violation and its consequences

b) the cooperation with the European Data Protection Supervisor with a view to remedying the breach and mitigating its possible negative effects, including compliance with any measures previously ordered by the European Data Protection Supervisor against the Union institution, agency or body concerned with regard to the same subject matter;

(c) any similar previous infringement by the EU institution, agency or body

2. The following infringements are subject to administrative fines of up to EUR 500 000:

a) failure to comply with the prohibition of the artificial intelligence practices referred to in Article 5

b) non-compliance of the AI system with the requirements of Article 10.

3. Non-compliance of the AI system with requirements or obligations under this Regulation, other than those set out in Articles 5 and 10, shall be subject to administrative fines of up to EUR 250 000.

4. Before taking decisions under this Article, the European Data Protection Supervisor shall give the EU institution, agency or body which is the subject of the proceedings the opportunity to be heard on the matter regarding the possible violation. The European Data Protection Supervisor bases its decisions only on elements and circumstances on which the parties concerned have been able to comment. Complainants, if any, shall be closely associated with the proceedings.

5. The rights of defense of the parties concerned must be fully respected in the proceedings. They have the right of access to the European Data Protection Supervisor's file, subject to the legitimate interest of individuals or companies in the protection of their personal data or business secrets.

6. Funds collected by the imposition of fines under this Article shall be the income of the general budget of the Union.

SECTION XI. DELEGATION OF POWER AND COMMITTEE PROCEDURE

Article 73. Exercise of the delegation

1. The power to adopt delegated acts is conferred on the Commission under the conditions laid down in this Article.

2. The delegation of powers referred to in Article 4, Article 7, paragraph 1, Article 11, subsection 3, Article 43, subsection 5 and 6, and Article 48, subsection 5, shall be conferred on the Commission for an indefinite period of time from [the entry into force of this Regulation].

3. The delegation of powers referred to in Article 4, Article 7, paragraph 1, Article 11, subsection 3, Article 43, subsection 5 and 6, and Article 48, subsection 5, may be revoked at any time by the European Parliament or the Council. A decision of revocation puts an end to the delegation of powers set out in that decision. It shall take effect on the day following its publication in the Official Journal of the European Union or at a later date specified therein. It does not affect the validity of delegated acts already in force.

4. As soon as the Commission has adopted a delegated act, it simultaneously notifies the European Parliament and the Council thereof.

5. Any delegated act adopted pursuant to Article 4, Article 7, paragraph 1, Article 11, subsection 3, Article 43, subsection 5 and 6, and Article 48, subsection 5, shall enter into force only if no objection has been expressed by either the European Parliament or the Council within a period of three months following notification of the act in question to the European Parliament and the Council, or if, before the expiry of that period, the European Parliament and the Council have both informed the Commission that they will not object. This period is extended by three months at the initiative of the European Parliament or the Council.

Article 74. Committee procedure

1. The Commission is assisted by a committee. That committee shall be a committee within the meaning of Regulation (EU) No. 182/2011.

2. If reference is made to this paragraph, Article 5 of Regulation (EU) No. 182/2011 shall apply.

SECTION XII. FINAL PROVISIONS

Article 75. Amendment of Regulation (EC) No. 300/2008

In Article 4, subsection 3, in Regulation (EC) No. 300/2008, the following section is added:

"When adopting detailed measures regarding technical specifications and procedures for the approval and use of safety equipment relating to artificial intelligence systems within the meaning of Regulation (EU) YYY/XX of the European Parliament and of the Council [on artificial intelligence] in Chapter 2, Section III of said regulation, must be taken into account."

Article 76. Amendment of Regulation (EU) No. 167/2013

In Article 17, subsection 5, in Regulation (EU) No. 167/2013, the following section is added:

"When adopting delegated acts pursuant to the first paragraph concerning artificial intelligence systems which are security components within the meaning of Regulation (EU) YYY/XX of the European Parliament and of the Council [on artificial intelligence], the requirements of Title III shall be taken into account to chapter 2 of said regulation."

Article 77. Amendment of Regulation (EU) No. 168/2013

In Article 22, subsection 5, in Regulation (EU) No. 168/2013, the following section is added:

"When adopting delegated acts under the first title concerning artificial intelligence systems which are security components within the meaning of Regulation (EU) YYY/XX on [artificial intelligence] of the European Parliament and of the Council, the requirements of title III shall, account is taken of chapter 2 of the said regulation."

Article 78. Amendment of Directive 2014/90/EU

In Article 8 of Directive 2014/90/EU, the following paragraph is added:

"4. For artificial intelligence systems which are security components within the meaning of Regulation (EU) YYY/XX of the European Parliament and of the Council [on artificial intelligence], when they carry out their activities in accordance with paragraph 1 and when they adopt technical specifications and test standards in 2 and 3, the Commission shall take into account the requirements of Title III, Chapter 2 of the said Regulation."

Article 79. Amendment of Directive (EU) 2016/797

In Article 5 of Directive (EU) 2016/797, the following paragraph is added:

"12. When adopting delegated acts pursuant to paragraph 1 and implementing acts pursuant to paragraph 11 concerning artificial intelligence systems which are security components pursuant to Regulation (EU) YYY/XX of the European Parliament and of the Council [on artificial intelligence ] in section III, chapter 2 of the said regulation, account must be taken of."

Article 80. Amendment of Regulation (EU) 2018/858

In Article 5 of Regulation (EU) 2018/858, the following paragraph is added:

"4. When adopting delegated acts pursuant to paragraph 3 relating to artificial intelligence systems which are security components within the meaning of Regulation (EU) YYY/XX of the European Parliament and of the Council [on artificial intelligence], the requirements of Title III shall, chapter 2 of said regulation is taken into account.

Article 81. Amendment of Regulation (EU) 2018/1139

Regulation (EU) 2018/1139 is amended as follows:

1) In Article 17, the following paragraph is added:

"3. Without prejudice to paragraph 2, in the adoption of implementing acts pursuant to paragraph 1 concerning artificial intelligence systems which are security components pursuant to Regulation (EU) YYY/XX of the European Parliament and of the Council [on artificial intelligence] in section III, chapter 2 of the said regulation, must be taken into account."

2) In Article 19, the following paragraph is added:

"4. When adopting delegated acts pursuant to paragraphs 1 and 2 concerning artificial intelligence systems which are security components within the meaning of Regulation (EU) YYY/XX [on artificial intelligence], the requirements of Title III, Chapter 2, in said regulation be taken into account."

3) In Article 43, the following paragraph is added:

"4. When adopting implementing acts pursuant to paragraph 1 concerning artificial intelligence systems which are security components within the meaning of Regulation (EU) YYY/XX [on artificial intelligence], the requirements of Title III, Chapter 2 of said Regulation shall be taken into consideration. account."

4) In Article 47, the following paragraph is added:

"3. When adopting delegated acts pursuant to paragraphs 1 and 2 concerning artificial intelligence systems which are security components within the meaning of Regulation (EU) YYY/XX [on artificial intelligence], the requirements of Title III, Chapter 2, in said regulation be taken into account."

5) In Article 57, the following paragraph is added:

"When adopting the implementing acts concerning artificial intelligence systems which are security components within the meaning of Regulation (EU) YYY/XX [on artificial intelligence], the requirements of Title III, Chapter 2 of the said Regulation shall be taken into account."

6) In Article 58, the following paragraph is added:

"3. When adopting delegated acts pursuant to paragraphs 1 and 2 concerning artificial intelligence systems which are security components within the meaning of Regulation (EU) YYY/XX [on artificial intelligence], the requirements of Title III, Chapter 2 of said regulation be taken into account.".

Article 82. Amendment of Regulation (EU) 2019/2144

In Article 11 of Regulation (EU) 2019/2144, the following paragraph is added:

"3. When adopting implementing acts pursuant to paragraph 2 regarding artificial intelligence systems which are security components within the meaning of Regulation (EU) YYY/XX of the European Parliament and of the Council [on artificial intelligence], the requirements of Title III shall account is taken of Chapter 2 of the said Regulation."

Article 83. AI systems already marketed or put into use

1. This Regulation shall not apply to AI systems that are components of the major IT systems established by the acts listed in Annex IX and which have been placed on the market or put into use before [12 months after the date of application of this Regulation referred to in Article 85, paragraph 2], unless the replacement or amendment of these legal acts leads to a significant change in the design or intended purpose of the AI system(s) in question.

The requirements set out in this Regulation shall, where relevant, be taken into account in the evaluation of each major IT system established by the legal acts listed in Annex IX, which shall be implemented as laid down in the respective legal acts.

2. This Regulation applies to high-risk AI systems, other than those referred to in paragraph 1, that have been placed on the market or put into use before [the date of application of this Regulation referred to in Article 85, paragraph 2], only if, from that date, these systems are subject to significant changes in their design or intended purpose.

Article 84. Evaluation and audit

1. The Commission assesses the need to change the list in Annex III once a year after this regulation enters into force.

2. No later than [three years after the date of application of this regulation as referred to in Article 85, paragraph 2] and thereafter every four years, the Commission shall present a report on the evaluation and revision of this Regulation to the European Parliament and the Council. The reports must be published.

3. The reports referred to in paragraph 2 must pay particular attention to the following:

(a) the status of the financial and human resources of the national competent authorities in order to effectively carry out the tasks assigned to them under this Regulation;

b) the state of sanctions, and in particular administrative fines as referred to in Article 71, paragraph 1, applied by Member States for infringements of the provisions of this Regulation.

4. Within [three years after the date of application of this Regulation as referred to in Article 85, paragraph 2] and every four years thereafter, the Commission evaluates the impact and effectiveness of codes of conduct to promote the application of the requirements of Title III, Chapter 2 and any other additional requirements for AI systems other than high-risk AI systems.

5. For the purposes of paragraphs 1 to 4, the Board, the Member States and the national competent authorities shall provide the Commission with information upon its request.

6. In carrying out the evaluations and reviews referred to in paragraphs 1 to 4, the Commission shall take into account the positions and findings of the Board, the European Parliament, the Council and other relevant bodies or sources.

7. The Commission shall, if necessary, submit appropriate proposals to amend this Regulation, in particular taking into account technological developments and in the light of progress in the information society.

Article 85. Entry into force and application

1. This regulation enters into force on the twentieth day after publication in the Official Journal of the European Union.

2. This regulation applies from [24 months after the regulation comes into force].

3. Notwithstanding paragraph 2:

(a) Title III, Chapter 4 and Title VI shall apply from [three months after the entry into force of this Regulation];

(b) Article 71 shall apply from [twelve months after the entry into force of this Regulation].
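
The timeline in this Article reduces to simple date arithmetic from the publication date. A minimal Python sketch, assuming a purely hypothetical publication date, since the bracketed periods are placeholders pending adoption:

```python
import calendar
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    # Add calendar months, clamping to the last day of the target month.
    years, month_index = divmod(d.month - 1 + months, 12)
    year, month = d.year + years, month_index + 1
    return date(year, month, min(d.day, calendar.monthrange(year, month)[1]))

publication = date(2021, 7, 1)  # hypothetical Official Journal publication date
entry_into_force = publication + timedelta(days=20)  # paragraph 1
application = add_months(entry_into_force, 24)       # paragraph 2
chapter4_title6 = add_months(entry_into_force, 3)    # paragraph 3(a)
penalties = add_months(entry_into_force, 12)         # paragraph 3(b): Article 71

print(entry_into_force, chapter4_title6, penalties, application)
```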

This Regulation shall be binding in its entirety and directly applicable in each Member State.

Done in Brussels,

For the European Parliament For the Council

The President The President

LEGISLATIVE FINANCIAL STATEMENT

APPENDIX

to the proposal for a regulation of the European Parliament and of the Council

ESTABLISHING HARMONIZED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLATION

APPENDIX I. ARTIFICIAL INTELLIGENCE TECHNIQUES AND APPROACHES as referred to in Article 3, point 1

(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide range of methods, including deep learning

(b) Logic and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems

(c) Statistical approaches, Bayesian estimation, search and optimization methods.

APPENDIX II. LIST OF UNION HARMONIZATION LEGISLATION

Section A – List of EU harmonization legislation based on the new legislative framework

1. Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery and amending Directive 95/16/EC (OJ L 157 of 9.6.2006, p. 24) [as repealed by the Machinery Regulation];

2. Directive 2009/48/EC of the European Parliament and of the Council of 18 June 2009 on the safety of toys (OJ L 170 of 30.6.2009, p. 1);

3. Directive 2013/53/EU of the European Parliament and of the Council of 20 November 2013 on recreational craft and personal watercraft and on the repeal of Directive 94/25/EC (OJ L 354 of 28.12.2013, p. 90);

4. Directive 2014/33/EU of the European Parliament and of the Council of 26 February 2014 on the harmonization of the laws of the Member States relating to lifts and safety components for lifts (OJ L 96 of 29.3.2014, p. 251);

5. Directive 2014/34/EU of the European Parliament and of the Council of 26 February 2014 on the harmonization of Member States' legislation on equipment and protective systems intended for use in potentially explosive atmospheres (OJ L 96 of 29.3.2014, p. 309);

6. Directive 2014/53/EU of the European Parliament and of the Council of 16 April 2014 on the harmonization of Member States' legislation on the making available on the market of radio equipment and on the repeal of Directive 1999/5/EC (OJ L 153 of 22.5.2014, p. 62);

7. Directive 2014/68/EU of the European Parliament and of the Council of 15 May 2014 on the harmonization of Member States' legislation on the making available on the market of pressure equipment (OJ L 189 of 27.6.2014, p. 164);

8. Regulation (EU) 2016/424 of the European Parliament and of the Council of 9 March 2016 on cable car installations and on the repeal of Directive 2000/9/EC (OJ L 81 of 31.3.2016, p. 1);

9. Regulation (EU) 2016/425 of the European Parliament and of the Council of 9 March 2016 on personal protective equipment and repealing Council Directive 89/686/EEC (OJ L 81 of 31.3.2016, p. 51);

10. Regulation (EU) 2016/426 of the European Parliament and of the Council of 9 March 2016 on appliances burning gaseous fuel and repealing Directive 2009/142/EC (OJ L 81 of 31.3.2016, p. 99);

11. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No. 178/2002 and Regulation (EC) No. 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (OJ L 117 of 5.5.2017, p. 1);

12. Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on medical devices for in vitro diagnostics and repealing Directive 98/79/EC and Commission Decision 2010/227/EU (OJ L 117 of 5/5/2017, p. 176).

Section B. List of other EU harmonization legislation

1. Regulation (EC) No. 300/2008 of the European Parliament and the Council of 11 March 2008 on common rules in civil aviation security and repealing Regulation (EC) No. 2320/2002 (OJ L 97 of 9.4.2008, p. 72).

2. Regulation (EU) No. 168/2013 of the European Parliament and of the Council of 15 January 2013 on the approval and market surveillance of two- or three-wheeled vehicles and quadricycles (OJ L 60 of 2.3.2013, p. 52);

3. Regulation (EU) No. 167/2013 of the European Parliament and of the Council of 5 February 2013 on the approval and market surveillance of agricultural and forestry vehicles (OJ L 60 of 2.3.2013, p. 1);

4. Directive 2014/90/EU of the European Parliament and of the Council of 23 July 2014 on equipment for ships and on the repeal of Council Directive 96/98/EC (OJ L 257 of 28.8.2014, p. 146);

5. Directive (EU) 2016/797 of the European Parliament and of the Council of 11 May 2016 on the interoperability of the rail system in the European Union (OJ L 138 of 26.5.2016, p. 44).

6. Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018 on the approval and market surveillance of motor vehicles and their trailers and of systems, components and separate technical units intended for such vehicles, amending Regulations (EC) No. 715/2007 and (EC) No. 595/2009 and repealing Directive 2007/46/EC (OJ L 151 of 14.6.2018, p. 1);

7. Regulation (EU) 2019/2144 of the European Parliament and of the Council of 27 November 2019 on type-approval requirements for motor vehicles and their trailers, as well as systems, components and separate technical units intended for such vehicles, as regards their general safety and the protection of passengers in vehicles and vulnerable road users, amending Regulation (EU) 2018/858 of the European Parliament and of the Council and repealing Regulations (EC) No. 78/2009, (EC) No. 79/2009 and (EC) No. 661/2009 of the European Parliament and the Council and Commission Regulations (EC) No. 631/2009, (EU) No. 406/2010, (EU) No. 672/2010, (EU) No. 1003/2010, (EU) No. 109/2011, (EU) No. 458/2011, (EU) No. 65/2012, (EU) No. 130/2012, (EU) No. 347/2012, (EU) No. 351/2012, (EU) No. 1230/2012 and (EU) 2015/166 (OJ L 325 of 16.12.2019, p. 1);

8. Regulation (EU) 2018/1139 of the European Parliament and of the Council of 4 July 2018 on common rules in civil aviation and on the establishment of an EU aviation safety agency and on amending Regulations (EC) No. 2111/2005, (EC) No. 1008/2008, (EU) No. 996/2010, (EU) No. 376/2014 and European Parliament and Council Directives 2014/30/EU and 2014/53/EU and repealing Regulations (EC) No. 552/2004 and (EC) No. 216/2008 of the European Parliament and the Council and Council Regulation (EEC) No. 3922/91 (OJ L 212 of 22.8.2018, p. 1), insofar as the design, production and marketing of aircraft as referred to in Article 2, subsection 1, letters a) and b) are concerned, where it concerns unmanned aerial vehicles and their engines, propellers, parts and equipment for their remote control.

APPENDIX III. HIGH RISK AI SYSTEMS REFERRED TO IN ARTICLE 6, PARAGRAPH 2

High-risk AI systems according to Article 6, paragraph 2, are the AI systems listed in any of the following areas:

1. Biometric identification and categorization of natural persons:

a) AI systems intended to be used for "real-time" and "post" remote biometric identification of natural persons

2. Management and operation of critical infrastructure:

a) AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heat and electricity.

3. Education and vocational training:

a) AI systems intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions

b) AI systems intended to be used to assess students in educational and vocational training institutions and to assess participants in tests commonly required for admission to educational institutions.

4. Employment, worker management and access to self-employment:

a) AI systems intended to be used for the recruitment or selection of natural persons, in particular for advertising vacancies, screening or filtering applications and evaluating candidates during interviews or tests

b) AI intended to be used to make decisions about promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating the performance and behavior of persons in such relationships.

5. Access to and enjoyment of essential private and public services and benefits:

a) AI systems intended to be used by public authorities or on behalf of public authorities to assess the eligibility of natural persons for public assistance benefits and services, as well as to grant, reduce, withdraw or reclaim such benefits and services

b) AI systems intended to be used to assess the creditworthiness of natural persons or to establish their credit score, with the exception of AI systems deployed by small providers for their own use

c) AI systems intended to be used to dispatch or prioritize the dispatch of first responder services, including firefighters and medical assistance.

6. Law enforcement:

a) AI systems intended to be used by law enforcement authorities to carry out individual risk assessments of natural persons in order to assess a natural person's risk of offending or reoffending or the risk for potential victims of criminal offences;

b) AI systems intended to be used by law enforcement authorities as polygraphs and similar tools or to detect the emotional state of a natural person;

c) AI systems intended to be used by law enforcement authorities to detect deep fakes as referred to in Article 52(3);

d) AI systems intended to be used by law enforcement agencies to assess the reliability of evidence in connection with the investigation or prosecution of criminal offences;

e) AI systems intended to be used by law enforcement authorities to predict the occurrence or repetition of an actual or potential criminal act based on the profiling of natural persons as referred to in Article 3, paragraph 4, of Directive (EU) 2016/680 or assessment of personality traits and characteristics or previous criminal behavior of natural persons or groups;

f) AI systems intended to be used by law enforcement authorities for the profiling of natural persons as referred to in Article 3, paragraph 4, of Directive (EU) 2016/680 in connection with the detection, investigation or prosecution of criminal offenses

(g) AI systems intended to be used for crime analysis relating to natural persons that enable law enforcement agencies to search complex related and unrelated large data sets available in different data sources or in different data formats to identify unknown patterns or discover hidden relationships in the data.

7. Management of migration, asylum and border control:

a) AI systems intended to be used by competent public authorities as polygraphs and similar tools or to detect the emotional state of a natural person

b) AI systems intended to be used by competent public authorities to assess a risk, including a security risk, a risk of irregular immigration or a health risk, posed by a natural person who intends to enter or has entered the territory of a Member State;

c) AI systems intended to be used by competent public authorities to verify the authenticity of travel documents and supporting documentation of natural persons and to detect non-authentic documents by checking their security features

d) AI systems intended to assist competent public authorities in processing applications for asylum, visas and residence permits and associated complaints regarding the eligibility of the natural persons applying for a status.

8. Administration of justice and democratic processes:

a) AI systems intended to assist a judicial authority in examining and interpreting the facts and the law and in applying the law to a specific set of facts.

APPENDIX IV. TECHNICAL DOCUMENTATION as referred to in Article 11, paragraph 1

The technical documentation referred to in Article 11, paragraph 1, must contain at least the following information, depending on what is relevant for the relevant AI system:

1. A general description of the AI system, including:

(a) its intended purpose, the person(s) developing the system, the date and version of the system

(b) how the AI system interacts or can be used to interact with hardware or software that is not part of the AI system itself, where applicable;

(c) the versions of relevant software or firmware and any requirements related to version updating;

d) the description of all forms in which the AI system is brought into circulation or put into use

e) the description of hardware on which the AI system is intended to run

(f) where the AI system is a component of products, photographs or illustrations showing external features, labeling and internal layout of those products

(g) user manuals and, where applicable, installation instructions;

2. A detailed description of the elements of the AI system and of the process of its development, including:

a) the methods and steps carried out for the development of the AI system, including, where applicable, the use of pre-trained systems or tools provided by third parties and how these have been used, integrated or modified by the provider

(b) the design specifications of the system, namely the general logic of the AI system and of the algorithms; the main design choices, including the rationale and assumptions made, including with respect to persons or groups of persons on whom the system is intended to be used; the main classification choices; what the system is designed to optimize for and the relevance of the various parameters; the decisions on any possible trade-offs made regarding the technical solutions adopted to meet the requirements of Title III, Chapter 2;

(c) the description of the system architecture, explaining how software components build on or feed into each other and integrate into the overall processing; the computational resources used to develop, train, test and validate the AI system;

d) where relevant, the data requirements in the form of data sheets describing the training methods and techniques and the training datasets used, including information on the provenance of these datasets, their scope and main characteristics; how the data were obtained and selected; labeling procedures (e.g. for supervised learning); data cleaning methods (e.g. detection of outliers);

(e) an assessment of the human oversight measures necessary in accordance with Article 14, including an assessment of the technical measures necessary to facilitate users' interpretation of the output of AI systems, in accordance with Article 13(1);

(f) where applicable, a detailed description of predetermined changes to the AI system and its performance, together with all relevant information regarding the technical solutions adopted to ensure continuous compliance of the AI system with the relevant requirements of Title III, Chapter 2;

g) the validation and testing procedures used, including information on the validation and test data used and their main characteristics; the metrics used to measure accuracy, robustness, cybersecurity and compliance with other relevant requirements set out in Title III, Chapter 2, as well as potentially discriminatory effects; test logs and all test reports dated and signed by the responsible persons, including with regard to predetermined changes as mentioned under letter f).

3. Detailed information about the monitoring, operation and control of the AI system, in particular with regard to: its capabilities and limitations in performance, including degrees of accuracy for specific individuals or groups of individuals on which the system is intended to be used and the overall expected level of accuracy relative to its intended purpose; the foreseeable unintended results and sources of risks to health and safety, fundamental rights and discrimination in light of the intended purpose of the AI system; the human oversight measures necessary in accordance with Article 14, including the technical measures put in place to facilitate users' interpretation of the output of AI systems; specifications on input data, as applicable;

4. A detailed description of the risk management system in accordance with Article 9;

5. A description of any changes to the system throughout its life cycle;

6. A list of the harmonized standards used in whole or in part, whose references are published in the Official Journal of the European Union; where such harmonized standards have not been applied, a detailed description of the solutions chosen to meet the requirements of Title III, Chapter 2, including a list of other relevant standards and technical specifications applied.

7. A copy of the EU Declaration of Conformity;

8. A detailed description of the system in place to evaluate the performance of the AI system in the post-market phase in accordance with Article 61, including the post-market monitoring plan referred to in Article 61(3).

APPENDIX V. EU DECLARATION OF CONFORMITY

The EU declaration of conformity referred to in Article 48 must contain all of the following information:

1. AI system name and type and any additional unambiguous reference enabling the identification and traceability of the AI system;

2. Name and address of the provider or, where applicable, its authorized representative;

3. A statement that the EU declaration of conformity has been issued under the sole responsibility of the provider;

4. A statement that the AI system in question complies with this Regulation and, if applicable, with any other relevant EU legislation that allows for the issue of an EU declaration of conformity;

5. References to relevant harmonized standards or any other common specification in relation to which conformity is declared;

6. If applicable, name and identification number of the authorized body, a description of the conformity assessment procedure carried out and identification of the certificate issued.

7. Place and date of issue of the declaration, name and function of the person who signed it, as well as an indication for whom and on whose behalf that person signed, and a signature.

APPENDIX VI. CONFORMITY ASSESSMENT PROCEDURE BASED ON INTERNAL CONTROL

1. The conformity assessment procedure based on internal control is the conformity assessment procedure based on points 2 to 4.

2. The provider verifies that the established quality management system complies with the requirements of Article 17.

3. The provider examines the information in the technical documentation to assess the AI system's compliance with the relevant essential requirements in Title III, Chapter 2.

4. The provider also verifies that the design and development process of the AI system and its post-market surveillance as referred to in Article 61 are in accordance with the technical documentation.
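Because Annex VI involves no third party, the whole procedure reduces to three self-checks by the provider. A minimal sketch of that checklist (the boolean inputs stand in for the provider's own, hypothetical verification processes):

    # Sketch of the Annex VI internal-control procedure as a provider-side
    # checklist; each flag represents the outcome of one verification step.
    def internal_control_assessment(qms_ok: bool,
                                    tech_doc_ok: bool,
                                    process_matches_doc: bool) -> bool:
        """Return True only if all three Annex VI checks (points 2-4) pass."""
        checks = {
            "QMS complies with Article 17 (point 2)": qms_ok,
            "Technical documentation shows Title III, Chapter 2 compliance (point 3)": tech_doc_ok,
            "Design/development and post-market surveillance match the documentation (point 4)": process_matches_doc,
        }
        for description, passed in checks.items():
            print(("PASS" if passed else "FAIL") + ": " + description)
        return all(checks.values())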

APPENDIX VII. CONFORMITY BASED ON ASSESSMENT OF QUALITY MANAGEMENT SYSTEM AND ASSESSMENT OF TECHNICAL DOCUMENTATION

1. Introduction

Conformity based on assessment of quality management system and assessment of the technical documentation is the conformity assessment procedure based on points 2 to 5.

2. Overview

The approved quality management system for the design, development and testing of AI systems in accordance with Article 17 shall be examined in accordance with point 3 and shall be subject to monitoring as specified in point 5. The technical documentation for the AI system shall be examined in accordance with point 4.

3. Quality management system

3.1. The provider's application must include:

a) the name and address of the provider and, if the application is submitted by the authorized representative, also their name and address;

b) the list of AI systems covered by the same quality management system;

c) the technical documentation for each AI system covered by the same quality management system;

d) the documentation relating to the quality management system, which must cover all the aspects listed in Article 17;

e) a description of the procedures in place to ensure that the quality management system remains adequate and effective;

f) a written declaration that the same application has not been submitted to any other notified body.

3.2. The quality management system must be assessed by the notified body, which determines whether it meets the requirements of Article 17.

The decision is communicated to the provider or its authorized representative.

The notice must contain the conclusions of the assessment of the quality management system and the reasoned assessment decision.

3.3. The quality management system as approved must continue to be implemented and maintained by the provider so that it remains adequate and effective.

3.4. Any intended change to the approved quality management system or the list of AI systems covered by it must be brought to the attention of the notified body by the provider.

The proposed changes must be examined by the notified body, which determines whether the changed quality management system continues to meet the requirements referred to in point 3.2, or whether a reassessment is necessary.

The notified body notifies the provider of its decision. The notification must contain the conclusions of the examination of the changes and the reasoned assessment decision.

4. Control of the technical documentation

4.1. In addition to the application referred to in point 3, the provider must submit an application to a notified body of its choice for the assessment of the technical documentation relating to the AI system that the provider intends to market or put into use, and which is covered by the quality management system referred to in point 3.

4.2. The application must contain:

a) name and address of the provider;

b) a written declaration that the same application has not been submitted to any other notified body;

c) the technical documentation referred to in Annex IV.

4.3. The technical documentation must be examined by the notified body. For this purpose, the notified body must have full access to the training and test datasets used by the provider, including through application programming interfaces (API) or other appropriate means and tools that enable remote access.

4.4. When examining the technical documentation, the notified body may require the provider to provide additional evidence or carry out additional tests to enable a proper assessment of the AI system's compliance with the requirements of Title III, Chapter 2. Whenever the notified body is not satisfied with the tests carried out by the provider, the notified body shall directly carry out appropriate tests itself.

4.5. If necessary to assess the compliance of the high-risk AI system with the requirements of Title III, Chapter 2, and upon a reasoned request, the notified body shall also have access to the source code of the AI system.

4.6. The decision is communicated to the provider or its authorized representative. The notice must contain the conclusions of the assessment of the technical documentation and the reasoned assessment decision.

If the AI system complies with the requirements of Title III, Chapter 2, the notified body shall issue an EU technical documentation assessment certificate. The certificate must indicate the name and address of the provider, the conclusions of the examination, the conditions (if any) for its validity and the data necessary for the identification of the AI system.

The certificate and its appendices shall contain all relevant information to enable the conformity of the AI system to be assessed and to allow for control of the AI system while in use, where relevant.

If the AI system does not comply with the requirements of Title III, Chapter 2, the notified body shall refuse to issue an EU technical documentation assessment certificate and shall inform the applicant accordingly, giving detailed reasons for the refusal.

If the AI system does not meet the requirement regarding the data used to train it, retraining of the AI system will be necessary before applying for a new conformity assessment. In this case, the reasoned assessment decision of the notified body refusing to issue the EU technical documentation assessment certificate shall include specific considerations on the quality of the data used to train the AI system, in particular on the reasons for non-compliance.

4.7. Any change to the AI system that could affect the AI system's compliance with the requirements or its intended purpose must be approved by the notified body that issued the EU technical documentation assessment certificate. The provider must inform that notified body of its intention to introduce any such changes, or if it otherwise becomes aware of the occurrence of such changes. The intended changes must be assessed by the notified body, which determines whether these changes require a new conformity assessment in accordance with Article 43(4) or whether they can be addressed by means of a supplement to the EU technical documentation assessment certificate. In the latter case, the notified body must assess the changes, inform the provider of its decision and, if the changes are approved, issue to the provider a supplement to the EU technical documentation assessment certificate.
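Points 4.6 and 4.7 describe, in effect, a small decision procedure for the notified body: issue the certificate, refuse it with reasons, or, for changes to an already certified system, choose between a new conformity assessment and a supplement to the existing certificate. A hedged sketch of that logic (the function and enum names are invented for illustration):

    from enum import Enum

    # Hypothetical mapping of the outcomes in points 4.6-4.7 to decisions.
    class Decision(Enum):
        ISSUE = "issue EU technical documentation assessment certificate"
        REFUSE = "refuse with detailed reasons (retraining first if data quality failed)"
        NEW_ASSESSMENT = "new conformity assessment under Article 43(4)"
        SUPPLEMENT = "supplement to the existing certificate"

    def notified_body_decision(complies: bool,
                               is_change_to_certified_system: bool = False,
                               change_needs_reassessment: bool = False) -> Decision:
        if is_change_to_certified_system:  # point 4.7
            return (Decision.NEW_ASSESSMENT if change_needs_reassessment
                    else Decision.SUPPLEMENT)
        return Decision.ISSUE if complies else Decision.REFUSE  # point 4.6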

5. Monitoring of the approved quality management system

5.1. The purpose of the monitoring carried out by the notified body referred to in point 3 is to ensure that the provider properly meets the terms and conditions of the approved quality management system.

5.2. For assessment purposes, the provider must give the notified body access to the premises where the design, development and testing of the AI systems takes place. The provider must also share all necessary information with the notified body.

5.3. The notified body must carry out periodic audits to ensure that the provider maintains and applies the quality management system, and must provide the provider with an audit report. In connection with these audits, the notified body may carry out additional tests of the AI systems for which an EU technical documentation assessment certificate has been issued.

APPENDIX VIII. INFORMATION TO BE SUBMITTED WHEN REGISTERING HIGH RISK AI SYSTEMS IN ACCORDANCE WITH ARTICLE 51

The following information shall be provided and then kept up to date in respect of high-risk AI systems to be registered in accordance with Article 51.

1. The name, address and contact details of the provider;

2. If the submission of information is carried out by another person on behalf of the provider, that person's name, address and contact details;

3. Name, address and contact details of the authorized representative, where applicable;

4. The trade name of the AI system and any additional unambiguous reference enabling the identification and traceability of the AI system;

5. Description of the intended purpose of the AI system;

6. Status of the AI system (on the market or in use; no longer on the market/operational; withdrawn);

7. Type, number and expiry date of the certificate issued by the notified body and the name or identification number of the notified body, if applicable;

8. A scanned copy of the certificate referred to in point 7, when applicable;

9. Member States where the AI system is or has been placed on the market, put into use or made available in the Union;

10. A copy of the EU declaration of conformity referred to in Article 48;

11. Electronic user manual; this information shall not be provided for high-risk AI systems in the areas of law enforcement and migration, asylum and border control, as referred to in Annex III, points 1, 6 and 7.

12. URL for additional information (optional).
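Since Article 51 requires these twelve items to be supplied and then kept up to date, the Annex VIII entry is essentially a database record. A minimal illustrative sketch (field and enum names are assumptions, not the official EU database schema):

    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional

    # Hypothetical schema for an Article 51 / Annex VIII registration entry.
    class SystemStatus(Enum):
        ON_THE_MARKET = "on the market"
        IN_USE = "in use"
        NO_LONGER_AVAILABLE = "no longer on the market/operational"
        WITHDRAWN = "withdrawn"

    @dataclass
    class HighRiskAIRegistration:
        provider_contact: str                         # point 1
        trade_name: str                               # point 4
        intended_purpose: str                         # point 5
        status: SystemStatus                          # point 6
        member_states: list                           # point 9
        declaration_of_conformity_ref: str            # point 10
        submitter_contact: Optional[str] = None       # point 2, if not the provider
        authorized_rep_contact: Optional[str] = None  # point 3, where applicable
        certificate_details: Optional[str] = None     # points 7-8, if applicable
        instructions_for_use: Optional[str] = None    # point 11, with exemptions
        info_url: Optional[str] = None                # point 12 (optional)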

APPENDIX IX. UNION LEGISLATION ON LARGE IT SYSTEMS IN THE AREA OF FREEDOM, SECURITY AND JUSTICE

1. Schengen Information System

a) Regulation (EU) 2018/1860 of the European Parliament and of the Council of 28 November 2018 on the use of the Schengen Information System for the return of illegally staying third-country nationals (OJ L 312 of 7.12.2018, p. 1).

b) Regulation (EU) 2018/1861 of the European Parliament and of the Council of 28 November 2018 on the establishment, operation and use of the Schengen Information System (SIS) within border control and on the amendment of the Convention on the Implementation of the Schengen Agreement and on the amendment and repeal of Regulation (EC) No. 1987/2006 (OJ L 312 of 7.12.2018, p. 14).

c) Regulation (EU) 2018/1862 of the European Parliament and of the Council of 28 November 2018 on the establishment, operation and use of the Schengen Information System (SIS) within police cooperation and judicial cooperation in criminal matters, amending and repealing Council Decision 2007/533/JHA and repealing Regulation (EC) No. 1986/2006 of the European Parliament and of the Council and Commission Decision 2010/261/EU (OJ L 312 of 7.12.2018, p. 56).

2. Visa Information System

(a) Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL amending Regulation (EC) No 767/2008, Regulation (EC) No 810/2009, Regulation (EU) 2017/2226, Regulation (EU) 2016/399, Regulation XX/2018 [the interoperability regulation] and Decision 2004/512/EC and repealing Council Decision 2008/633/JHA – COM(2018) 302 final. Will be updated when the regulation is adopted (April/May 2021) by the co-legislators.

3. Eurodac

a) Amended proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL establishing "Eurodac" for the comparison of biometric data for the effective application of Regulation (EU) XXX/XXX [Regulation on Asylum and Migration Management] and Regulation (EU) XXX/XXX [Resettlement Regulation], for the identification of an illegally staying third-country national or stateless person and on requests for comparison with Eurodac data from Member States' law enforcement authorities and Europol for law enforcement purposes and amending Regulations (EU) 2018/1240 and (EU) 2019/818 – COM(2020) 614 final.

4. Entry/Exit System

(a) Regulation (EU) 2017/2226 of the European Parliament and of the Council of 30 November 2017 establishing an Entry/Exit System (EES) for recording entry and exit data and refusal-of-entry data for third-country nationals crossing the external borders of the Member States, determining the conditions for access to the EES for law enforcement purposes and amending the Convention implementing the Schengen Agreement and Regulations (EC) No. 767/2008 and (EU) No. 1077/2011 (OJ L 327 of 9.12.2017, p. 20).

5. The European Travel Information and Authorization System

a) Regulation (EU) 2018/1240 of the European Parliament and of the Council of 12 September 2018 establishing a European Travel Information and Authorization System (ETIAS) and amending Regulations (EU) No. 1077/2011, (EU) No. 515/2014, (EU) 2016/399, (EU) 2016/1624 and (EU) 2017/2226 (OJ L 236 of 19.9.2018, p. 1).

b) Regulation (EU) 2018/1241 of the European Parliament and of the Council of 12 September 2018 amending Regulation (EU) 2016/794 with a view to establishing a European Travel Information and Authorization System (ETIAS) (OJ L 236 of 19.9.2018, p. 72).

6. The European Criminal Records Information System on third-country nationals and stateless persons

a) Regulation (EU) 2019/816 of the European Parliament and of the Council of 17 April 2019 establishing a centralized system for the identification of Member States holding criminal record information on third-country nationals and stateless persons (ECRIS-TCN) as a supplement to the European Criminal Records Information System and amending Regulation (EU) 2018/1726 (OJ L 135 of 22.5.2019, p. 1).

7. Interoperability

a) Regulation (EU) 2019/817 of the European Parliament and of the Council of 20 May 2019 on the establishment of a framework for interoperability between EU information systems in the field of borders and visa (OJ L 135 of 22.5.2019, p. 27).

b) Regulation (EU) 2019/818 of the European Parliament and of the Council of 20 May 2019 on the establishment of a framework for interoperability between EU information systems in the field of police cooperation and judicial cooperation, asylum and migration (OJ L 135 of 22.5.2019, p. 85).

FAQs

Has the EU AI Act passed? ›

Passed 84 to 7 (with 12 abstentions), the EU's Artificial Intelligence Act places a number of gradually stricter rules on AI providers based on the system's perceived level of risk.

What are the main points of the EU AI Act? ›

The Artificial Intelligence Act aims to “strengthen Europe's position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.”

What is the European Commission proposal for the AI Act? ›

The Commission also proposes to adopt different sets of rules, tailored to a risk-based approach with four levels of risk: Unacceptable risk AI. Harmful uses of AI that contravene EU values (such as social scoring by governments) will be banned because of the unacceptable risk they create; High-risk AI.

What are the unacceptable risks of the EU AI Act? ›

Unacceptable: Applications that use subliminal techniques, exploitative systems or social scoring systems used by public authorities are strictly prohibited. Also prohibited are any real-time remote biometric identification systems used by law enforcement in publicly accessible spaces.

Does EU law apply to US? ›

EU regulations apply automatically and simultaneously in all Member States as soon as they enter into force and do not require prior national transposition.

Is AI going to replace US? ›

AI won't entirely replace humans any time soon, industry experts and companies investing in the technology say. But jobs are transforming as AI becomes more accessible.

What is Article 5 of the EU AI Act? ›

The competent judicial or administrative authority shall only grant the authorisation where it is satisfied, based on objective evidence or clear indications presented to it, that the use of the 'real-time' remote biometric identification system at issue is necessary for and proportionate to achieving one of the ...

What are the 3 most important characteristics of an AI program? ›

Top Characteristics of Artificial Intelligence. Apart from the three core characteristics of AI (feature engineering, artificial neural networks and deep learning), other characteristics unveil the maximum efficiency of this technology.

Do we need AI regulation? ›

Why do we need rules on AI? The proposed AI regulation ensures that Europeans can trust what AI has to offer. While most AI systems pose limited to no risk and can contribute to solving many societal challenges, certain AI systems create risks that we must address to avoid undesirable outcomes.

What practices are prohibited by the EU AI Act? ›

AI systems with an unacceptable level of risk to people's safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people's vulnerabilities or are used for social scoring (classifying people based on their social behaviour, socio-economic status, ...

Who regulates AI in the US? ›

The Office of the Under Secretary of State for Arms Control and International Security focuses on the security implications of AI, including potential applications in weapon systems, its impact on U.S. military interoperability with its allies and partners, its impact on stability, and export controls related to AI.

What is Article 52 of the EU AI Act? ›

Article 52(1) of the draft EU AI Act says that “Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use.” ...

What are 3 negative impacts of AI on society? ›

These negative effects include unemployment, bias, terrorism, and risks to privacy, which the paper will discuss in detail.

What is the biggest threat of AI? ›

Risks of Artificial Intelligence
  • Automation-spurred job loss.
  • Privacy violations.
  • Deepfakes.
  • Algorithmic bias caused by bad data.
  • Socioeconomic inequality.
  • Market volatility.
  • Weapons automation.

What are the three limitations of AI today? ›

High Costs

The ability to create a machine that can simulate human intelligence is no small feat. It requires plenty of time and resources and can cost a great deal of money. AI also needs to operate on the latest hardware and software to stay updated and meet the latest requirements, making it quite costly.

Can US citizens live in the EU? ›

Yes, Americans can move to Europe. There are a variety of options available, with Golden Visas, Digital Nomad Visas, and other residency schemes available.

Can an American become an EU citizen? ›

How to get a European Passport as an American citizen? There are several ways of claiming or applying for citizenship in Europe as an American: You can have citizenship by descent, by naturalization, by investment, or by exception. Citizenship by descent requires you to have a family history tied to the second nation.

Can an American be an EU citizen? ›

Anyone is eligible for a European passport, provided that they pursue one of the following three options: Get European citizenship through descent or ancestry. Get EU citizenship through naturalization. Apply for a European passport through a citizenship-by-investment program.

Is the US still the leader in AI? ›

The United States and China are vying for global leadership in AI, a technology that is transforming political, economic, and military power. The U.S. currently leads in AI, but China is rapidly catching up and has declared its intent to be the global leader by 2030.

What AI can't replace? ›

What Jobs AI Can't Replace?
  • Chief Executive Officers (CEOs). Even the job of an entrepreneur is one of those that will hardly see robots instead of humans. ...
  • Lawyers. ...
  • Graphic Designers. ...
  • Editors. ...
  • Computer Scientists and Software Developers. ...
  • PR Managers. ...
  • Event Planners. ...
  • Marketing Managers.

What jobs will not be affected by AI? ›

Trade jobs like plumbers, electricians, and HVAC technicians are not going to be replaced by AI. For example, a plumber needs to visit sites to install plumbing or fix pipes.

What is Article 13 of the EU AI Act? ›

High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to users.

What is Article 13 of the AI Act? ›

Article 13 of the AI Act requires that high-risk AI systems should be designed and developed in such a way that their operation is sufficiently transparent so that users can interpret the system's output and use it appropriately.

What is Article 10 of the AI Act? ›

Training, validation and testing data sets shall be relevant, representative, free of errors and complete. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used.

What are the 3 C's of AI? ›

Any intelligent system has three major components of intelligence: one is Comparison, two is Computation and three is Cognition. These three C's form a sequential process in any intelligent action.

What are the 3 rules of an AI? ›

A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

What are the 4 major categories of AI? ›

4 main types of artificial intelligence
  • Reactive machines. Reactive machines are AI systems that have no memory and are task specific, meaning that an input always delivers the same output. ...
  • Limited memory. The next type of AI in its evolution is limited memory. ...
  • Theory of mind. ...
  • Self-awareness.

What did Elon Musk say about AI? ›

“AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production, in the sense that it is, it has the potential — however small one may regard that probability, but it is non-trivial — it has the potential of civilization destruction,” Musk said in his interview with Tucker ...

Is AI replacing human requirement? ›

While AI is designed to replace manual labor with a more effective and quicker way of doing work, it cannot override the need for human input in the workspace. In this article, you will see why humans are still immensely valuable in the workplace and cannot be fully replaced by AI.

Why is AI difficult to regulate? ›

However, AI applications are still emerging, so it is difficult to know or predict what future risks or benefits might be. These kinds of unknowns make emerging technologies like AI extremely difficult to regulate with traditional laws and regulations.

What are the EU four ethical principles regarding AI systems? ›

✓ Develop, deploy and use AI systems in a way that adheres to the ethical principles of: respect for human autonomy, prevention of harm, fairness and explicability.

Do the EU's guidelines for trustworthy AI have the force of law? ›

To that end, the EU ethics guidelines promote a trustworthy AI system that is lawful (complying with all applicable laws and regulations), ethical (ensuring adherence to ethical principles and values) and robust (both from a technical and social perspective) in order to avoid causing unintentional harm.

Which country owns AI? ›

ai is the Internet country code top-level domain (ccTLD) for Anguilla, a British Overseas Territory in the Caribbean. It is administered by the government of Anguilla.

Does the US government use AI? ›

The United States government uses artificial intelligence in the military, intelligence, and law enforcement to help mitigate potential threats. However, the use of machine learning technology largely remains unregulated by the government, although year-on-year spending on AI government contracts continues to increase.

Who is the current leader of AI? ›

IBM is a leader in the field of artificial intelligence.

What is Section 35 of the AI Act? ›

The Commission shall assign an identification number to notified bodies. It shall assign a single number, even where a body is notified under several Union acts.

What is Article 81 EU law? ›

(a) impose on the undertakings concerned restrictions which are not indispensable to the attainment of these objectives; (b) afford such undertakings the possibility of eliminating competition in respect of a substantial part of the products in question.

What is Article 47 of the EU Charter? ›

Everyone is entitled to a fair and public hearing within a reasonable time by an independent and impartial tribunal previously established by law. Everyone shall have the possibility of being advised, defended and represented.

What are the real dangers of AI? ›

There are a myriad of risks to do with AI that we deal with in our lives today. Not every AI risk is as big and worrisome as killer robots or sentient AI. Some of the biggest risks today include things like consumer privacy, biased programming, danger to humans, and unclear legal regulation.

Why AI is not harmful to humans? ›

The AI that we use today is exceptionally useful for many different tasks. That doesn't mean it is always positive – it is a tool which, if used maliciously or incorrectly, can have negative consequences. Despite this, it currently seems to be unlikely to become an existential threat to humanity.

Does AI pose a threat to humanity? ›

“AI could pose a threat to humanity's future if it has certain ingredients, which would be superhuman intelligence, extensive autonomy, some resources and novel technology,” says Ryan Carey, a research fellow in AI safety at Oxford University's Future of Humanity Institute.

What is Elon Musk worried about with AI? ›

Key figures in artificial intelligence want training of powerful AI systems to be suspended amid fears of a threat to humanity. They have signed an open letter warning of potential risks, and say the race to develop AI systems is out of control.

Is AI a threat to us or is AI helpful to us? ›

AI could help people with improved health care, safer cars and other transport systems, tailored, cheaper and longer-lasting products and services. It can also facilitate access to information, education and training. The need for distance learning became more important because of the Covid-19 pandemic.

Can AI become self aware? ›

The CEO of Alphabet's DeepMind said there's a possibility that AI could become self-aware one day. This means that AI would have feelings and emotions that mimic those of humans. DeepMind is an AI research lab that was co-founded in 2010 by Demis Hassabis.

How far can AI take us? ›

In a paper published last year, titled, “When Will AI Exceed Human Performance? Evidence from AI Experts,” elite researchers in artificial intelligence predicted that “human level machine intelligence,” or HLMI, has a 50 percent chance of occurring within 45 years and a 10 percent chance of occurring within 9 years.

What is black box in AI? ›

Black box AI is any artificial intelligence system whose inputs and operations aren't visible to the user or another interested party. A black box, in a general sense, is an impenetrable system. Black box AI models arrive at conclusions or decisions without providing any explanations as to how they were reached.

What is the most common problem with AI solution? ›

These are the most common problems with AI development and implementation you might encounter and ways in which you can manage them:
  1. Determining the right data set. ...
  2. The bias problem. ...
  3. Data security and storage. ...
  4. Infrastructure. ...
  5. AI integration. ...
  6. Computation. ...
  7. Niche skillset. ...
  8. Expensive and rare.

What is the timeline EU AI regulation? ›

European Standards Organisations

20th May 2022: Released first draft standardisation request in support of safe and trustworthy AI. End of June 2022: Draft sent back with amendments and requests for clarification. 5th December 2022: Published draft based on ESO and stakeholder consultation.

Is Europe behind on AI? ›

Europe has fallen behind China, the US and the UK when it comes to AI skills. With ARISA, an Erasmus+ project led by DIGITALEUROPE, twenty leading organisations now intend to reduce the AI skills gap within the next four years.

Does EU copyright law protect AI assisted output? ›

As long as the output reflects creative choices by a human being at any stage of the production process, AI-assisted output is likely to qualify for copyright protection as a “work”.

What is the new EU Cybersecurity Act? ›

The EU Cybersecurity Act

The Cybersecurity Act unifies the EU's cybersecurity rules into a single framework, with ENISA at its core. This means that ENISA can now contribute to operational cooperation and crisis management across the EU, with an EU-wide certification scheme that will: build trust.

When AI rule the world? ›

When AI Rules the World is an investigation and call to action into AI technologies for a nation that does not yet comprehend the full gravity of the AI revolution. The United States is losing the race for AI dominance, and the stakes couldn't be higher.

What are the 5 stages of AI cycle? ›

It has five ordered stages that divide the entire development process into specific, clear steps: Problem Scoping, Data Acquisition, Data Exploration, Modelling and Evaluation.

What will be the latest date to perform the transition to the EU regulation regarding a study running under the clinical trial directive? ›

On 31 January 2022, the Clinical Trials Regulation (CTR) will come into application harmonising the submission, assessment and supervision processes for clinical trials in the European Union (EU). The backbone of the changes brought about by the CTR is the new Clinical Trials Information System (CTIS).

Is the US the world leader in AI? ›

The United States is the clear leader in AI development, with major tech companies headquartered there leading the charge. The United States has indisputably become the primary hub for artificial intelligence development, with tech giants like Google, Facebook, and Microsoft at the forefront of AI-driven research.

Is China ahead in AI? ›

The U.S. currently leads in AI, but China is rapidly catching up and has declared its intent to be the global leader by 2030. To stay ahead of China in AI, the U.S. will need to work with China.

What country ends in AI? ›

ai is the Internet country code top-level domain (ccTLD) for Anguilla, a British Overseas Territory in the Caribbean.

Can an AI own copyright in the US? ›

US Copyright Office: AI Generated Works Are Not Eligible for Copyright.

What are the three cybersecurity regulations in the US? ›

The three main cybersecurity regulations are the 1996 Health Insurance Portability and Accountability Act (HIPAA), the 1999 Gramm-Leach-Bliley Act, and the 2002 Homeland Security Act, which included the Federal Information Security Management Act (FISMA).

What are the two new tech laws that have been approved in the EU? ›

Driving the news: Europe's two new laws — the Digital Markets Act (DMA) and the Digital Services Act (DSA) — place tough constraints on how big tech standard-bearers like Apple, Amazon, Alphabet and Meta handle competition and online content.

What is the EU toolbox on 5G cybersecurity? ›

What is the EU toolbox on 5G Cybersecurity about? The objective of the EU toolbox on 5G Cybersecurity is to set out a coordinated European approach based on a common set of measures, aimed at mitigating the main cybersecurity risks of 5G networks that were identified in the EU coordinated risk assessment report.

Videos

1. THE AI ACT: WHERE ARE WE, AND WHERE ARE WE GOING?
(CPDPConferences)
2. Webinar on the Digital Services Act Package: Gatekeepers in the DSA Package: What about VoD?
(European Audiovisual Observatory)
3. Horizon Europe Info Days 2021 | Cluster 4 | Destination 6
(EU Science & Innovation)
4. Digital Geopolitics in the Asia-Europe Space: Issues, Actors, and Divides. Dr. Jovan Kurbalija
(Centre for EU-Asia Connectivity (CEAC) - RUB)
5. Webinar: Why Is Big Tech So Afraid of the EU's Digital Markets Act and Digital Services Act
(Public Citizen's Global Trade Watch)
6. How can tech companies reinforce and strengthen European democracy?
(EURACTIV)
