Contributed By Beccar Varela
Argentina does not have a specific legal framework or a general law on AI, nor are any bills on the matter currently being discussed in Congress. As a result, AI-related issues must be addressed under the existing legal framework, including the Civil and Commercial Code, approved by Law No 26,994 of 1 August 2015, which is the legal backbone for civil and commercial relationships and sets forth the general principles that will apply to the regulation of AI.
Moreover, when AI is involved in a consumer-provider relationship, Consumer Protection Law No 24,240 must also be complied with.
The intellectual property legal framework in Argentina is governed on the one hand by Patent Law No 24,481, its Implementing Decree No 260/96 and the Guidelines on Patenting issued by the Patent Office. On the other hand, the general copyright regime is governed by Law No 11,723, applicable to works of authorship including software. These two laws must be considered when determining ownership in relation to AI.
Likewise, since AI is a data-driven technology, the Data Protection Law No 25,326 and related Argentine data protection regulations (ADPR) are also at the forefront of the applicable regulations.
Also, the Anti-Discrimination Law No 23,592, which establishes measures against those who arbitrarily prevent the full exercise of the fundamental rights and guarantees recognised in the National Constitution, must necessarily be addressed when dealing with AI.
Additionally, there are several regulations on economic incentives for technology, such as Law No 27,506 for the Promotion of Knowledge Economy, as amended by Law No 27,550 (“Promotion of Knowledge Economy Law”), Law No 23,877 on the Promotion and Encouragement of Technical Innovation, Law No 25,613 on the Import Regime for Scientific and Technological Research Supplies and Entrepreneurship Law No 27,349.
Industry Innovation
Among the key industry applications of AI, those standing out for their rate of growth relate to health, the financial sector, legal services, the agricultural and food industry, the public sector, the environment and the satellite industry.
In the field of healthcare, Amanda Care, a local start-up company, developed a tool for the virtual assistance of patients that optimises monitoring and follow-up by medical staff using already existing messaging tools. Amanda Care uses machine learning to maintain a natural conversation with patients and identify the best contact strategy according to their preferences. Likewise, Entelai is a renowned Argentine company that performs automated analysis of medical images using AI, providing physicians and patients with standardised, easy-to-read reports that are a key input for more accurate detection of abnormalities in patients with conditions such as multiple sclerosis.
In the field of agriculture and environmental matters, Kilimo, with a subsidiary in Argentina, offers solutions to optimise freshwater use in agriculture through customised irrigation programmes that avoid water waste, which in turn reduces costs. Kilimo’s machine learning model allows agricultural producers to estimate a crop’s water consumption over a seven-day period based on field data, satellite images and large databases of historical data, and offers periodic advice on the amount of irrigation needed.
Also, Dymaxion Labs, another start-up company based in Buenos Aires, offers a range of solutions based on AI that process large amounts of geospatial data in satellite images and climate data by using IoT sensors and machine learning to provide insights on agriculture, urban development and climate change for the process of decision-making.
When it comes to tackling climate change, the Argentinian start-up Pachama combines machine learning with satellite and airborne observations to measure carbon captured in forests, with up to 90% accuracy in comparison to traditional models.
Another burgeoning branch of AI is risk calculation. Financial risk assessment and insurance companies have adopted big data analytics to minimise risks by improving calculations, automating claim management, risk assessment and fraud identification, speeding up client approval, managing scores online, and segmenting clients according to their historical behaviour and risk level in order to present a tailored offer.
In this vein, local company S4 has developed an AI that allows satellite measurement of a crop’s evolution during the season for various levels of geographic aggregation, comparing it with its own history and with the current and historical average of the area to know objectively how crops evolve during the season. It allows the visualisation of weather forecasts, high spatial resolution images, monitoring of agricultural surface water, configuration of alert systems and integration with a CRM. This allows satellite identification of the best technological approach for a specific lot (genotype, density, sowing dates, fertiliser doses, etc).
In the food industry, NotCo, a Chilean food company operating in Argentina which formulates plant-based products, has developed an AI program called Giuseppe which seeks to replicate the molecular composition of animal foods by determining the vegetables that would create a food with similar taste, texture and smell.
Innovation in Legal Services and the Judiciary System
There has also been progress in the application of AI in legal services. An Argentinian cognitive search engine, Sherlock Legal, has been developed to answer questions asked in natural language from the platform’s database of case law. Questions are syntactically analysed and interpreted by means of an algorithm to find the answer within the summaries of the rulings and to extract the most relevant related fragments. In addition to this, after selecting a court ruling of interest, the AI offers documents similar to the selected topic.
Along the same lines, the company Qanlex developed a system that filters millions of lawsuits to provide capital to pursue meritorious claims in all continental law countries, with a special focus on Latin America.
In the public sector, the Public Prosecutor's Office of the City of Buenos Aires developed Prometea, a virtual assistance system for the drafting of judicial documents. Prometea can detect the appropriate judicial response in an average of 20 seconds by reading the first and second instance rulings of the Judiciary of the City of Buenos Aires and analysing more than 300,000 documents. Through the search and detection of certain predefined keywords, it then generates a model opinion on how that file should be resolved.
Impact of COVID-19
During 2020, sanitary measures aimed at preventing or mitigating the spread of COVID-19 were implemented. Within that context, several applications were developed and deployed in Argentina at the provincial and national levels, including the national application, CuidAR.
In general, these applications collected citizens’ personal data – including identification data, a scan of the national identity document and health data gathered through self-diagnosis questions – and enabled the geolocation system on users’ devices to allow the authorities to monitor compliance with mandatory isolation. In many cases, these applications also served as a vaccination certificate and/or health passport.
It is worth mentioning that these control apps have been highly criticised, not only for the threat they posed to the individual rights of citizens – in certain cases their use was mandatory as a condition for circulating in public – but also for their rapid implementation with little prior oversight, which resulted in many security flaws.
In this context, the Secretariat of Innovation and Digital Transformation of the City of Buenos Aires developed an artificial intelligence system called IATos, which improves the coronavirus testing strategy and is available through the City of Buenos Aires’ virtual assistant or chatbot (“Boti”), which works through WhatsApp.
Through IATos people can record and send an audio of their cough. Once the audio is received, IATos analyses the sound; if it matches the patterns of positive cases, it recommends that the person be tested for COVID-19.
IATos is built on an artificial intelligence neural network capable of classifying sounds of voice, breathing and coughing, based on machine learning algorithms. It currently predicts with approximately 86% accuracy. To train the network’s recognition system, 140,000 audios of people with positive or negative COVID-19 diagnoses, according to PCR tests carried out in the city government’s testing centres, were collected through Boti. The database of positive and negative coughs collected by the City of Buenos Aires is the largest in the world and is openly available, so that anyone can learn more about this type of system and advance similar projects.
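The precise architecture of IATos has not been published. Purely as an illustration of the general approach such systems take – reducing a raw audio signal to spectral features and classifying it against patterns learned from labelled examples – the following minimal sketch trains a trivial one-feature classifier on synthetic waveforms. All function names and the threshold-based "model" are illustrative assumptions, not the actual IATos implementation.

```python
import numpy as np


def spectral_features(audio: np.ndarray, sample_rate: int = 16000) -> np.ndarray:
    """Reduce a raw waveform to a small spectral feature vector."""
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
    total = spectrum.sum() + 1e-12
    centroid = (freqs * spectrum).sum() / total            # centre of spectral mass
    bandwidth = np.sqrt((((freqs - centroid) ** 2) * spectrum).sum() / total)
    energy = float(np.mean(audio ** 2))
    return np.array([centroid, bandwidth, energy])


def train_threshold_classifier(positive, negative):
    """Fit a toy 'model': a centroid threshold halfway between class means."""
    pos_c = np.mean([spectral_features(a)[0] for a in positive])
    neg_c = np.mean([spectral_features(a)[0] for a in negative])
    return (pos_c + neg_c) / 2.0, bool(pos_c > neg_c)


def predict(audio, threshold, positive_above):
    """Classify a new recording by comparing its centroid to the threshold."""
    centroid = spectral_features(audio)[0]
    return bool(centroid > threshold) == positive_above
```

A real deployment would replace the single spectral centroid with a learned representation (eg, a neural network over mel-spectrograms) and report calibrated probabilities rather than a hard threshold, but the pipeline – featurise, train on labelled audios, predict – is the same.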
All the above are examples of the many developments in AI that are present in Argentina. All in all, AI can be seen impacting many different industries, and its adoption continues to expand rapidly despite the general lack of specific regulation.
As part of Argentina’s digital strategy, in 2016 the Executive Branch of the former administration of President Mauricio Macri (2015–19) developed the National Data Openness Plan (by means of Decree No 117/2016), which made its Open Data Portal available to all citizens. Since then, the portal has offered more than 1,000 datasets from more than 30 public agencies, covering government areas such as foreign affairs, economy, science, technology, agro-industry, energy, population and education, among many others.
Moreover, by means of Resolution No 11-E/2017 of the Secretariat of Information Technologies and Communications, the Big Data National Observatory (BDNO) was created. The BDNO is an interdisciplinary space that depends on the Undersecretariat of Telecommunications and Connectivity that seeks to promote the exchange of research and proposals related to the responsibility of internet intermediaries, the processing of personal data in digital environments and the socio-economic impacts derived from the use of automated information processing technologies.
The AI Plan
Within the framework of the Argentina Digital Agenda (2018) and the National Strategy of Science, Technology and Innovation “Argentina Innovadora 2030” (AI2030), under the Ministry of Science, Technology and Innovation, in June 2019 the government released the National Artificial Intelligence Plan (the “AI Plan”). The plan aims to position Argentina as a regional leader through the generation of policies that contribute to sustainable growth and improve equal opportunities through AI technologies, with an impact on the scientific, technological, socio-economic, political and productive matrix.
The IA Plan proposed a comprehensive and multi-stakeholder approach to encourage the development and adoption of AI in different sectors. Hence, the AI Plan established a commitment to work with the various areas of government, with actors of the productive network, the academic ecosystem, the scientific-technological system, civil society and international organisations.
Although Argentina has been engaged for some years in efforts to develop AI and establish the conditions for its implementation, the AI Plan has not undergone major advances since President Mauricio Macri left office.
The Argentina 4.0 Plan
It should be mentioned, however, that during the current administration of President Alberto Fernández (2019 onwards), in April 2021 the Executive Branch published the Productive Development Plan Argentina 4.0 (the “Argentina 4.0 Plan”). This plan, an initiative of the National Ministry of Productive Development, aimed to promote the incorporation of 4.0 technologies – including AI – into the national production chain. The Argentina 4.0 Plan does not refer to the AI Plan at all.
Prior to that, in December 2019, Decree No 7/2019 created the Secretariat for Strategic Affairs to assist the President in the definition of strategic priorities that should translate into projects with international financing to provide tools at the service of productive and social development and the knowledge economy.
Later, in October 2021, Decree No 740/2021 was issued with the aim of, among other things, adapting the objectives of the Secretariat of Strategic Affairs and setting a new list of objectives. One of its key aims is to propose measures to promote the use of technologies such as AI in the public sector.
The Artificial Intelligence Programme
Moreover, in November 2021, by means of Resolution 90/2021 of the Secretariat of Strategic Affairs, the “Artificial Intelligence Programme” was created in the orbit of the National Directorate of Knowledge Management dependent on the Undersecretariat of Strategic Affairs of the Executive Branch, with the aim of providing support to the Economic and Social Council for the development of activities concerning the promotion of technological skills related to AI.
The AI Plan in the City of Buenos Aires
In the area of the Autonomous City of Buenos Aires, the Secretariat of Innovation and Digital Transformation of the Chief of Cabinet of Ministers also has an AI plan. Its main aims are to take advantage of the benefits that AI offers the city, to consolidate the use of AI in industry and government, and to mitigate the risks derived from the use of AI by promoting its development with a focus on ethical and legal principles.
The national security issue is not specifically addressed in the current executive policies.
There is no specific regulation applicable to AI in Argentina. However, there are pieces of legislation enacted with the aim of financing and promoting the development and implementation of AI and other technologies.
The Promotion of Knowledge Economy Law was passed in 2019 to promote economic activities that apply knowledge and the digitalisation of information, supported by advances in science and technology, to obtain goods, provide services and/or improve processes. It was preceded by Law No 25,922 on the Promotion of Software; however, the new law broadens the promoted activities to include software development, computer and digital services, artificial intelligence, robotics, the industrial internet of things, the internet of things, sensors, additive manufacturing, and augmented and virtual reality.
Upon compliance with certain requirements, the promoted activities enjoy tax concessions in relation to all national taxes, a reduction on the general corporate income tax rate applicable to both Argentine and foreign-source profits to the extent the beneficiaries maintain the payroll, as well as other tax benefits.
In this respect, by means of the Promotion and Encouragement of Technical Innovation Law, the government has also implemented an optional tax credit increase, consisting of an additional computable amount based on the investments made by beneficiaries within the framework of innovation, research and technological projects developed by technological linking units authorised under said law. This also applies to organisations or entities registered in the Registry of Scientific and Technological Organisations and Entities, created by Law No 25,613 on the Import Regime for Scientific and Technological Research Supplies, which prove technical capabilities related to the development of the sectorial activity. Through the Ministry of Production and Labour’s Resolution No 47/2019, AI development is listed as one of the technologies that enable a beneficiary to access this fiscal bond.
There is no proposed legislation being currently discussed in Congress.
Nonetheless, it is worth noting that the AI Plan included the regulation of AI as one of its objectives. In this connection, the AI Plan proposed to use a public sandbox as a tool to overcome regulatory barriers to innovation during the exploration phases, allowing the proposed systems to be tested in real-life situations to analyse benefits and disadvantages, enabling interaction between the public and private sectors, and allowing the intervention of regulatory agencies to carry out an adequate impact and feasibility analysis of the projects.
The role of industry in shaping legislation would be very significant considering that Argentina has a productive ecosystem around AI that includes the presence of the main multinational technology companies operating in Argentina, Argentine unicorn companies, SMEs and start-ups. However, AI regulation does not seem to be the priority of these companies so far.
At a national level, the Secretariat of Public Innovation of the Chief of the Cabinet of Ministers is responsible for designing, proposing and co-ordinating policies for the transformation and modernisation of the state across the different areas of the national government, including its central and decentralised administration. It also determines the strategic guidelines and proposes the regulations needed to achieve inclusive development through digital transformation with a gender focus, with digital inclusion, skills for the jobs of the future, digital government, SMEs and entrepreneurship, and Industry 4.0 as central themes of its work.
The National Committee of Ethics in Science and Technology (Comité Nacional de Ética en la Ciencia y la Tecnología, CECTE) was created in April 2001 by the Secretariat of Science, Technology and Productive Innovation (now the Ministry of Science, Technology and Innovation) as a central part of its digital strategy, to analyse the ethical problems around the use of new technologies and to assess law and public policy proposals for new technology developments. As part of its tasks, the CECTE analyses ethical issues in all fields of research, including the ethical values that concern the work of researchers and research institutions, the training of future scientists, and the evaluation of policy projects, laws and regulations involving scientific research and new technologies from the perspective of ethics in science.
The Secretariat for Strategic Affairs (Secretaría de Asuntos Estratégicos), created to assist the President in the definition of strategic priorities, is in charge of proposing measures to promote the use of technologies such as AI in the public sector. The Secretariat is also the creator and administrator of the Artificial Intelligence Programme, with the aim of providing support to the Economic and Social Council for the development of activities related to the promotion of technological skills related to AI.
In the City of Buenos Aires, the agency in charge of the local AI Plan is the Secretariat of Innovation and Digital Transformation of the Chief of the Cabinet of Ministers.
The AI Plan described AI as the discipline focused on developing computational systems capable of carrying out tasks that would normally require human intelligence, such as visual and voice recognition, decision-making or translation between languages. According to the AI Plan, AI moves from conventional deterministic computing to the solution of non-deterministic problems of greater complexity, allowing pattern recognition and data linking in open and dynamic environments – what is known as machine learning. The ability to interpret unstructured information, combined with a significant increase in processing capabilities, gives rise to this new revolution in which computers interact directly with humans on a peer-to-peer basis. AI can be applied to a variety of areas, including expert systems, natural language recognition, image recognition, big data and robotics.
In addition, the Ministry of Labour and Production, in its Resolution No 47/2019, refers to AI as a technology based on the development of algorithms that allow computers to process data at an unusual speed (a task that previously required several computers and people), also achieving automatic learning.
Finally, the Argentina 4.0 Plan of the current administration defines AI as the development of computational models with algorithms capable of processing information at high speed in an adaptive and automatic way, mutating and evolving as the data they incorporate adds new information. In this way, they consolidate autonomous learning systems with characteristics of the human intellect. These algorithms "learn" from the information they incorporate and improve their predictions and responses. AI algorithms are used in predictive models for decision-making, facial recognition and natural language processing, among other applications.
In the absence of an AI-specific regulation, the regulatory objectives can be found in both plans published by the Executive Branch of the former administration (the AI Plan) and the present one (the Argentina 4.0 Plan).
Objectives of the AI Plan
Objectives of the Argentina 4.0 Plan
The matter is not applicable in this jurisdiction.
The matter is not applicable in this jurisdiction.
Within the National Office of Information Technology (Oficina Nacional de Tecnologías de Información, ONTI), the Directorate of Technological Standards (DTA) is responsible for issuing technical opinions on the technological projects initiated by the agencies of the National Public Sector (NPS) at all levels.
The ONTI advises the NPS in the elaboration of their technological projects and provides training courses together with the National Institute of Public Administration (Instituto Nacional de la Administración Pública, INAP), elaborating the Technological Standards for Public Administration (Estándares Tecnológicos para la Administración Pública Nacional, ETAP).
The ETAP standards are based on the study of information technologies available in the market and their applicability in technological innovation projects carried out in the NPS agencies, to promote the standardisation and interoperability of state information systems.
The technological innovation department within ONTI is constantly researching new information and telecommunications technologies that can be applied in projects of the National Public Administration to optimise public management.
To date, there are no standards defined by private entities or certification authorities applicable to AI.
Although there is no specific regulation in Argentina addressing algorithmic bias, there are certain scenarios (especially in some industries) where such bias can be deemed as an act of discrimination in violation of the fundamental rights of consumers, as set forth in the Argentine National Constitution and the Consumer Protection Law.
In that sense, both the National Constitution, in its Section 42, and the Consumer Protection Law, in its Section 8 bis, grant consumers the right to receive fair and equitable treatment. Therefore, any kind of discrimination or subjective differentiation between consumers (by race, gender, sexual orientation, age, disability, social/economic background, etc) who are objectively in the same or a similar situation is a potential risk and source of liability for the company making these differentiations.
Moreover, recently the Secretariat of Commerce (SC) issued Resolution No 139/20 acknowledging the rights of consumers to fair and equitable treatment and setting forth certain parameters to determine which consumers should be considered to be in a “hypervulnerability” situation, for reasons involving their age, gender, physical or mental status or social, economic, ethnic or cultural circumstances. Although such regulation does not include any specific burden to be undertaken by the companies, it sets forth the standard for administrative claims brought by these categories of consumers and their “hypervulnerability” is a concept that is starting to be picked up by courts in their rulings involving consumers.
In addition, through Resolution No 1033/2021, the SC issued the parameters to be followed for remote customer service and general communications with consumers, which also include obligations of fair and equitable treatment, respect for self-perceived gender identity, and the consumer’s right to request the attention of a natural person (by telephone, chat, email, etc).
In light of the foregoing, even though further investigation is required to determine or demonstrate bias in the algorithms used in connection with consumer contracts, products or services, companies ought to be careful with the parameters taken into account when profiling or evaluating their customers. They should make sure these parameters are as objective as possible and refrain from including parameters that could be considered discriminatory, unless they have strong arguments and proof that such parameters are absolutely necessary, directly linked with their business purpose and equitably applied across their customers.
Specifically, certain industries (eg, finance, banking, insurance, health, advertising) rely heavily on profiling their customers to provide an efficient (and profitable) service. In those industries, these efforts must focus especially on balancing the legitimate interest of the company and the importance of profiling, paying special attention to not including or following parameters that could be considered discriminatory to consumers in the sense described above. Where subjective parameters must be used (eg, due to the nature of the contract), the company should have a proper justification with a direct and relevant link to the contract, as well as proof that those parameters are uniformly applied to all customers in the same or similar conditions.
Definition of Biometric Data
The Data Protection Law defines personal data as information of any kind referring to determined or determinable individuals or legal entities. The DPA in its Resolution No 4/2019 defines biometric data as personal data obtained from a specific technical processing, relating to the physical, physiological, or behavioural characteristics of a human person, which allow or confirm their unique identification.
Consequently, biometric data (such as iris scans, fingerprints, voice patterns, scans of hand or face geometry) of Argentine data subjects will be considered personal data under the Data Protection Law. Furthermore, the DP Criteria states that biometric data will be considered sensitive data only when it can reveal additional data whose use may result in potential discrimination against the data subject.
Sensitive data has enhanced protection under Data Protection Law. Therefore, no one can be forced to provide sensitive data; however, the DPA has interpreted in various opinions that sensitive data can be processed if the interested party has voluntarily given prior, express and informed consent.
According to the Data Protection Law, personal data can be collected and processed only with the prior, express and informed consent of the data subject (with some exceptions). Consent must be given in writing or by other means that can be equated to writing, according to the circumstances. Thus, means of expressing consent other than in writing (eg, through electronic means) must produce and record sufficient evidence so that it can be demonstrated that consent was given in compliance with the formalities required by the Data Protection Law.
The DPA in its opinions Nos 242/05, 01/09, 02/09, 15/09, 14/10 and 16/11 has considered the processing of biometric data lawful, provided it is strictly necessary for the intended purpose, verifying the non-infringement of the privacy of individuals and provided the provisions set forth in the Data Protection Law are complied with.
Anonymisation of Biometric Data
On another note, the Data Protection Law defines disassociation or anonymisation of data as any processing of personal data in such a way that the information cannot be associated with an individual. Personal data can be rendered anonymous by removing information that would allow the recipient to identify the data subject. Consequently, such information will not be considered “personal data” and, therefore, will not be shielded by the ADPR.
In this regard, the DP Criteria sets forth that a person will not be considered identifiable when the procedure to achieve its identification requires the application of disproportionate or unfeasible measures or deadlines. Thus, the concept of anonymisation or disassociation under the Data Protection Law does not necessarily imply that the data can no longer be associated to an identifiable person in an absolute irreversible manner, but that for data to be considered anonymous the process to re-identify the data subject to whom that data belongs should require unreasonable time, effort or resources.
For clarification purposes, please note that anonymisation is different from pseudonymisation, where data is managed and de-identified by procedures by which personally identifiable information fields within a data record are replaced by one or more artificial identifiers, or “pseudonyms”. Pseudonymised data can be restored to its original state with the addition of information which then allows individuals to be re-identified, while anonymised data can never be restored to its original state. Thus, pseudonymisation necessarily requires consent from the data subjects unless one of the exceptions to the consent rule apply.
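The distinction between the two techniques can be sketched in a few lines of code. The snippet below is purely illustrative – the `national_id` field, the function names and the use of a separately stored key table are assumptions, not a prescribed method. Pseudonymisation replaces the identifier with an artificial token while keeping the re-identification key apart; anonymisation discards the identifier so the link cannot reasonably be restored.

```python
import secrets


def pseudonymise(record: dict, key_store: dict) -> dict:
    """Replace the direct identifier with a random pseudonym; the mapping
    is kept in a separate key store, so re-identification remains possible."""
    pseudonym = secrets.token_hex(8)
    key_store[pseudonym] = record["national_id"]  # re-identification key, stored apart
    out = dict(record)
    out["national_id"] = pseudonym
    return out


def reidentify(record: dict, key_store: dict) -> dict:
    """Restore the original identifier using the separately held key store."""
    out = dict(record)
    out["national_id"] = key_store[record["national_id"]]
    return out


def anonymise(record: dict) -> dict:
    """Strip the identifier entirely; no key is retained, so the link to the
    data subject cannot be restored by reasonable means."""
    out = dict(record)
    del out["national_id"]
    return out
```

Under the ADPR the practical consequence follows directly from this structure: because the key store allows `reidentify` to succeed, pseudonymised records remain personal data and generally require consent, whereas records passed through `anonymise` fall outside the definition of personal data, provided indirect identifiers do not permit re-identification.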
Legal Issues
Thus, the lawful collection and processing of biometric data may be complex considering the informed-consent rule of the Data Protection Law. First, informed consent may be difficult to achieve, as the purposes for which biometric data will be processed may be hard to determine at the outset, and the uses ultimately given to the biometric data may be difficult to control.
In this connection, it is relevant to mention that some AI systems used to validate and authenticate the identity of the data subjects generally collect biometric data which may then be anonymised and retained to improve the system itself, such as updating and training the tool, for automatic biometric recognition.
Personal data, or any other data, indicators, markers or privacy- and context-sensitive information which may allow the anonymisation process to be reversed, should not be kept without the data subject’s consent to that extent. Consequently, to keep anonymised biometric data, anonymisation techniques should be implemented to remove or conceal personal identifiers and prevent identification (whether direct or indirect) of the data subjects.
Additionally, processing large amounts of data, including anonymised biometric data, presents a risk of re-identification through the association of the data being processed. Furthermore, the use of anonymised biometric data for the training or improvement of the tools may lead to discriminatory outcomes due to algorithmic bias if no regular audits are conducted of the algorithms and their behaviour.
Despite being one of the most accurate identity validation mechanisms, the use of recognition techniques based on biometric data can be dangerous and severely affect the rights of the individual, depending on who controls the information and for what purposes it is used. Lack of transparency in its operation hinders the traceability of decisions and undermines the possibility of control and accountability. Therefore, the design of a tool with these characteristics should contemplate privacy by design and be audited during its use to control deviations or biases that may be generated in the algorithms as the tool is fed with new data.
From a consumer protection perspective, Resolution No 1033/21 of the SC grants consumers the right to receive customer support from a natural person (whether in person, by email, chat or telephone). In this regard, the resolution prohibits the use of AI, bots, automatic responses to frequently asked questions, explanatory videos, answering machines or other similar methods as the only means of contact with consumers.
Irrespective of the above, there are no specific rules applying to interactions with consumers through chatbots or other technologies (as a replacement for natural persons).
In this regard, general rules of the Consumer Protection Law related to communications with consumers (by any means) would apply. In particular, under Section 4 of the Consumer Protection Law, providers have the duty to provide consumers with clear, complete and accurate information on the characteristics of the goods and services offered and marketing conditions. In this context, settings, malfunctions, or failures of the chatbot/technology that could result in the provision of incomplete, false or confusing information may result in a breach of the Consumer Protection Law by service providers.
Additional issues could arise when entering into contracts with consumers through chatbots or other technologies in cases of ambiguous or non-standardised answers from consumers (eg, if the chatbot automatically moves forward and formalises the agreement, consumers could claim that consent was not properly granted).
In any case, the provider will be liable to consumers for all information provided, and all actions or activities carried out, by the chatbot/technology.
More generally, the AI Plan referred to "Explainable Artificial Intelligence", in which the result – and the reasoning by which the automated decision was reached – can be understood by a human being. As far as AI systems for automated decision-making are concerned, transparency is key to auditing the decision-making process, its inferences and assumptions, and to clearing out biases or false correlations.
AI developments based on machine learning use data as input – or raw material – that is processed through algorithms, to obtain an output – or result – (recommendations, predictions, decisions) based on statistical probabilities. Systems are trained with data until a model is achieved that effectively represents the reality that is to be reflected. Therefore, the selection of the training data set is highly relevant, both with respect to its quantity and quality, and so is the method of data processing because there could be biases in them that are transferred to the result.
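The input–model–output flow described above can be illustrated with a deliberately tiny "model". This sketch uses a toy nearest-neighbour classifier (pure Python, no ML library); the labels and feature values are invented, but it shows the point made in the text: the output is entirely driven by the training data, so a skewed dataset produces skewed results.

```python
# Toy sketch of the data -> algorithm -> output pipeline described above.
def train(examples):
    # examples: list of (features, label); the "model" is the stored data.
    return list(examples)

def predict(model, features):
    # A 1-nearest-neighbour rule: the closest training example decides
    # the outcome, so any bias in the training set transfers directly.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(model, key=lambda ex: dist(ex[0], features))
    return nearest[1]

model = train([((0, 0), "reject"), ((1, 1), "approve")])
print(predict(model, (0.9, 0.8)))  # the nearest training example decides
```

Selecting a larger or better-balanced training set changes the model's behaviour without any change to the algorithm itself, which is why the text stresses both the quantity and quality of training data.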
The use of automated decision-making procedures based on personal data presents the risk of erroneous or discriminatory outcomes due to bias in the algorithm's analysis of information. In this sense, compliance with the processing principles contained in the ADPR concerning the quality of the data fed into the decision-making tool is especially relevant.
ADPR establish, among other relevant principles, that:
The processing of data in violation of the principles of ADPR may make the data controller liable to administrative sanctions and/or legal actions of the data subjects and could also expose the data controller to complaints of discriminatory treatment.
Despite the lack of a specific legal framework, ADPR offer guidance regarding automated decision making. According to criterion No 2 of the DP Criteria, the data subjects shall have the right to request from the data controller an explanation of the rationale of the decision based solely on automated data processing that may negatively affect them. So, transparency is the key.
Certain AI techniques are characterised as "black boxes", where it is not possible to understand how the system infers its result. Because these automated decision systems are based on deep learning and complex neural networks, the biases present in the data, the method by which the data are collected and the way the algorithms are developed pose an enormous risk of discrimination against some social groups.
If the technology is biased, there is a high risk that automated processing – without any instance of human review – will lead to unfair outcomes that reinforce pre-existing prejudices against minority groups.
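One common technique for the regular audits mentioned above is to compare favourable-outcome rates across groups. The sketch below is illustrative only: the grouping, data and any threshold applied to the resulting ratios are assumptions, not a legal standard under Argentine law.

```python
# Illustrative audit sketch: ratio of each group's favourable-outcome
# rate to the best-performing group's rate. Ratios well below 1.0
# suggest possible disparate impact and warrant human review.
def disparate_impact(outcomes):
    """outcomes: dict mapping group -> list of booleans (favourable?)."""
    rates = {g: sum(v) / len(v) for g, v in outcomes.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

audit = disparate_impact({
    "group_a": [True, True, True, False],   # 75% favourable
    "group_b": [True, False, False, False], # 25% favourable
})
print(audit)
```

A check like this does not explain *why* the model diverges between groups; it only flags the divergence, which is where the human review and transparency obligations discussed in the text come in.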
The mitigation of the risk of biased technology was specifically addressed in the AI Plan, which stressed the need for an ethical vision to be incorporated from the design stage, training those who develop or research these technologies on the existing regulations, on the weight of the social consequences they may cause, and on the difficulty of reversing those consequences once the implementations have been deployed.
As far as liability issues are concerned, due to the lack of specific regulation on AI, the Civil and Commercial Code's rules regarding liability apply.
Sections 1757 and 1758 of the Civil and Commercial Code refer to liability derived from the intervention of things and from activities that are risky or dangerous by nature. According to these provisions, the owner and the guardian of the thing (the AI system) are concurrently liable for the damage caused by the risk or defect of the thing, as well as for activities that are risky or dangerous by nature, by the means used or by the circumstances of their execution. Liability is objective, and neither administrative authorisation for the use of the thing or the execution of the activity, nor compliance with prevention techniques, is exonerating.
The guardian is defined as the person who, by themself or through third parties, exercises the use, direction and control of the thing, or who benefits from it. In the case of a risky or dangerous activity, the person who performs it, uses it or takes advantage of it, by themself or through third parties, is liable.
In view of the above, if damage is caused by a risk or defect in the AI, or if the AI is considered an activity that is risky or dangerous by nature, the above provisions will apply.
Moreover, Section 1710 of the Civil and Commercial Code sets forth a general duty to prevent harm, which would require the person controlling the AI system to prevent it from causing an unjustified harm, to adopt the reasonable measures to prevent a harm from occurring or to reduce its magnitude and to not aggravate the harm if occurred.
From a Consumer Protection Law perspective, Section 40 determines that if the damage to the consumer results from the defect or risk of the thing or the provision of the service, the producer, the manufacturer, the importer, the distributor, the supplier, the seller and whoever has put their trade mark on the thing or service will be jointly liable. The carrier will be liable for damage caused to the thing during the provision of the service. Those who prove that the cause of the damage was alien to them shall be released in whole or in part. This provision is particularly relevant when using AI in the context of consumer relationships.
AI in health has immense potential and may be used in countless ways to contribute to easier, more accurate, faster, cheaper and better diagnosis and treatment. However, to avoid the opposite effect, the algorithm used in the AI must be trained carefully: what matters is not only the quantity of data with which the AI is trained but also its quality.
AI is increasingly being used in the Argentine healthcare field, mainly as a tool to support diagnosis. A lack or deficit of good training could lead to algorithmic bias (for example, an x-ray analysis software trained without x-rays reflecting certain ethnic groups) that may result in a wrong diagnosis, which in health may mean a wrong treatment – or, in the worst case, no treatment when needed. Therefore, accuracy in the training of the AI and reducing algorithmic bias as far as possible are vital when using AI in health.
In Argentina there is currently no specific regulation that applies to AI in health. Therefore, the provisions issued by (i) the National Administration of Medicines, Food and Medical Devices (Administración Nacional de Medicamentos, Alimentos y Tecnología Médica, ANMAT), as the enforcement authority for all activities related to medicines, food, cosmetics and medical devices, and (ii) the Ministry of Health would apply to the use of AI in health.
When used in health, AI may be considered a medical device. ANMAT's definition of a medical device comprises articles or systems used or applied for prevention, diagnosis, treatment, rehabilitation or contraception that do not achieve their main purpose in human beings by pharmacological, immunological or metabolic means, but which may be assisted in their function by such means.
Thus, software such as AI could fit into the medical device category. Moreover, most countries consider that software is a medical device when it is:
Therefore, if the AI has any of the purposes indicated in the medical device definition (ie, if it fits into the medical device category), it would be considered software as a medical device (SaMD).
For a SaMD to be commercialised in Argentina, the Argentine regulation requires the manufacturer to (i) obtain ANMAT prior authorisation as manufacturer or importer, depending on the case, of medical devices, and (ii) get the SaMD registered, prior to starting commercialisation.
Depending on the type of risk that the SaMD represents, as any other medical device, it could be classified as Class I, II, III or IV by the ANMAT and, based on its classification, it shall comply with the specific requirements of each category, which include:
ANMAT has already approved and registered some SaMD with AI that are currently being used in the medical field. However, these AIs are used to facilitate or assist physicians in their practice. In Argentina, the use of autonomous AI (ie, AI providing a diagnosis or indicating a treatment without the intervention of a physician) is not yet common, since it could clash with current regulations under which only doctors are allowed to practice medicine. The question is: would an autonomous AI be "practicing medicine" in the terms of local regulation? The answer is not yet clear.
Finally, some general rules may apply to the liability of those who use AI. Such is the case of consumer protection rules that may apply to any owner, developer, manufacturer, importer or company using such technology, who will be liable for damages caused by it. This means that, for example, if software trained to diagnose a certain disease fails in the diagnosis, it may give rise to a claim for damages against its developer, manufacturer, importer or the company using the technology. In addition, the physician who uses the AI would still be responsible, since they have the final word on the treatment or diagnosis; however, such responsibility would be governed by the civil and commercial rules on the professional liability of liberal professions such as medicine.
The National Ministry of Health, through the National Digital Health Network, is working on implementing a Centralised Electronic Health Record throughout the country.
Argentina has no comprehensive AI regulations applying to financial services companies. However, as of today, certain regulations do address the use of AI by financial services companies.
For example, through Decree No 1501/2009, the Federal Executive Branch authorised Renaper to implement digital technologies to verify the identity of Argentine citizens and foreign nationals. Renaper developed different technologies and applications to offer services to authenticate and validate people's National Identity Card (Documento Nacional de Identidad, DNI) – for example, through fingerprint images, photographs, DNI data and multiple validation systems – implementing for such purposes the Digital Identity System (Sistema de Identidad Digital, SID). According to the SID terms and conditions, its platform provides remote identity validation services in real time, using biometric authentication through facial recognition between a photograph taken from the user's mobile device and the photograph of the DNI stored in Renaper's databases.
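At a high level, facial-recognition matching of this kind is often implemented by comparing two embedding vectors against a similarity threshold. The sketch below is a hedged illustration only – it is not Renaper's or SID's actual method, and the vectors and threshold are invented:

```python
# Illustrative only: face matching as cosine similarity between a live
# capture embedding and a stored reference embedding.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def same_person(live_embedding, stored_embedding, threshold=0.9):
    # The threshold trades off false accepts against false rejects;
    # choosing it is a policy decision, not just a technical one.
    return cosine_similarity(live_embedding, stored_embedding) >= threshold

print(same_person([0.2, 0.8, 0.1], [0.21, 0.79, 0.12]))
```

The threshold choice illustrates why accuracy claims about biometric systems need scrutiny: lowering it increases false matches, raising it increases wrongful rejections, and both error types can fall unevenly across population groups.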
By means of Resolution No 30-E/2017 (as amended and complemented by Resolution 76/2019), the Financial Information Unit (Unidad de Inteligencia Financiera, UIF) established that within the applicable due diligence for customer identification and “know-your-customer” policy, the financial institutions may carry out the identification process by electronic means (as a substitute for physical presence) by using rigorous biometric techniques or equally rigorous, storable and non-manipulable alternative technological methods, in accordance with certain very specific provisions.
Moreover, the Argentine Central Bank (Banco Central de la República Argentina, BCRA) also issued a similar resolution during 2021, by means of which it established that ATMs must be retrofitted with fingerprint readers so that consumers have the option of validating their identity using biometric data to carry out transactions. The BCRA has established a schedule of interoperability and use of all ATMs by the end of 2022.
On the other hand, regarding financial indebtedness, there are certain regulations issued by the BCRA that establish some applicable specific evaluation methods for the financial institution to decide the credit it may grant to the customer. Those methods are as follows.
Both techniques must be based on the variables that the financial institution considers relevant to measure the risk of uncollectibility associated with each debtor and type of credit, and may use the same type of information; for these purposes, financial institutions use AI mechanisms. The methodology and information used to replace proof of income through specific documentation must ensure that the evaluation of repayment capacity is incorporated into the result of the "screening" or "credit scoring" used to infer credit behaviour (the probability of repayment of obligations in the future).
For such purposes, financial entities should only request financial information, and may not ask any question that could seem discriminatory or that is not reasonably relevant to the credit scoring process (eg, racial or ethnic origin, political opinions, religious, philosophical or moral beliefs, trade union membership, or data concerning health or sex life). The methodologies must allow the client's income level to be estimated.
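The constraint described above – scoring only on financial variables and rejecting sensitive attributes outright – can be sketched as follows. This is a toy illustration: the field names and weights are invented, and a real scoring model would be statistically fitted rather than hand-weighted.

```python
# Hedged sketch: a scoring step that refuses sensitive attributes before
# computing a toy repayment score (weights are invented for illustration).
SENSITIVE = {"ethnicity", "religion", "health", "union_membership",
             "political_opinion", "sex_life"}

def credit_score(applicant: dict) -> float:
    if SENSITIVE & applicant.keys():
        raise ValueError("sensitive attribute supplied to scoring model")
    # Toy weighted sum over financial variables only.
    return (0.5 * applicant.get("income", 0) / 1000
            + 0.3 * applicant.get("years_employed", 0)
            - 0.8 * applicant.get("defaults", 0))

print(credit_score({"income": 2000, "years_employed": 4, "defaults": 1}))
```

Note that excluding sensitive fields from the input does not by itself eliminate discrimination: a model can still infer protected characteristics from correlated financial variables, which is one reason the audits discussed earlier in the chapter remain necessary.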
There are no regulations at any level, nor any draft bills filed in Congress, in connection with the regulation of autonomous vehicles. The country's domestic legislation has focused on regulating driving by a human driver, leaving out the possibility of it being carried out by a deep learning program. Thus, the Traffic Law No 24,449 and its amendments establish the principles for the use of public roads and the application of rules on people, animals and land vehicles.
Consequently, when it comes to liability for the use of AI in autonomous vehicles, we must resort to the Civil and Commercial Code, which includes a rule for damages caused by the circulation of vehicles. Since "vehicles" is such a broad term, it could be considered to apply to autonomous vehicles too.
In this connection, Section 1769 of the Civil and Commercial Code states that Sections 1757 and 1758, referring to liability derived from the intervention of things, apply to damages caused by the circulation of vehicles. Liability is objective, meaning that neither fault nor negligence is relevant, and the owner or guardian of the autonomous vehicle is concurrently liable for the damage caused by the risk or defect of the thing, as well as for activities that are risky or dangerous by nature, by the means used or by the circumstances of their execution. Administrative authorisation for the use of the thing or the execution of the activity, or compliance with prevention techniques, is not exonerating.
The guardian is defined as the person who, by themself or through third parties, exercises the use, direction and control of the thing, or who benefits from it. In the case of a risky or dangerous activity, the person who performs it, uses it or takes advantage of it, by themself or through third parties, is liable.
All in all, the owner and the person in charge of the maintenance of the AI, as the guardian, would be concurrently liable for the damages caused by the autonomous vehicle.
In the automated phases of employee hiring, machine learning is used to match resumes with the skills required for open positions and to review job applications faster than could be done manually. If job descriptions are well defined, so that the profile and skills needed are known exactly, AI can optimise the recruitment process. Among the benefits of this technology are a faster recruiting process, the ability to review more resumes than a person could, and more efficient job searches.
Concerning the potential risks, we understand that the use of AI could result in discrimination, depending on how the program is set up. For instance, if the algorithm makes decisions based on a biased perspective or rejects candidates with certain protected characteristics, those candidates could be considered to have been discriminated against.
Anti-Discrimination Law No 23,592 states that whoever arbitrarily impedes, hinders, restricts or somehow diminishes the exercise of rights on an equal basis shall be compelled to render the discriminatory act ineffective, cease its performance and repair the material and moral damage caused. The law contains a list of protected characteristics, but the list is illustrative, not exhaustive.
Therefore, a candidate who believes they have been discriminated against during a recruiting process could file a claim based on this law, requesting the prospective employer to repair the damages caused by such act; a candidate could even request that the decision be left without effect, which would in practice mean that the employer should hire that candidate. In such a case, the employer should be in a position to produce evidence that, in fact, the decision not to hire that person had nothing to do with the protected characteristic invoked, but that it was based on other aspects (such as another candidate having more experience or more skills for the position).
AI comprises two main components – software and datasets – which are processed to produce human-like decision-making or the performance of tasks. To assess the impact of intellectual property on AI, one must look to the general IP legal framework.
Invention patents are governed by the Patent Law, its Implementing Decree No 260/96 and the Guidelines on Patenting issued by the Patent Office. Invention patents are exclusive property rights that the law grants to inventors who provide a new, non-obvious, industrial and lawful solution to a technical problem. On the other hand, the general regime of copyright law is governed by the Copyright Law applicable to works of authorship including software and databases.
Without prejudice to the possibility of protecting AI systems under copyright or patent systems, currently one of the problems that arise is related to the possibility of determining authorship of works or inventions created by AI. Both the Patent Law and the Copyright Law refer to inventors and authors as human persons – ie, the quality of inventor or creator is reserved for natural persons.
The Copyright Law states that the holders of the intellectual property rights are:
In other words, a computer program or software will, ultimately, always have human authorship.
In turn, the Patent Law defines "invention" in its Section 4 as "all human creation". Section 8 stipulates that the right to a patent shall belong to the inventor or their successors in title. Likewise, Section 9 provides that, unless proven otherwise, the natural person or persons designated as such in the patent application shall be presumed to be the inventor. This Section settles the issue by expressly restricting inventorship to natural persons. There is no possible interpretation that would include machines or systems as inventors.
Neither the Patent Law nor the Copyright Law mentions – nor could it be construed from them – that an AI system could qualify as the holder of intellectual property rights. They refer only to natural persons or legal entities.
There are no current bills being addressed by the Congress regarding AI protection under copyright or patent law.
The judicial procedural codes in Argentina do not provide for a discovery mechanism (or for mechanisms assimilable to discovery) during the probatory stage. However, discovery is often used in the context of arbitration procedures (primarily in international arbitration).
E-discovery is a form of discovery in which the process of identification, collection and production of evidence to comply with a request for information in a lawsuit or investigation is done electronically. The volume of electronic data that is typically delivered during e-discovery creates a fertile ground for the application of AI solutions, as manually reviewing the information may be inefficient in terms of time and cost.
Tools that use AI rapidly process information to provide a high-level understanding of the datasets, including the identification and correlation of key events, concepts, individuals and timeframes of the dispute.
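A very simplified version of the kind of processing described above – surfacing key concepts across a document set – can be sketched with plain term counting. A real e-discovery platform would use far richer models (entity recognition, clustering, timeline extraction); the documents and stopword list below are invented for illustration.

```python
# Illustrative sketch: rank terms across a document set to surface key
# concepts, the most basic building block of e-discovery analytics.
from collections import Counter

def key_terms(documents, stopwords=frozenset({"the", "a", "of", "and"}), top=3):
    counts = Counter()
    for doc in documents:
        counts.update(w for w in doc.lower().split() if w not in stopwords)
    return [term for term, _ in counts.most_common(top)]

docs = ["The contract breach occurred in March",
        "Emails discussing the contract and the breach"]
print(key_terms(docs))  # the most frequent non-stopword terms
```

Even this trivial ranking shows why AI-assisted review scales where manual review does not: the same pass over millions of documents would take a human team weeks, while the statistical summary is computed in seconds.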
The implementation of AI may have a huge impact on a company's competitiveness, but it is also a known risk that may give rise to liability, including personal liability. The Board of Directors must be prepared to mitigate and avoid the damage that AI may cause in a decision-making process, considering, among others, social, labour and environmental factors. The Board should take special care with undetected bias that may be pre-programmed into the AI and have a direct effect on its decision-making.
In this connection, the Board should be given a basic understanding of AI to help it better decide on the safeguards that should be in place to implement and monitor AI. The implementation of an AI governance programme to detect potential risks, effectively monitor and handle the implementation, and address any problems is also highly relevant.
Such a programme would help the Board follow the development of AI through all its stages (pre-design, design, development and deployment), document findings and conduct routine audits; this will also be useful to keep records of efforts to mitigate the bias and harms of AI systems and to demonstrate accountability before the authorities.
Cybersecurity
Cybersecurity has become one of the biggest issues of recent years. Networks have become more complex, increasing the number of points of failure that can be targeted by attackers. Identifying these points of failure is quite complex, and AI can be of great help in analysing network traffic and learning to recognise patterns that suggest potential intrusions or flaws.
The Metaverse
AI plays a fundamental role in the emerging metaverse as a digital world that enables immersive experiences, not only from a product perspective, but also by making the metaverse more inclusive. From a product perspective, AI will be key in creating the digital environment in which social interactions will occur. While the metaverse promises highly immersive experiences, this can be a barrier for people with low digital or other capabilities. AI for accessibility (such as automatic translation or image recognition for people with visual disabilities) could guarantee access to everyone. These are just a few examples of how AI can contribute to the metaverse.
5G Networks
By the end of 2021, the National Communications Agency (Ente Nacional de Comunicaciones, Enacom) defined the spectrum frequencies that will be used for 5G networks; however, it is not yet known when the spectrum will be bid for, or under what conditions. Spectrum bidding for 5G networks is anticipated because its main characteristics (high download speed, low latency and a multiplicity of simultaneously connected devices) would allow connectivity services to be accelerated and strengthened, and enable the implementation of AI, the internet of things (IoT), process automation and more.
Works of Art and Workforce Tools
Other trends regarding AI applications include (i) creative AI for the development of works of art, software, designs, etc, and (ii) the use of AI to augment the workforce by using smart tools that help workers do their jobs more efficiently.
Tucumán 1, Piso 3
(C1049AAA)
Ciudad Autónoma de
Buenos Aires,
Argentina
+54 11 4379 6830
frosati@beccarvarela.com
https://beccarvarela.com/