Artificial Intelligence 2022

Last Updated May 10, 2022

Japan

Law and Practice

Authors



Nagashima Ohno & Tsunematsu is the first integrated full-service law firm in Japan and one of the foremost providers of international and commercial legal services based in Tokyo. The firm’s overseas network includes offices in New York, Singapore, Bangkok, Ho Chi Minh City, Hanoi and Shanghai, and collaborative relationships with prominent local law firms throughout Asia and other regions. The firm's TMT practice group comprises about 50 lawyers and legal professionals and represents major Japanese telecom carriers, key TV networks, and many domestic and international internet, social media and gaming companies, not only in transactions but also in disputes, regulatory matters and general corporate matters. A strength of the firm's TMT practice group is that, in view of its robust client base, it is well-positioned to consistently meet requests from clients to provide a range of advice, from business strategies to daily compliance and corporate matters.

Under Japanese law, the generally applicable laws relating to AI liability are the Civil Code (ie, tort liability) and the Product Liability Act.

Civil Code (Tort Liability)

Under the Civil Code of Japan, a person who wilfully or negligently infringes the rights or legally protected interests of another is liable in tort for damages arising out of or in connection with such infringement (Article 709). In this context, the term “negligence” refers to the failure to take the necessary measures to avoid the occurrence of a specific result, although the occurrence of such a result was foreseeable. For example, if users cause an unexpected result through the use of AI that causes a third party to incur damage, they can be held liable in tort for their “negligence”. AI developers and manufacturers can also be held liable in tort.

However, whether AI users, developers or manufacturers can be considered to have “foreseen” the occurrence of such a result, or to have “taken the necessary measures to avoid” it, will be determined based on the specific circumstances of the case, including the functions and risks of the AI.

The Product Liability Act

Under the Product Liability Act of Japan, the manufacturer of a “defective product” that “infringes the life, body, or property of another” is liable for damages, regardless of whether the manufacturer was negligent (Article 3).

Although an AI program or software itself does not constitute a “product”, if the AI is installed in a particular device, the entire device, including the AI, constitutes a “product”. The term “defect” under the Act refers to a lack of “safety that the product ordinarily should provide”. However, determining what safety an AI “ordinarily should provide”, and how a plaintiff (victim) can prove that the product lacks such safety, is extremely problematic.

It should be noted that even if an AI is found to be “defective”, the manufacturer of the AI device is exempted from liability for damages if it can be established that the manufacturer could not have detected such defect in the AI based on its scientific or technical knowledge at the time the manufacturer delivered the AI device (development risk defence) (Article 4, item 1).

AI (typically machine learning) is being introduced and utilised in a wide range of industries. For example, the 2020 AI White Paper published by the Information-technology Promotion Agency of Japan lists the following industries in which AI is used, along with examples of its application.

  • Manufacturing: product inspection by image analysis, preventive diagnosis of production equipment failures, design support, production planning support.
  • Automotive: automated driving as well as streamlining operations such as vehicle visual inspection and design.
  • Infrastructure: abnormality detection and maintenance work.
  • Agriculture: forecasting crop damage due to disease, crop growth management and harvest timing forecasting, as well as automated crop sorting and harvesting using robots.
  • Health, medicine and nursing care: image diagnosis support, automation of medical consultations and pharmaceutical development.
  • Crime and disaster prevention: detection of suspicious behaviour.
  • Energy: electricity demand forecasting, operational efficiency improvement.
  • Education: adaptive learning, scoring systems to evaluate pronunciation accuracy.
  • Finance: fraud detection, investment and investment management, loan screening, customer service using chatbots.
  • Logistics: optimisation of shipping and sorting operations.
  • Retail: store marketing, demand forecasting, inventory management, AI-based cameras for unstaffed stores.
  • Government: automation of administrative work.

The impact of COVID-19 on companies' efforts to use AI has been both positive, with increased use of AI due to accelerated digitisation, and negative, with AI use delayed due to poor business performance. For example, according to a survey conducted by PwC Consulting LLC in December 2020, 32% of Japanese companies responded “AI use has accelerated”, while 27% stated “AI use has been delayed”, with respect to the impact of COVID-19 in 2020.

One legal issue that is more likely to arise as a result of COVID-19 is the monitoring of employees by AI tools, given the increase in remote work. Monitoring of employees may constitute a violation of their privacy rights if it lacks a reasonable need or is conducted by unreasonable means. Q&As 5–7 published by the Personal Information Protection Commission state that companies should consider the following points when monitoring employees:

  • specifying the purpose of monitoring in advance in internal rules and clearly indicating the purpose to employees;
  • designating a person to be responsible for monitoring and establishing their authority;
  • establishing rules for monitoring in advance and thoroughly communicating their content to persons implementing the monitoring system;
  • verifying that monitoring is being conducted properly in accordance with the pre-determined rules.

In Japan, under its policy that cross-sectoral binding rules and regulations on AI are currently unnecessary, the government has issued principles on AI, together with guidance for implementing those principles, to encourage companies to formulate their own independent rules. The government has also published its strategy on AI as part of its national strategy, which is updated approximately once a year.

The Social Principles of Human-centric AI, issued by the Cabinet Office as Japan’s AI Principles, comprise the following seven principles:

  • human-centric principle;
  • principle of education and literacy;
  • principle of privacy protection;
  • principle of ensuring security;
  • principle of fair competition;
  • principles of fairness, accountability and transparency;
  • principle of innovation.

Based on the AI Principles above, each government agency has issued the following non-legally binding guidelines, handbooks, or other materials as guidance for implementing the AI Principles, for entities that develop AI or provide AI-based services:

  • Ministry of Internal Affairs and Communications (Conference toward AI Network Society): AI Utilization Guidelines and Draft AI R&D Guidelines for International Discussion;
  • Ministry of Economy, Trade and Industry (METI): Contract Guidelines on Utilization of AI and Data Version 1.1, Governance Guidelines for Implementation of AI Principles Ver. 1.1, Guidebook for Utilization of Camera Images Ver. 3.0, and AI Governance in Japan Ver. 1.1.

In addition, guidance for users of consumer AI-related services includes the AI Utilization Handbook issued by the Consumer Affairs Agency.

The Cabinet Office issues national strategies for AI, the latest of which is AI Strategy 2022, issued on 22 April 2022. The latest version sets out three principles (“respect for humanity”, “diversity” and “sustainability”) and, with their implementation in mind, adopts five strategic objectives (human resources, industrial competitiveness, technological systems, international co-operation and addressing imminent crises).

The AI Strategy 2022 also indicates that “various initiatives are being considered for key technologies, including AI, in the interest of economic security. Therefore, co-ordination of related measures and synergies with strategic initiatives such as quantum and biotechnology should be sought to achieve effective government-wide co-ordination of related measures.” It is noteworthy that security is mentioned for the first time in a national strategy on AI; there are no other explicit references to national security in the AI principles and guidance listed in 3.1 Policies.

On 11 May 2022, the Diet enacted the Act on the Promotion of Security by Taking Economic Measures in an Integrated Manner (the “Economic Security Promotion Act”). The Act contains provisions on the promotion of public-private co-operation in AI technology in the interest of economic security. In particular, the Act stipulates that the Japanese government will:

  • designate as “specified critical technologies” those “advanced technologies that may pose a threat to national security and public safety due to improper use of research and development information by outside parties or interference by such technologies by outside parties”; and
  • provide the necessary information and financial support, among other measures, for the research and development of specified critical technologies.

The specified critical technologies envisioned here are expected to include AI-related technologies.

In relation to supply chains, AI Governance in Japan Ver. 1.1 states that “the future approach should be to assist in ensuring public trust in AI systems throughout the supply chain through interim guidelines”.

There is currently no cross-sectoral legislation in the area of AI. However, there are relevant rules in individual legal areas that presuppose the use of AI. For example, the Road Traffic Act has established certain rules to ensure that AI-based automated driving (level 3) is safe. The Pharmaceuticals and Medical Devices Act also establishes a “prior notification system for confirmation of plans for change regarding medical devices and changes implemented according to the plans for medical devices” (commonly known as IDATEN, the Improvement Design within Approval for Timely Evaluation and Notice) for AI-based medical device programs, which aims to provide flexibility for medical devices that are expected to be continuously improved, such as AI medical devices.

The Ministry of Economy, Trade and Industry (METI), in its report Japan’s Approach to AI Governance, states that cross-sectoral mandatory regulation of AI systems is unnecessary at this stage. Even if such regulation is discussed in the future, a risk assessment should be conducted that considers not only risks but also potential benefits, and the possibility that certain risks may be eliminated through technological development should be taken into account.

On the other hand, legislative amendments are being planned in individual legal areas. For example, a new law to partially amend the Road Traffic Act has been passed by the Diet, which includes the establishment of a permit system for AI-based automated driving without a driver (see also 9. AI in Industry Sectors).

Although the Cabinet Office has formulated a national strategy for AI, there are no cross-sectoral, binding laws and regulations for AI in Japan (see 4.1 Enacted Legislation and 4.2 Proposed Legislation). Therefore, there is no regulatory authority that plays a leading role in regulating AI. Instead, the following ministries and agencies are primarily responsible for the enforcement of AI-related laws by sector and application, within the scope of the laws and regulations under their jurisdiction:

In relation to AI, the Ministry of Health, Labour and Welfare (MHLW) has jurisdiction over labour laws (ie, the Labour Standards Act, Labour Contract Act, Employment Security Act, among others) and the Pharmaceuticals and Medical Devices Act (PMDA). In connection with labour laws, the MHLW addresses AI-related employment issues, such as recruitment, personnel evaluation and monitoring of employees using AI (see 10.1 AI in Corporate Employment and Hiring Practices). In connection with the medical devices field, there is a move to accommodate AI-enabled medical devices under the PMDA (see 4.1 Enacted Legislation and 9.1 Healthcare).

The Ministry of Land, Infrastructure, Transport and Tourism (MLIT) is responsible for implementing and enforcing the Road Traffic Act, which establishes rules for automated driving (see 9.3 Autonomous Vehicles).

The Ministry of Economy, Trade and Industry (METI) has jurisdiction over various AI-related laws and regulations (such as the Unfair Competition Prevention Act, which protects big data as “limited provision data”) and, as indicated in 3.1 Policies, is actively formulating guidelines and other relevant materials to implement AI principles, such as Contract Guidelines on Utilization of AI and Data Version 1.1. In addition, the Japan Patent Office, an external bureau of METI, implements and enforces the Patent Act (see 11.1 Applicability of Copyright and Patent Law for protection of AI-enabled technologies and datasets under the Patent Act).

The Personal Information Protection Commission (PPC) is the regulatory authority responsible for implementing and enforcing the Act on the Protection of Personal Information (APPI). The PPC addresses APPI-related issues where personal data is involved in the development and use of AI – eg, where personal data is included in a dataset for AI training or profiling (see 5.3 Regulatory Objectives).

The Japan Fair Trade Commission (JFTC) is the regulatory authority responsible for implementing and enforcing the Act on Prohibition of Private Monopolization and Maintenance of Fair Trade (the Anti-Monopoly Act) and the Subcontract Act. The JFTC addresses the effects that the use of AI, including algorithmic price adjustment and dynamic pricing, may have on a fair competitive environment.

The Financial Services Agency (FSA) has jurisdiction over the Banking Act and the Financial Instruments and Exchange Act, among others. The FSA addresses risks and other issues related to investment decisions by AI for financial instrument business operators (see 9.2 Financial Services).

The Agency for Cultural Affairs has jurisdiction over the Copyright Act. See 11.1 Applicability of Copyright and Patent Law for protection of AI-enabled technologies and datasets under the Copyright Act.

The Ministry of Internal Affairs and Communications (MIC) addresses policy related to information and communication technologies (including policy related to the advancement of network systems with AI as a component).

The definitions of AI used by regulators include some that are specific to machine learning and others that are broader; the Japanese government has not established any fixed definition. The main examples are as follows:

  • The Basic Act on the Advancement of Public and Private Sector Data Utilization – “AI-related technology” means technology related to the realisation of intelligent functions such as learning, reasoning and decision-making by artificial means, and the use of such functions realised by artificial means;
  • AI Utilization Guidelines – “AI” means “the general concept of AI software and AI systems”.

“AI software" refers to software capable of adapting its output and programs through a process of utilisation by, among others, learning from data, information and knowledge. For example, machine learning software falls under this category.

“AI systems” refer to systems that contain AI software as a component. For example, this includes robots and cloud-based systems that implement AI software.

The Ministry of Health, Labour and Welfare (MHLW), through its enforcement of labour laws, addresses issues related to the utilisation of AI in various aspects of employment, including recruitment, personnel evaluation, employee monitoring, and issues of termination/reassignment arising from replacement by AI (see 10.1 AI in Corporate Employment and Hiring Practices). Steps are also being taken to address AI-based medical devices under the PMDA, such as providing a framework for determining whether an AI-based medical device program constitutes a “medical device” subject to licensing (see 4.1 Enacted Legislation and 9.1 Healthcare).

The Ministry of Land, Infrastructure, Transport and Tourism (MLIT) handles the development of laws on traffic rules for automated driving through the enforcement of the Road Traffic Act (see 9.3 Autonomous Vehicles).

The Ministry of Economy, Trade and Industry (METI) addresses the protection of data and information used in AI development and products created in the process of AI development under the Unfair Competition Prevention Act (see 11.1 Applicability of Copyright and Patent Law).

See 9.2 Financial Services for a discussion of the amended Instalment Sales Act, which came into effect in April 2021, enabling credit card companies to determine credit limits through credit screening using AI and big data analysis.

The PPC, through its enforcement of the APPI, addresses the handling of personal information that may be used in the development and utilisation of AI. For example, if AI is used for targeted advertising, this must be clearly stated in the purpose of use in the interest of user protection. In recent years, the PPC has also noted the issue of monitoring of employees by AI tools as a result of the promotion of remote work due to the impact of COVID-19 (see 2.1 AI Technology and Applications).

The JFTC addresses issues related to the use of AI in a fair competitive environment through enforcement of the Anti-Monopoly Act. The Report of the Study Group on Competition Policy in Digital Markets, released by the JFTC on 31 March 2021, outlines the JFTC’s views on issues involving AI/algorithm-based pricing and price research, and personalised pricing. However, the JFTC is currently not actively enforcing violations.

The matter is not applicable in this jurisdiction.

The matter is not applicable in this jurisdiction.

Current standards for AI quality include the Japanese Industrial Standards (JIS) established by the Ministry of Economy, Trade and Industry (METI), specifically JIS X 0028 and JIS X 0031. These are essentially Japanese translations of the corresponding ISO international standards, with no substantial difference in content. The two standards define the basic concepts of AI, expert systems and machine learning; however, they are somewhat out of date, having been established in 1999 without any amendments to date. Thus, it is difficult to say that these standards are appropriate for today's AI, which has become more complex and made significant progress since 1999.

On the other hand, Japan is actively involved in the international standards for AI that are currently under active discussion. For example, the Information Processing Society of Japan (IPSJ) has established the SC42 Technical Committee within its Information Standards Committee to gather domestic opinions and respond to international issues. In addition, co-operation with CEN/CENELEC, an EU standardisation body, appears to be deepening.

Although not a national standard, the AI Product Quality Assurance Guidelines have been published by the Consortium for AI Product Quality Assurance, which consists of major domestic IT companies, academics and national research and development agencies. The guidelines list five quality evaluation areas (data integrity, model robustness, system quality, process agility and customer expectation) as well as specific checklists for each product, and can be useful in product development.

In addition, the National Institute of Advanced Industrial Science and Technology (AIST) has published the Machine Learning Quality Manual Management Guidelines. These guidelines classify quality for machine learning systems into three categories: quality at the time of use (quality that should be provided to the final user of the system as a whole), external quality (quality from an objective perspective that is required of components of the system), and internal quality (quality that is measured specifically when creating the components or evaluated through development activities such as design – ie, quality that is a characteristic inherent to the components). The guidelines then establish anticipated quality levels for external and internal quality according to their characteristics, and propose how to use quality control according to the quality level.

There are currently no critical issues relating to standard-essential patents for AI or their licensing.

Algorithmic bias refers to situations in which a bias occurs in the output of an algorithm, resulting in unfair or discriminatory decisions. In Japan, there has not been a case in which a company has been found legally liable for illegality arising from algorithmic bias. However, if a company were to make a biased decision based on the use of AI, it could be found liable for damages based on tort or other grounds. In addition, companies may face reputational risk if unfair or discriminatory decisions are made in relation to gender or other matters that significantly affect a person’s life, such as the hiring process.

There are no laws or regulations that directly address algorithmic bias. Companies are expected to take initiatives to prevent the occurrence of algorithmic bias. For example, the AI Utilization Guidelines (August 2019) issued by the Conference toward AI Network Society established by the Ministry of Internal Affairs and Communications (MIC) provides, as one of the ten principles of AI utilisation, the principle of fairness (principle 8), which states that “AI service providers, business users, and data providers should be aware of the possibility of bias in the decision-making process of AI systems or AI services, and should be mindful that individuals and groups are not unfairly discriminated against based on the decisions of AI systems or AI services”. In addition, the Guidelines for the Quality Assurance of AI Systems (September 2021) and the Machine Learning Quality Management Guideline, Second Edition (July 2021) provide tips for avoiding or mitigating algorithmic bias, which may be useful in practice.

Given that all processes involved in data generation and selection, annotation, pre-processing and model/algorithm generation are subject to potential bias, documentation regarding the specifics of these processes should be obtained and maintained. However, when complex algorithms such as deep learning are used, it may not be possible for humans to understand the above-mentioned processes in the first place, even if the relevant materials are collected. Therefore, it is advisable to select algorithms with the aspects of “explainable AI” (XAI) in mind.
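By way of illustration only, the following is a minimal sketch, in Python, of the kind of bias check that the documentation and monitoring practices described above are meant to support. The decision model outputs, the sensitive attribute and the 0.2 review threshold are all hypothetical; the demographic-parity metric shown is only one of several possible fairness measures, and satisfying it is not a legal standard under Japanese law.

```python
# Minimal sketch of a demographic-parity check on a model's decisions.
# All data, the "group" attribute and the 0.2 threshold are hypothetical.
import numpy as np

def selection_rates(y_pred: np.ndarray, group: np.ndarray) -> dict:
    """Share of positive decisions (eg, 'hire', 'approve') for each group."""
    return {g: float(y_pred[group == g].mean()) for g in np.unique(group)}

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Largest difference in positive-decision rates between any two groups."""
    rates = selection_rates(y_pred, group)
    return max(rates.values()) - min(rates.values())

# Hypothetical model outputs (1 = positive decision) and group labels.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

gap = demographic_parity_gap(y_pred, group)
if gap > 0.2:  # review threshold is an internal policy choice, not a legal rule
    print(f"Possible algorithmic bias: selection-rate gap of {gap:.2f}")
```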

Personal Data

Facial or biometric authentication requires the capture of biometric data such as facial images and fingerprint data. Such data is considered personal information under Japan’s Act on the Protection of Personal Information (APPI), but is not regarded as special care-required information (Article 2, paragraph 3 of the Act). Therefore, when acquiring such information, as long as its purpose of use is notified or disclosed, the individual’s consent is not required. However, depending on how the data is acquired and used, it may constitute an improper acquisition (Article 20, paragraph 1 of the Act) or improper use (Article 19 of the Act). It is therefore advisable to consider this issue carefully.

Privacy and Portrait Rights

In addition, depending on how facial images and biometric information are obtained and used, there may also be infringement of privacy rights and portrait rights (ie, infringement of personality rights). Although the circumstances in which an infringement of privacy or portrait rights occurs have been debated in a growing number of court precedents, the debate surrounding facial and biometric authentication has not yet crystallised, and it is therefore difficult to definitively specify what type of acquisition and use would be permissible. With respect to the use of video images, in practice, it is advisable to refer to the Guidebook for Utilization of Camera Images Ver. 3.0 (March 2022).

Corporate Risk

If the personal identification function makes an incorrect decision during facial or biometric authentication, the user may be unable to use the device (ie, a false negative), or someone who is not the user may be able to use the device (ie, a false positive), among other issues. In such cases, the service provider’s liability for damages may become an issue but, generally, the terms of use or other policies and guidelines provide that the service provider is exempt from liability. Whether or not such a disclaimer is valid is determined in light of the Consumer Contract Act in the case of B-to-C transactions.
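By way of illustration only, the following minimal sketch shows why the false negatives and false positives described above trade off against each other in a hypothetical score-based biometric matcher: raising the match threshold locks out more legitimate users, while lowering it admits more impostors. The score distributions are simulated and purely illustrative.

```python
# Minimal sketch of the false-rejection/false-acceptance trade-off in a
# hypothetical biometric matcher. Score distributions are simulated.
import numpy as np

rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 10_000)   # similarity scores, genuine user
impostor = rng.normal(0.4, 0.1, 10_000)  # similarity scores, impostor

for threshold in (0.5, 0.6, 0.7):
    frr = float((genuine < threshold).mean())    # false negative: user locked out
    far = float((impostor >= threshold).mean())  # false positive: impostor admitted
    print(f"threshold={threshold}: FRR={frr:.4f}, FAR={far:.4f}")
```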

In July 2021, JR East, Japan’s largest rail operator, introduced a security system featuring facial recognition to detect “those who have committed serious offences and served prison sentences in the past in JR East facilities”, “wanted suspects” and “loiterers or other suspicious persons”. However, following severe public criticism in relation to detecting those released from prison and parolees, it was decided not to include them within the scope of detection. Therefore, social acceptance is also an important factor in the use of facial and biometric recognition, and there is a risk of reputational damage if an incorrect decision is made.

In Japan, there are no laws or regulations that provide specific rules for AI transparency and accountability. However, the AI Utilization Guidelines (August 2019) issued by the Conference toward AI Network Society established by the MIC list “the principle of transparency” and “the principle of accountability” as two of the ten principles of AI utilisation. In the interests of the former, it would be advisable to record and keep AI input and output logs, among other records; in the interests of the latter, it would be advisable to provide information on AI and to notify or disclose to the public one's utilisation policies. However, there is no clear guidance on when and what information should be disclosed when AI, such as chatbots, replaces services typically provided by people.
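By way of illustration only, the following is a minimal sketch of the kind of input/output logging that the transparency principle described above contemplates. The record fields and the JSON Lines file layout are hypothetical design choices; the guidelines themselves do not prescribe any particular format.

```python
# Minimal sketch of append-only input/output logging for an AI system.
# Field names and the JSON Lines layout are hypothetical design choices.
import json
import time
import uuid

def log_inference(model_version: str, inputs: dict, output: object,
                  path: str = "ai_audit_log.jsonl") -> None:
    """Append one inference record so decisions can be reviewed later."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Example: record a hypothetical chatbot exchange.
log_inference("chatbot-v1.2", {"utterance": "When does my contract renew?"},
              "Your contract renews on 1 April.")
```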

The above can also be problematic from the standpoint of the APPI. For example, if AI is actually being used but the company does not disclose this, leading the user to mistakenly believe that a human is making the decisions and to provide personal data on that basis, there may be a breach of the duty to properly acquire the data or the duty to notify the purpose of its utilisation.

Profiling will be used here as an example of automated decision-making. While some foreign jurisdictions have introduced regulations on profiling using AI, such as Article 22 of the EU's GDPR, there are no laws or regulations that directly regulate profiling in Japan. Notwithstanding this, the provisions of the APPI must be complied with. For example, when personal data is acquired for profiling purposes, to analyse behaviour, interests and other information from data obtained from individuals, the purpose of utilisation of such data must be explicitly notified or disclosed to the public in accordance with the APPI. However, it should be noted that individuals’ consent is not required under the APPI unless special care-required information is acquired. In addition, precautions should be taken to avoid inappropriate use (Article 19 of the APPI).

Further, if automated decision-making leads to unfair or discriminatory decisions, liability for damages and reputational risk could be an issue, similar to the issues discussed in 8.1 Algorithmic Bias.

In Japan, non-human entities (ie, entities other than natural persons and legal entities) do not have the legal capacity to act, and it is unlikely that AI-enabled technologies will be held liable or responsible for their actions.

Under Japanese civil law, the developer or operator of an AI-enabled technology may be held liable in contract or tort, or subject to product liability, among others, if such AI-enabled technology causes personal injury, damage or loss. Moreover, in relation to personal injury, developers or operators of such AI-enabled technology could be charged with causing injury or death through negligence (Articles 209 and 210 of the Criminal Code) or professional negligence resulting in injury or death (Article 211 of the Criminal Code).

Furthermore, if a third party commits an act that causes AI-enabled technology to make an incorrect decision (eg, intentionally entering incorrect training data to cause an incorrect decision), issues concerning joint torts in civil cases and those concerning complicity in criminal cases may arise in relation to assigning liability.

If AI-based programs, such as diagnostic imaging software or health management wearable terminals, or devices equipped with such programs, fall under the category of “medical devices” under the Pharmaceuticals and Medical Devices Act, a licence is required for their manufacture and sale, and approval or certification is also required for individual medical device products. Whether AI-based diagnostic support software and other medical programs constitute “medical devices” must be determined on a case-by-case basis, but the Ministry of Health, Labour and Welfare (MHLW) has provided a basic framework for making such determinations.

According to this framework, the following two points should be considered.

  • How much does the programmed medical device contribute to the treatment, diagnosis, etc, of diseases in view of the importance of the results obtained from the programmed medical device?
  • What is the overall risk, including the risk of affecting human life and health in the event of impairment, etc, of the functions of the programmed medical device?

In addition, while a change procedure is ordinarily required to change part of the approved or certified content of a medical device, the product design of an AI-based medical device may be based on the assumption that its performance will constantly change as new data obtained after the product is marketed are incorporated. Given these characteristics of AI-based programs, which are subject to constant changes in performance and other aspects after their initial approval, the amended Pharmaceuticals and Medical Devices Act, which came into effect in September 2020, introduced a medical device approval review system that allows for continuous improvement.

Since medical services such as diagnosis and treatment may only be performed by physicians, programs that provide AI-based diagnostic and treatment support may only serve as a tool to assist physicians in diagnosis and treatment, and physicians will be responsible for making the final decision.

Medical history, physical and mental ailments, and the results of medical examinations conducted by physicians are considered special care-required information under the APPI and, in principle, the patient's consent must be obtained when acquiring such information. In many cases, medical institutions need to provide personal data to medical device manufacturers for the development and verification of AI medical devices. In principle, the provision of personal information to a third party requires the consent of the individual, but it may be difficult to obtain prior consent from the patient. An opt-out system is also in place; however, it cannot be used for special care-required information.

Anonymised information, which is irreversibly processed so that a specific individual cannot be identified from the personal information, can be freely provided to a third party. However, it has been noted that it is practically difficult for medical institutions to create anonymised information. In addition, the Next Generation Medical Infrastructure Act allows authorised business operators to receive medical information from medical information handlers (hospitals, etc) and anonymise it through an opt-out method; however, this is not widely used. The possibility of using “pseudonymously processed information”, introduced under the amended APPI that came into effect on 1 April 2022, is also being discussed.

In the financial sector, AI is used by banks and lenders for credit decisions and by investment firms for investment decisions. In addition, the amended Instalment Sales Act, which came into effect in April 2021, enables credit card companies to determine credit limits through credit screening using AI and big data analysis.

The FSA's supervisory guidelines require banks, etc, when concluding a loan contract, to be prepared to explain the objective rationale for concluding the contract, based on the customer's financial situation, in relation to the contract's provisions. This is true even if AI is used for credit operations. Therefore, it is necessary to be able to explain the rationale of credit decisions made by AI.
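By way of illustration only, the following minimal sketch shows one way a lender might decompose an AI credit score into per-feature contributions as a basis for explaining a decision, assuming a simple linear scoring model. The features, weights and threshold are all hypothetical, and what constitutes a sufficient explanation under the supervisory guidelines is a legal rather than technical question.

```python
# Minimal sketch: attributing a credit decision from a hypothetical linear
# scoring model to its input features, as one basis for an explanation.
features = {"annual_income": 4.5, "years_employed": 3.0, "existing_debt": 1.2}
weights = {"annual_income": 0.8, "years_employed": 0.5, "existing_debt": -1.5}
intercept = -1.0  # all numbers are hypothetical

contributions = {name: weights[name] * value for name, value in features.items()}
score = intercept + sum(contributions.values())
decision = "approve" if score > 0 else "decline"

print(f"score={score:.2f} -> {decision}")
for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {c:+.2f}")  # per-feature rationale behind the decision
```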

In addition, when AI-based credit scoring is used to determine the loan amount available for personal loans, care should be taken to avoid discriminatory judgements, such as different available loan amounts based on gender or other factors. The Social Principles of Human-centric AI also state: “Under the AI design philosophy, all people must be treated fairly, without undue discrimination on the basis of their race, gender, nationality, age, political beliefs, religion, or other factors related to diversity of backgrounds”.

Financial instruments firms must not solicit customers inappropriately in light of the customer's knowledge, experience, financial situation and the purpose of concluding the contract, so as not to undermine investor protection (the suitability principle). In addition, these firms are obligated to explain to customers the outline of the contract and the risks of the investment in accordance with this principle. Therefore, if the criteria for investment decisions made by AI cannot be reasonably explained, problems may arise in relation to the suitability principle and the duty to explain.

For autonomous driving at level 3 – where the system performs all dynamic driving tasks within the ODD (operational design domain) but the driver must respond to the system's request for intervention if it becomes difficult to continue operation – traffic rule legislation is already in place. In other words, the current Road Traffic Act, on the premise that operation using a level 3 “autonomous operation device” is also included in “driving”, provides that the driver does not have to constantly monitor traffic conditions or operate the vehicle themselves within the ODD during autonomous driving.

On the other hand, there are no specific laws or regulations regarding liability for accidents caused by autonomous driving; as with ordinary automobiles, the person who provides the vehicle for operational use is liable. In the event of an accident caused by software defects or the like, the automobile manufacturer may be liable under the Product Liability Act; the software developer is not regarded as a subject of liability under the Product Liability Act but may separately be liable in tort.

As for level 4 autonomous driving – defined by the Society of Automotive Engineers (SAE) as circumstances in which the system performs all dynamic driving tasks and also handles the response within the ODD when it is difficult to continue operation – a corresponding amendment to the Road Traffic Act was passed by the Diet in March 2022. The amendment defines “specified autonomous operation” as autonomous operation without a driver and requires that permission be obtained from the Public Safety Commission with jurisdiction over the location where the specified autonomous operation is to be conducted.

Examples of the use of AI in hiring include services that assess a candidate's ability based on various data (résumés, answers to the company's questions, and information available on the internet, such as social media) and assist companies in their hiring activities, services where AI directly conducts interviews, and services where AI analyses the findings from interviews conducted by humans.

Advantages for employers using AI include the fact that, unlike the subjective evaluations conducted by recruiters in the past, AI-based evaluations can be conducted fairly and objectively by setting certain standards, and that the use of AI can make the recruitment process more efficient. On the other hand, there are risks for employers who use AI in their hiring practices. Generally, since companies have the freedom to hire, even if an AI analysis is incorrect and the employer does not fully verify this analysis, this would not necessarily constitute a violation of applicable laws. However, it can be said that AI-based recruitment limits a company's freedom to hire to a certain extent.

Specifically, even in cases where AI is utilised in recruitment activities and information on job-seekers is automatically obtained, in accordance with Article 5-4 of the Employment Security Act and Article 4-1(2) of the Employment Security Act Guidelines, the information must be collected in a lawful and fair manner, such as directly from the job-seeker or, with the job-seeker's consent, from a person other than the job-seeker. In addition, when using AI to obtain information on job-seekers, companies must be careful not to obtain certain prohibited information.

Specifically, under Article 20 of the APPI, a company is generally prohibited from obtaining special care-required information (race, creed, social status, medical history, criminal record and any facts related to the job-seeker being a victim of a crime) and, under Article 5-4 of the Employment Security Act and Article 4-1(1) of the Employment Security Act Guidelines, a company may not obtain certain information (eg, membership in a labour union, place of birth) even with the consent of the job-seeker.

In addition, there is a risk that, as a result of an erroneously high AI evaluation, an offer may be made to a job-seeker, or the job-seeker may be hired, even though this would not have occurred if the company’s original criteria had been followed. In such a case, the legality and validity of a decision to reject or dismiss the job-seeker will be determined based on how the recruitment process was conducted.

Having said that, it is likely difficult to dismiss an employee for the sole reason that the AI-based evaluation was incorrect. On the other hand, if a job-seeker is mistakenly given a low AI evaluation and is not hired, the possibility of this constituting a violation of applicable law is likely not high, even though the job-seeker is subject to de facto disadvantageous treatment.

Other risks could also arise, such as reputational risks caused by negative publicity regarding a company's recruiting activities as a result of a series of erroneous AI evaluations.

Compared to cases where all hiring decisions are left to the automatic judgement of AI, it may be easier to ensure the company’s freedom in hiring if the final decision is made by a human and AI is used only as a support tool. However, careful consideration should be given to the use of AI in recruitment activities, for example, to ensure that there is no bias in the algorithm.

Eligibility of AI-enabled Technologies and Datasets for Protection under Copyright and Patent Law

Copyright Act

As long as an AI-enabled technology or dataset creatively expresses thoughts or sentiments, it may be protected as a copyrighted work, such as a work of a computer program or a database work. However, a work created autonomously by AI is not protected as a copyrighted work, because AI has no capacity for independent thoughts or sentiments.

Patent Act

AI-enabled technologies may also be patented, as long as the general patent requirements are satisfied. Under Japanese law, data and learned models are considered eligible for protection as long as they are programs or program equivalents (data with structure and data structures), while data or datasets that are mere presentations of information are not eligible for patent protection.

Scope of Protection of AI-based Inventions and Datasets by Intellectual Property Rights

Copyright Act

Even if an invention or dataset can be protected as a copyrighted work (see “Copyright Act”, above), a third party’s exploitation of the copyrighted work for purposes other than the enjoyment of the thoughts or sentiments expressed in the work, such as use for data analysis, may be exempted by Article 30-4 of the Copyright Act. On this basis, the use of a third party’s copyrighted work for AI training generally does not constitute copyright infringement. However, if such exploitation is intended to create a new copyrighted database, it is not exempted by Article 30-4 (the proviso to Article 30-4).

Copyright infringement is established when a person relies on and uses another person's copyrighted work; however, it is currently controversial whether this requirement of reliance is satisfied when AI autonomously uses another person's copyrighted work. There is no established view on the issue.

Patent Act

The Japan Patent Office (JPO) has published explanatory materials on the examination criteria for obtaining a patent, increasing the number of examples of how patent applications for AI-related technologies are examined under those criteria. The examination criteria set forth the JPO's decision-making criteria regarding whether an invention has grounds for invalidation, but they are not legally binding.

Unfair Competition Prevention Act

As long as the big data used in the development and utilisation of AI meets the three requirements of being kept confidential, non-public and useful, it is protected as a “trade secret” (Article 2(6) of the Unfair Competition Prevention Act (the UCPA)); a third party may be enjoined from unauthorised use, and the affected party may claim damages for such use. Criminal penalties are also provided for unfair competitive acts regarding a trade secret conducted with the intent of harming others (Article 21 of the UCPA).

In addition, data that is intended to be provided to third parties in the process of developing AI does not meet the above requirement of confidential management and therefore does not constitute a “trade secret”. However, technical or business information that is accumulated in a considerable amount and managed electromagnetically as information to be provided to specific persons is protected as “limited provision data” (Article 2(7) of the UCPA); a third party may be enjoined from unauthorised use, and the affected party may claim damages for such use. However, there are currently no criminal penalties regarding limited provision data.

Other

Even where data is not protected as described above, its unauthorised use may constitute an unlawful act under Article 709 of the Civil Code, but only where special circumstances are found, such as infringement of legally protected interests different from those described above (Supreme Court, judgment, 8 December 2011, 65 Minshu (9) 3275 [2012]).

Can AI be the Inventor of a Patented Invention or the Author of a Copyrighted Work?

Since AI is not a natural person, it is considered neither an inventor under the Patent Act nor an author under the Copyright Act. However, for works created using AI as a tool, the natural person who contributed to the creation, not the AI, is considered the author.

While it is controversial whether AI should be given judicial personality, such a legal system is not being considered at this point.

Since discovery and e-discovery systems have not been adopted in Japanese litigation proceedings, the use of AI for evidence collection and disclosure in litigation has not progressed. AI services provided in Japan to support civil litigation activities include AI-based brief preparation support services and AI-based private online dispute resolution (ODR) platform services.

The Basic Policy on the Promotion of ODR was published by the Ministry of Justice in March 2022 and the goal is to implement ODR across Japanese society by 2027 so that everyone can receive effective assistance in resolving disputes anytime, anywhere with a single device such as a smartphone. The use of AI technology in ODR is being considered and, depending on the progress of AI technology, AI may be able to assist parties’ and experts’ decisions in various situations, such as consideration of solutions, consultation regarding issues and negotiations.

In addition, the digitalisation of lawsuits is currently underway: by mid-2025, the Civil Procedure Act will be revised and an online system will be introduced so that all procedures, from filing a lawsuit to judgment, can be conducted online. Furthermore, the conversion of judgments, which have only been made available to the public on a limited basis, into openly available data will begin as early as mid-2023. As a result of these reforms, more information on documents and judgments related to civil lawsuits will be recorded and disclosed online, making it easier to accumulate data, which is expected to lead to the further development of AI to support litigation activities.

In Japan, there are no cross-sectoral laws and regulations applicable to AI, only regulations in individual areas of law. However, given that the use of AI often involves the use of personal information, compliance with the APPI is essential. In particular, the APPI sets out only a minimum set of required rules; a more cautious approach is therefore needed for the use of advanced technologies such as AI, depending on the purpose of use and the type of personal information involved.

In addition to legal liability, there is also reputational risk if the use of AI results in discriminatory or unfair treatment.

Ultimately, deciding how to use AI in business in light of these considerations is a matter of business judgement, which falls within the responsibilities of directors. However, since these decisions involve expert judgement, an increasing number of companies are turning to external expert panels or advisory boards on AI.

One AI governance guideline that is expected to be used as a reference for such business judgement is the Governance Guidelines for Implementation of AI Principles Ver. 1.1. Although the guidelines are not legally binding, in order to implement the Social Principles of Human-centric AI, they set forth six action goals for AI providers – conditions and risk analysis, goal-setting, system design (building an AI management system), implementation, evaluation, and re-analysis of conditions and risks – along with practical examples.

The guidelines also emphasise transparency and accountability. It is advisable to treat the information mentioned above as non-financial information for the purposes of corporate governance codes, and to consider actively disclosing it. However, not many companies are actively disclosing such information at this time.

The social implementation of AI is steadily advancing in Japan. However, there have also been cases in which the purpose or manner of use of AI, while not necessarily illegal, has been publicly criticised for being inappropriate or not adequately explained to users.

As discussed in 8.2 Facial Recognition and Biometrics, in July 2021, a Japanese rail company introduced a security system featuring a facial recognition function to acquire facial information of train customers from images captured by cameras and automatically match them against facial information of potential targets for detection previously registered in a database. The rail company did not fully disclose the detailed operating policies for such a system and faced intense social criticism as a result.

In the future, it will be essential that companies develop and operate AI with fairness, accountability and transparency, among other considerations, and in compliance with relevant laws and regulations. In this regard, the Governance Guidelines for Implementation of AI Principles Ver. 1.1, issued by METI on 28 January 2022, are instructive.

Nagashima Ohno & Tsunematsu

JP Tower
2-7-2 Marunouchi
Chiyoda-ku
Tokyo 100-7036
Japan

+81 3 6889 7000

+81 3 6889 8000

www.noandt.com/en/

Trends and Developments


Authors



Nagashima Ohno & Tsunematsu is the first integrated full-service law firm in Japan and one of the foremost providers of international and commercial legal services based in Tokyo. The firm’s overseas network includes offices in New York, Singapore, Bangkok, Ho Chi Minh City, Hanoi and Shanghai, and collaborative relationships with prominent local law firms throughout Asia and other regions. The firm's TMT practice group comprises about 50 lawyers and legal professionals and represents major Japanese telecom carriers, key TV networks, and many domestic and international internet, social media and gaming companies, not only in transactions but also in disputes, regulatory matters and general corporate matters. A strength of the firm's TMT practice group is that, in view of its robust client base, it is well-positioned to consistently meet requests from clients to provide a range of advice, from business strategies to daily compliance and corporate matters.

Overview of AI Utilisation in Japan

In Japan, companies in various sectors, such as finance, manufacturing, distribution, healthcare, education and infrastructure, have been conducting proof-of-concept (PoC) experiments on the adoption of AI. In recent years, the utilisation of AI in production activities and the provision of services has increased, indicating that the implementation of AI in society is steadily advancing. As well as AI used for general applications, such as automatic responses by chatbots and OCR, AI tailored to specific industries and operations is being developed and implemented; for example, AI is being used by insurance companies to detect fraudulent claims and for maintenance inspections in chemical manufacturing plants.

In addition to companies, some government agencies have also opted to utilise AI-based systems. Discussions are underway on the further introduction of AI, with the Ministry of Justice announcing its basic policy on the promotion of AI-based ODR (online dispute resolution) in March 2022.

At the same time, companies face challenges in actually developing and running AI-enabled technology. The CEO of a prominent Japanese AI start-up noted in an interview with an online news media outlet that many of his clients face challenges in their preparations to implement AI, such as a lack of data to load into the AI and a lack of engineers capable of handling the data. There have also been cases in which the purpose or manner of use of AI, while not necessarily illegal, has been publicly criticised for being inappropriate or not adequately explained to users.

Utilisation of Camera Images

Social issues have arisen in relation to the utilisation of camera images as its social implementation has progressed. For example, in July 2021, a Japanese rail company introduced a security system featuring a facial recognition function to acquire facial information of train customers from images captured by cameras and to automatically match them against facial information of potential targets for detection previously registered in a database. Specifically, the potential targets for detection were “those who have committed serious offences and served prison sentences in the past in JR East facilities”, “wanted suspects” and “loiterers or other suspicious persons”. The rail company did not fully disclose the detailed operating policies for such a system. When newspaper reports revealed that such a system had been installed, the rail company faced intense social criticism. As a result, the rail company decided to exclude those released or paroled from prison from the scope of detection.

To address these issues, the Expert Panel on the Utilization of Camera Images for Crime Prevention and Security, established by the Personal Information Protection Commission, has been holding meetings since January 2022. The Panel discusses the use of cameras with face recognition functionality for crime prevention purposes, including the measures required by the APPI and the measures recommended as voluntary initiatives by providers.

Further, METI, the MIC and the IoT Acceleration Consortium revised the Guidebook for Utilization of Camera Images, which outlines considerations for the utilisation of camera images, including the analysis of facial images acquired by cameras to estimate age and gender and their use in marketing, and released Ver. 3.0 on 30 March 2022. Although the Guidelines do not specifically refer to AI utilisation, they present basic views on the utilisation of camera images from the perspective of privacy protection and the APPI (see below), and can serve as a reference for the use of camera images in AI development and utilisation. It should be noted that the handling of camera images acquired to identify specific individuals or for crime prevention purposes is beyond the scope of the Guidelines.

The Guidelines specify considerations regarded as essential for the review and implementation of projects that utilise camera images and for establishing a mutual understanding between businesses and the consumers who may be captured in camera images. These considerations are summarised under basic principles and eight specific situations (ie, communication, planning, design, advance notice, acquisition, handling, management and continued use).

Impact of the Amendment to Act on the Protection of Personal Information

The amended Act on the Protection of Personal Information (APPI), which has been in effect since 1 April 2022, introduced the concept of “pseudonymously processed information”.

Pseudonymously processed information refers to information that has been processed in accordance with certain standards so that a specific individual cannot be identified unless it is cross-referenced with other information. The obligations in relation to personal information are relaxed to a certain extent for pseudonymously processed information. In particular, when acquired personal information is processed into pseudonymously processed information, it may be used for purposes unrelated to the original purpose of use. Therefore, even personal information whose use as a dataset for machine learning training was not included in the original purpose of use can be used for that purpose by processing it into pseudonymously processed information, which is expected to be utilised in AI development.

However, it should be noted that pseudonymously processed information may not be provided to a third party unless it falls under the exceptions of “outsourcing” to a third party or “joint use” with a third party.
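By way of illustration only, the following is a minimal sketch of one technical approach to pseudonymisation: direct identifiers are replaced with a keyed hash, and the mapping back to the individual is held separately. The field names and salt are hypothetical, and whether any given processing satisfies the APPI's standards for pseudonymously processed information is a legal, not merely technical, question.

```python
# Minimal sketch of pseudonymising a record so an individual cannot be
# identified without the separately held mapping. Fields are hypothetical.
import hashlib

SECRET_SALT = b"replace-with-a-securely-stored-secret"

def pseudonymise(record: dict) -> tuple[dict, dict]:
    """Return (pseudonymised record, mapping entry to be stored separately)."""
    pseudo_id = hashlib.sha256(SECRET_SALT + record["name"].encode()).hexdigest()
    mapping = {pseudo_id: record["name"]}
    cleaned = {k: v for k, v in record.items() if k not in ("name", "address")}
    cleaned["pseudo_id"] = pseudo_id
    return cleaned, mapping

record = {"name": "Taro Yamada", "address": "Tokyo", "diagnosis": "X", "age": 47}
pseudo, mapping = pseudonymise(record)
print(pseudo)   # usable, eg, as ML training data
# 'mapping' must be kept separately, subject to access controls.
```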

Number of Patent Applications and Trends in AI-related Inventions

The Japan Patent Office (JPO) reports the results of its annual survey of the status of domestic and foreign applications for AI-related inventions (ie, inventions in which AI is applied to various technical fields), the latest edition of which was published in August 2021. In its 2021 report, the JPO updated its findings based on data on newly published applications through April 2021. According to the 2021 report, the number of AI-related invention applications in Japan in 2019 was 5,045, reflecting a continued increase every year since 2014 (ie, 3,065 in 2017 and 4,764 in 2018). However, the JPO has indicated that this is not a significant increase compared to the rate of increase in AI-related invention applications in the USA, China and South Korea.

Application trends by technical area show an increase in applications in the categories of control and regulating systems in general, traffic control and image processing. The scope of applications for AI technology is also expanding; for example, in 2017 there were few AI-related applications in the healthcare sector, but in 2019 there were 134 such applications. Most of the top AI-related applicants are companies, many of which are in the information and telecommunications, electrical and automotive sectors.

AI Strategy 2022

The Cabinet Office has been formulating AI strategies since 2019 with the goal of providing a comprehensive policy package on AI to address Japan’s social challenges and improve the competitiveness of its industries. The latest version, AI Strategy 2022, released on 22 April 2022, outlines the following five strategic objectives and establishes an action plan in line with them:

  • Establish a system and technical infrastructure capable of protecting, to the extent possible, the lives and property of residents against imminent crises such as pandemics and large-scale disasters, and establish a framework for the proper and sustainable operation of that system and infrastructure.
  • Develop the most capable human resources for the AI era, become a country that attracts talent from around the world, and establish a framework to achieve these objectives sustainably.
  • Become the forerunner in AI applications in real-world industries to enhance industrial competitiveness.
  • Establish a set of technical systems to achieve a “sustainable society that encompasses diversity” and put in place a framework to operate these systems.
  • Under Japan’s leadership, establish an international network for AI research, training and social infrastructure to accelerate AI research and development, human resource development and the achievement of the SDGs.

Governmental Guidelines

Governance Guidelines for Implementation of AI Principles Ver. 1.1

On 28 January 2022, the Ministry of Economy, Trade and Industry (METI) issued its Governance Guidelines for Implementation of AI Principles Ver. 1.1 (version 1.0 was issued on 15 January 2021). These guidelines outline what should be put into practice to adhere to the “social principles of human-centric AI”, which the Cabinet Office’s Council for Integrated Innovation Strategy agreed on 29 March 2019.

The social principles of human-centric AI are the basic principles for the better social implementation and sharing of AI, and consist of the “basic philosophy”, the “social changes needed to realise Society 5.0” and the “social principles of AI”. Based on the social principles of AI, which society as a whole should take the initiative to achieve, AI developers and service providers should set, and comply with, their own objectives (AI development and utilisation principles), implemented in accordance with their own purposes and means of AI development and operation, among other factors. The social principles of AI comprise the following:

  • human-centric principle;
  • principle of education and literacy;
  • principle of privacy protection;
  • principle of ensuring security;
  • principle of fair competition;
  • principles of fairness, accountability and transparency;
  • principle of innovation.

The Governance Guidelines for Implementation of AI Principles Ver. 1.1 are intended to support the implementation of the social principles of human-centric AI by entities involved in the development and operation of AI-enabled technology. Although not legally binding, they provide action goals for implementation, practical examples of each action goal and examples of how deviations from the action goals are likely to be assessed, and thus serve as a useful reference.

2021 Report and Case Studies on AI Governance Initiatives

In addition, the 2021 Report (dated 4 August 2021) and the Case Studies on AI Governance Initiatives (dated 29 September 2021), both issued by the Conference toward AI Network Society of the Ministry of Internal Affairs and Communications (MIC), introduce the AI ethics and governance initiatives of business operators. According to these reference materials, several companies involved in AI have established guidelines and principles on AI ethics and governance, covering organisational structure, security, privacy considerations, fairness, transparency and accountability, proper use, quality assurance and development review, among other related initiatives.

Machine Learning Quality Management Guidelines

More specific guidance and guidelines have also been issued in recent years. For example, on 30 June 2020, the National Institute of Advanced Industrial Science and Technology (AIST), a public research institute under the jurisdiction of METI, issued its Machine Learning Quality Management Guidelines to assist companies in measuring and improving the quality of their AI-based products and reducing accidents and financial losses caused by AI misjudgements. These Guidelines cover quality management throughout the life cycle of AI systems and systematically summarise the actions and inspection items necessary to meet the quality requirements for providing AI system services. The second edition, which adds chapters on fairness and security, was issued on 5 July 2021.
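
Purely as a loose illustration of the kind of automated “inspection item” such lifecycle quality management might involve (the metric and thresholds below are invented for this sketch and are not taken from the Guidelines), a release gate for a trained model could check overall accuracy together with a simple fairness measure, such as the accuracy gap between groups:

```python
# Hypothetical sketch of a release gate: the model passes only if overall
# accuracy meets a threshold and per-group accuracy is roughly even.
# Thresholds and the fairness metric are invented for illustration.

def accuracy(preds, labels):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)


def release_gate(preds, labels, groups, min_accuracy=0.90, max_group_gap=0.05):
    overall = accuracy(preds, labels)
    by_group = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        by_group[g] = accuracy([preds[i] for i in idx],
                               [labels[i] for i in idx])
    gap = max(by_group.values()) - min(by_group.values())
    return overall >= min_accuracy and gap <= max_group_gap


# Toy example: overall accuracy is 5/6 and the per-group gap is large,
# so the gate fails.
preds = [1, 0, 1, 1, 0, 1]
labels = [1, 0, 1, 1, 0, 0]
groups = ["a", "a", "a", "b", "b", "b"]
print(release_gate(preds, labels, groups))  # False
```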

Information Disclosure Guidelines for Safety and Reliability of Cloud Services Using AI (ASP/SaaS Edition)

On 15 February 2022, MIC added the Information Disclosure Guidelines for Safety and Reliability of Cloud Services Using AI (ASP/SaaS Edition), which set out the information that should be disclosed regarding cloud services with AI functionality, to its Information Disclosure Guidelines for Safety and Reliability of Cloud Services.

AI-related disclosures required include the following:

  • sharing of responsibility related to AI functionality (such as whether or not human judgement is involved and liability for damages based on AI judgement);
  • use of and rights to data and learned models;
  • quality (ie, AI accuracy, measures to improve AI accuracy and the level of explanation possible);
  • AI-related collaboration;
  • AI-related security measures; and
  • AI performance.

Guidelines on Assessment of AI Reliability in the Field of Plant Safety

On 17 November 2020, the Fire and Disaster Management Agency of MIC, the Ministry of Health, Labour and Welfare (MHLW) and METI issued the Guidelines on Assessment of AI Reliability in the Field of Plant Safety and the Case Studies on Advanced AI in Plants, aimed at addressing issues related to the introduction of AI in the field of plant safety. The Guidelines, which were revised on 30 March 2021, demonstrate how to assess and manage AI reliability in the field of plant safety, explain AI reliability and establish requirements for AI development. The Case Studies provide examples of successful AI implementation by plant operators and others, summarise the results of AI implementation, and show how implementation issues can be overcome.

AI Utilization Handbook

In July 2020, the Consumer Affairs Agency issued its AI Utilization Handbook to improve consumers’ basic literacy regarding the use of AI. The Handbook has a “basic” section and a “checkpoint by service used” section. The basic section outlines the structure, characteristics and limitations of AI, as well as the types of services utilising AI. The checkpoint section describes the mechanisms of, and considerations for, each type of product and service, as well as data-management considerations, taking into account how consumers actually use AI-based services.

Other Recent Developments

Amendment to Installment Sales Act

The Installment Sales Act was amended on 16 June 2020, with the amendments coming into effect on 1 April 2021. Under the amended Act, credit card companies approved by the Minister of Economy, Trade and Industry may use screening methods based on AI and big data, among other technologies, in place of traditional credit checks that apply one-size-fits-all formulas based on annual income and other criteria, when screening credit card issuance and credit limits. In addition, a registration system has been established for small-scale companies engaged in a credit card business with a credit limit of JPY100,000 or less, allowing such operators, like approved companies, to screen credit limits using a registered screening method instead of the traditional screening of projected payments.

Automated Driving

The amended Road Traffic Act and Road Transport Vehicle Act came into effect on 1 April 2020. Under these Acts, “level 3” vehicles (in which the AI system performs all driving tasks under certain conditions, with the human driver taking over in emergencies) can now legally operate on public roads, and requirements for automated driving systems have been established.

Furthermore, on 19 April 2022, a bill to amend the Road Traffic Act was passed to legally allow “level 4” automated driving, in which the AI system does all the driving under certain conditions. The removal of the prohibition against level 4 automated driving under this amendment will enable operators to provide level 4-equivalent automated mobility services with government approval. The bill is expected to come into effect by March 2023.

The 2022 amendment to the Road Traffic Act also includes rules for automated delivery robots, allowing them to travel in the same areas as pedestrians at speeds of up to 6 km per hour.

Algorithms/AI and Competition Policy by JFTC

In March 2021, the Japan Fair Trade Commission (JFTC) issued Algorithms/AI and Competition Policy, a report of its Study Group on Competition Policy in Digital Markets. The report’s primary purpose is understood to be to give the JFTC a better understanding of the changes in the competitive environment caused by algorithms and AI, enabling it to address the associated competition risks appropriately. The report outlines the issues arising under the Anti-Monopoly Act and how they can be addressed in relation to the following:

  • algorithms/AI and co-operative activities;
  • ranking manipulation;
  • personalisation; and
  • algorithms/AI and competitiveness (eg, gaining competitive advantage through data accumulation).
Nagashima Ohno & Tsunematsu

JP Tower
2-7-2 Marunouchi
Chiyoda-ku
Tokyo 100-7036
Japan

+81 3 6889 7000

+81 3 6889 8000

www.noandt.com/en/