Editor's Note

Makoto Shiono, Director of Management at the Institute of Geoeconomics, examines the ethical and security challenges posed by AI and explores opportunities for South Korea-Japan cooperation in shaping global AI governance. He argues that both countries, as leading technology hubs and democracies, should collaborate on establishing shared ethical guidelines, ensuring transparency, and promoting risk-based AI regulations. Shiono underscores the importance of engaging in multilateral platforms like the Hiroshima AI Process and G7 discussions, advocating for joint research, policy coordination, and industry partnerships to enhance AI safety and security while fostering innovation. He further highlights the potential for these shared principles to influence global AI governance frameworks.

I. The Rapid Evolution of AI and the Importance of Addressing Ethical Challenges

 

In recent years, the rapid evolution of artificial intelligence (AI) has transformed a wide range of societal domains, from industry to our daily lives. AI can be considered one of the most impactful technologies on human society since the popularization of the internet in the 1990s. In particular, the advent of generative AI may significantly affect human thought processes and labor environments. When OpenAI released ChatGPT on November 30, 2022, generative AI became accessible to the public. ChatGPT reached one million users within a week (Altman 2022), and within two months, it was estimated to have surpassed one hundred million users (Hu 2023).

 

ChatGPT features a conversational interface that resembles human-to-human interaction. Technically, it combines a Large Language Model (LLM) with Reinforcement Learning from Human Feedback (RLHF). The technology that enables ChatGPT’s advanced text generation is called the “Transformer,” an architecture composed of the attention mechanism and multi-layer perceptrons (MLPs). Introduced in 2017 by Ashish Vaswani and colleagues at Google in “Attention Is All You Need,” the attention mechanism extracts the necessary information from a sequence of words, while the MLPs retrieve relevant learned content from large-scale data during processing. By iteratively predicting the most likely next word, Transformers generate coherent text.
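The attention computation described above can be sketched in a few lines of NumPy. This is an illustrative toy, not any production implementation: real Transformers use learned projection matrices for queries, keys, and values, many attention heads, and billions of parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Each output row is a weighted average of the value vectors, with
    # weights reflecting how strongly each query matches each key.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # query-key similarity
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V, weights

# Toy input: 3 token positions, 4-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
# A real Transformer would derive Q, K, V from learned projections of X;
# identity projections are used here purely for illustration.
out, w = scaled_dot_product_attention(X, X, X)
print(w.sum(axis=-1))  # each row of attention weights sums to 1
```

Because the weights in each row sum to one, every output position is a convex combination of the inputs, which is what lets the model "extract necessary information from a series of words."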

 

LLMs such as ChatGPT are increasingly integrated into our daily lives. LINE Yahoo, a well-known Japanese internet company, is jointly owned through a holding company by Japan’s SoftBank and South Korea’s NAVER. NAVER has developed and operates HyperCLOVA X, an LLM designed to understand the Korean language in its social context. NAVER reports that HyperCLOVA X is trained on 6,500 times more Korean-language data than OpenAI’s GPT-4, making it highly capable of reflecting Korean culture, social norms, and values in responses that resonate with Korean users. Although HyperCLOVA X also handles English, Japanese, and Chinese, its primary focus on high-quality Korean data ensures that Korea’s socio-cultural values are embedded in the system. As HyperCLOVA X illustrates, LLMs trained on vast linguistic datasets tend to reflect the knowledge and values contained in those languages, including the ethical norms of the societies where those languages are used.

 

A common challenge with LLMs is that their generated content can contain inaccuracies or refer to nonexistent entities—a phenomenon known as “hallucination.” Because LLMs predict the most statistically likely next word, they can generate combinations that do not exist. Errors may also arise if the data used for training contains inaccuracies. As AI is developed using such learning processes, some degree of error becomes inevitable. Accordingly, global policy interventions are needed to address potential social problems caused by AI.
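The failure mode described above can be illustrated with a deliberately crude next-word predictor. The bigram model below is a toy stand-in for an LLM, with an invented six-sentence "corpus": chaining the statistically most likely next word produces a fluent sentence that happens to be false, because the model has no mechanism for checking statements against reality.

```python
from collections import Counter, defaultdict

# Toy training text: the model learns only which word most often follows
# which, with no notion of factual truth.
corpus = "paris is the capital of france . rome is the capital of italy .".split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def most_likely_next(word):
    # Greedily pick the most frequent successor, analogous to an LLM
    # choosing a high-probability next token.
    return follows[word].most_common(1)[0][0]

word, out = "rome", ["rome"]
for _ in range(5):
    word = most_likely_next(word)
    out.append(word)
print(" ".join(out))  # → "rome is the capital of france"
```

Every transition in the generated sentence is statistically well supported by the corpus, yet the combination as a whole is wrong, which is exactly the "plausible but nonexistent combination" problem described above.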

 

Before the era of generative AI, AI was used for functions such as facial recognition and analyzing user preferences in video applications like YouTube or TikTok to maximize viewing time. AI has now become so pervasive in everyday software that we have reached a point where we cannot always discern whether our counterpart in a conversation is human or AI. As AI advances, it is increasingly critical to develop it with respect for human values, privacy, and fairness to prevent negative impacts on human society. All nations should cooperate to address the ethical issues surrounding AI. This article will discuss recent AI ethics debates in Japan and consider possibilities for collaboration between South Korea and Japan in the domain of AI ethics.

 

II. AI Risks and the Development of Japan’s AI Guidelines

 

To harness AI’s potential while mitigating its risks, it is essential to establish guidelines that ensure safe and ethical use of AI technologies. These guidelines provide frameworks for developers, businesses, and policymakers, promoting accountability, transparency, and public trust. In Japan, AI guidelines play a crucial role in ensuring that AI aligns with societal values and priorities.

 

In April 2024, the Japanese government released the “AI Business Operators’ Guidelines (METI 2024),” which consist of six detailed annexes designed to translate the main guidance into practical measures. These guidelines serve as a comprehensive reference for the various stakeholders implementing AI-related initiatives. The government views itself as having taken a leading role in global discussions on AI principles—within the G7, G20, OECD, and other international forums—and believes that establishing these guidelines enables Japan to promote the correct identification of AI risks, guided by international trends and stakeholder concerns. The ultimate goal is to facilitate voluntary, practical measures in everyday life.

 

Japan’s AI Business Operators’ Guidelines integrate and build upon “Human-Centered AI Society Principles (2019),” “AI Development Guidelines (2017),” “AI Utilization Guidelines (2019),” and “Governance Guidelines for Implementing AI Principles (2022).” The 2019 “Human-Centered AI Society Principles,” formulated by the government’s Council for Integrated Innovation Strategy, aim to utilize AI for addressing global challenges such as environmental issues, widening inequality, and resource depletion—an approach referred to as “Society 5.0.” In the Fifth Science and Technology Basic Plan, Society 5.0 is described as a “human-centered society that balances economic advancement with the resolution of social problems through a highly integrated system of cyberspace and physical space.” Here, the government emphasizes harmonizing economic development with the resolution of societal challenges.

 

The core ideas of the Human-Centered AI Society Principles include:

 

1. Respect for human dignity—AI should augment and enhance human capabilities.

 

2. Diversity and inclusiveness—enabling diverse individuals to pursue their own forms of happiness.

 

3. Sustainability—leveraging AI to address issues such as inequality and environmental degradation.

 

Based on these ideas, the Principles outline seven guidelines: (1) Human-Centric Principle, (2) Principle of Education and Literacy, (3) Principle of Privacy Protection, (4) Principle of Security, (5) Principle of Fair Competition, (6) Principle of Fairness, Accountability, and Transparency, and (7) Principle of Innovation. The Japanese government states that these guidelines will be periodically updated to reflect changes in international trends and new technologies.

 

The AI Business Operators’ Guidelines emphasize three core concepts:

 

1. Support for businesses’ voluntary efforts,

 

2. Coordination with international discussions,

 

3. Clarity for readers.

 

They are intended to ensure effectiveness and legitimacy through repeated multi-stakeholder review—including educational and research institutions, civil society, and private companies—and function as a living document that will continue to evolve. The Japanese government seeks consistency between the Society 5.0 vision and human-centered AI principles, motivated in part by Japan’s demographic challenges such as a rapidly aging population and labor shortages.

 

In 2023 and 2024, the Japanese government accelerated its AI policy. An “AI Strategy Council,” chaired by Professor Yutaka Matsuo of the University of Tokyo, leads governmental discussions. The council set a budget of 164.09 billion yen (a 44% increase over the previous fiscal year) for AI initiatives and decided to subsidize between one-third and one-half of major firms’ costs to expand GPU (graphics processing unit) infrastructure, a critical AI resource. In 2023, the council compiled “Interim Points of Discussion on AI,” focusing on recent developments in generative AI. This report notes Japan’s long-standing affinity for technology and AI, suggesting that despite prolonged economic stagnation, the advent of AI could provide renewed momentum for growth. The report also identifies seven primary risks:

 

1. Leakage of Confidential Information and Misuse of Personal Data: Generative AI may collect user interaction data and use it for targeted advertising. Moreover, when training on internet data, AI systems risk gathering personal information improperly.

 

2. Increased Ease and Sophistication of Crime: Generative AI can facilitate illegal activities, such as creating realistic deepfake voices or images at low cost, potentially enabling fraud or drug production. While Japan’s Penal Code and anti-hacking laws may cover some cases, emerging offenses require new legal measures.

 

3. Social Disruption through Misinformation: Generative AI can effortlessly produce and disseminate fake news or biased information, causing public unrest or interference in democratic processes. Tools to detect misinformation and curb its spread are necessary.

 

4. More Sophisticated Cyberattacks: AI-assisted cyberattacks may become more advanced. Generative AI can help craft emails that evade detection, impersonate humans, or even target AI systems themselves.

 

5. Impacts on Education: Using AI to complete homework or write reports may hinder students’ creativity. Yet AI-based personalized learning could enhance educational outcomes. The Ministry of Education must urgently establish guidelines and boost AI literacy.

 

6. Copyright Violations: AI-generated content can closely resemble existing works, raising potential for increased copyright infringements. The government should promote awareness of current copyright law and consider further regulation of AI-generated outputs.

 

7. Growing Risk of Unemployment: Generative AI’s capacity to automate creative tasks, such as writing and image creation, could lead to job losses. The government should study AI’s impact on employment and promote reskilling and labor mobility.

 

III. South Korea’s National AI Strategy

 

In 2019, the South Korean government released its National AI Strategy, in which Choi Ki-young, Minister of Science and ICT, emphasized the realization of “human-centered AI.” In 2020, the Ministry of Science and ICT and the Korea Information Society Development Institute introduced the “National AI Ethics Framework,” which sets out basic and comprehensive standards for all members of society to observe when developing and deploying AI (KISDI n.d.).

 

The Framework’s highest value is humanity, and under three core principles—(1) Human Dignity, (2) Public Good, and (3) Technological Appropriateness—lie ten key requirements that span the AI lifecycle: (1) Guarantee of Human Rights, (2) Privacy Protection, (3) Respect for Diversity, (4) Non-Maleficence, (5) Publicness, (6) Solidarity, (7) Data Governance, (8) Responsibility, (9) Safety, and (10) Transparency.

 

In 2020, South Korea also published an AI Roadmap containing thirty priorities, including standards for high-risk AI. Notably, one provision discusses the possibility of granting legal personhood to AI. The government has aggressively steered the development of the AI ecosystem, with President Yoon Suk-yeol declaring in April 2024 that South Korea aims to become one of the top three AI nations. Given that Korea’s National AI Strategy highlights “human-centered AI,” the fundamental direction of Korean AI ethics does not differ markedly from Japan’s, suggesting that the two countries share sufficient common ground for collaboration.

 

IV. The Role of Risk-Based Approaches and the “Hiroshima AI Process”

 

Japan’s AI Business Operators’ Guidelines adopt a risk-based approach, which involves identifying potential societal harms—such as discrimination or misinformation—early on and evaluating their severity and likelihood. Measures are then taken according to the level of risk, with ongoing risk management throughout the AI lifecycle. Multiple stakeholders participate in this process, and transparency is emphasized. By tailoring regulations to different levels of risk, governments aim to avoid inhibiting innovation through overly restrictive laws, while still maximizing AI’s social benefits. This approach is widely accepted internationally, including by the OECD and the G7, facilitating global consensus.
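The triage step of the risk-based approach described above can be sketched as a simple scoring function. The severity and likelihood scales, thresholds, and example harms below are illustrative assumptions for exposition, not values prescribed by Japan’s guidelines.

```python
# Toy sketch of risk-based triage: score each identified harm by severity
# and likelihood, then match the response to the resulting tier.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_tier(severity: str, likelihood: str) -> str:
    # Combine the two axes into a single score (thresholds are assumptions).
    score = LEVELS[severity] * LEVELS[likelihood]
    if score >= 6:
        return "high"      # e.g., mandatory review before deployment
    if score >= 3:
        return "medium"    # e.g., documented mitigations and monitoring
    return "low"           # e.g., periodic self-assessment

# Hypothetical harms identified early in the AI lifecycle.
harms = [
    ("discrimination in hiring recommendations", "high", "medium"),
    ("misinformation in generated summaries", "medium", "medium"),
    ("minor formatting errors in output", "low", "low"),
]

for name, sev, lik in harms:
    print(f"{name}: {risk_tier(sev, lik)} risk")
```

The point of such tiering is the one the guidelines make: heavier obligations attach only to higher-risk uses, so low-risk innovation is not burdened by the controls reserved for serious harms.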

 

A critical diplomatic endeavor for advancing international cooperation on AI safety and ethics was Japan’s “Hiroshima AI Process” (2023). Conducted the year before Japan finalized its AI Business Operators’ Guidelines, the Hiroshima AI Process was a key step toward establishing international guidelines. At the April 2023 G7 Digital and Technology Ministers’ Meeting (G7 Digital and Technology Ministers’ Meeting 2023), participants adopted a ministerial declaration highlighting “Responsible AI and AI Governance.” They reaffirmed that “AI policies and regulations should be human-centric, uphold human rights and fundamental freedoms (including privacy and personal data protection) consistent with democratic values, and be risk-based and forward-looking.”

 

In September 2023, the G7 Digital and Technology Ministers’ Statement further recognized the need to address the risks posed by rapid AI advances while harnessing their benefits. The October 2023 G7 Leaders’ Statement on the Hiroshima AI Process (Ministry of Foreign Affairs of Japan 2023) likewise underscores the “innovative opportunities” and “transformational potential” of cutting-edge AI systems, demonstrating the global focus on generative AI at that time.

 

The document published under the Hiroshima AI Process, titled “Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems” (Japanese Government 2023), begins by noting its goal of promoting safe, secure, and trustworthy AI worldwide. It underscores that organizations “should not develop or deploy advanced AI systems in ways that undermine democratic values, are particularly harmful to individuals or communities, facilitate terrorism, enable criminal misuse, or pose substantial risks to safety, security, and human rights.”

 

The Hiroshima AI Process highlights democratic values, ethical AI practices, risk management, and international cooperation. This initiative contributed significantly by issuing a guiding framework to address the global evolution of AI. Such emphasis on democratic values could serve as a foundation for Japan–South Korea cooperation in AI ethics.

 

V. The Evolution of AI Policy in Japan

 

Japan’s AI development environment differs from that of U.S.-style Big Tech companies or China’s state-regulated private firms. The “Artificial Intelligence Technology Strategy” (Cabinet Office, Government of Japan 2017), published by the Japanese government in March 2017, noted that Japan lags behind the U.S. and China in AI-related academic publications, and placed priority on creating robust research and development environments involving both government and private sectors, as well as addressing Japan’s pressing shortage of AI talent. The strategy categorized AI’s development into three phases:

 

Phase 1: Promoting data-driven AI utilization in specific domains.

 

Phase 2: Advancing broader AI and data use across boundaries of individual fields.

 

Phase 3: Linking multiple domains in complex ways to form an AI ecosystem.

 

It also outlined three main approaches to cultivating AI talent:

 

1. Designing and implementing educational programs to produce immediately deployable personnel.

 

2. Encouraging joint research and workforce development collaborations between universities and industry.

 

3. Building on government and research institutions’ initiatives to further strengthen these programs.

 

Japan does not rely primarily on a U.S.-style start-up ecosystem fueled by massive venture capital investments. Instead, its approach involves close collaboration among universities, industry, and government for AI research, development, and commercialization. In May 2015, the National Institute of Advanced Industrial Science and Technology (AIST) established the “Artificial Intelligence Research Center.” Professor Junichi Tsujii, a leading expert in natural language processing and text mining, was appointed as the inaugural director. Tsujii noted that, as AI permeates all industrial sectors, failure to keep up would jeopardize entire industries.

 

Japan’s AI development agenda strongly reflects national social conditions. According to the “Artificial Intelligence Technology Strategy,” Japan is on track to experience the world’s first major wave of rapid population aging. Leveraging AI to transform immense medical and caregiving data into a “world-leading advanced healthcare and caregiving system” is a prominent goal. By 2030, more than 40% of Japan’s population will be seniors, and the government envisions a society where people remain actively employed even at age 80. Thus, AI is expected to help solve social issues such as population aging and labor shortages. The same objectives appear in the government’s “Human-Centered AI Society Principles (2019),” which aim to utilize AI in addressing environmental crises, inequality, and resource depletion. Japan’s focus on “human-centeredness” thus underpins its AI ethics. Although South Korea’s concerns differ to some extent (its demographic challenges, for example, also tie AI to military capabilities), South Korea and Japan share pressing demographic concerns.

 

VI. AI Business Guidelines and Ethical Standards for AI Development

 

Anticipating AI’s widespread business applications, the Japanese Society for Artificial Intelligence (JSAI) discussed forming an ethics committee as early as 2014. At that time, Hitoshi Matsubara, then President of JSAI and professor at Future University Hakodate, requested Yutaka Matsuo at the University of Tokyo to consider establishing such a committee. The first meeting of the JSAI Ethics Committee was held at the University of Tokyo in December 2014 (Japanese Society for Artificial Intelligence, Ethics Committee 2015).

 

Initially, the JSAI debated whether to call it an “Ethics Committee” or the “Committee on AI and Future Society.” In fact, a JSAI Ethics Committee had existed since 2007 but had become inactive. Its reestablishment in 2014 attracted considerable media attention. Notably, the Ethics Committee included the science fiction writer Satoshi Hase, illustrating the JSAI’s broad-based approach to future challenges. In 2017, the Ethics Committee released the “Ethical Guidelines of the Japanese Society for Artificial Intelligence” (Japanese Society for Artificial Intelligence 2017). These guidelines center on professional ethics for researchers, but Article 9 states that if AI “is to become a member of society or its equivalent, it must adhere to the same ethical guidelines as JSAI members,” thereby extending ethical obligations to AI artifacts themselves.

 

A decade later, in 2024, Japan’s Ministry of Internal Affairs and Communications and the Ministry of Economy, Trade and Industry published the “AI Business Operators’ Guidelines,” rooted in the 2019 government principle of a “Human-Centered AI Society.” Also in 2019, the Ministry of Internal Affairs and Communications (MIC) published the “AI Utilization Guidelines” (Ministry of Internal Affairs and Communications 2019). That document notes that in 2016, at the G7 ICT Ministers’ Meeting (hosted by Japan), the Japanese government proposed an initial draft of AI development principles, and in 2018, Japan’s representatives introduced its “Human-Centered AI Society Principles,” “AI Development Guideline Draft,” and “AI Utilization Guidelines Draft” at the OECD AI Expert Group. Japan contends that the 2019 OECD Recommendation on Artificial Intelligence aligns with Japanese principles and guidelines.

 

Japanese private-sector initiatives for AI ethics have accelerated since then. In 2018, Sony Group announced its “Sony Group AI Ethics Guidelines” (Sony Group 2023), applicable to all officers and employees engaged in AI development and use. In 2019, NEC Group established its “AI and Human Rights Policy” (NEC Group 2019). In 2021, Hitachi formulated its “AI Ethics Principles,” reflecting both OECD discussions and the company’s Social Innovation business ethos.

 

In sum, Japan’s academic circles, government bodies, and private firms have adopted a risk-based approach, identifying high-risk domains and focusing on them. As for regulation, Japan does not have a comprehensive AI law; it relies on government-issued guidelines and private-sector codes of conduct. Japan has thus far refrained from enacting legislation specifically targeting AI, opting instead for a soft-law approach, in contrast to the EU’s more stringent “hard law.” However, this strategy has begun to shift slightly: building on the principles of “balancing risk management with the promotion of innovation” and “international cooperation,” the government is scheduled to submit a bill in January 2025 that would enable investigations into AI operators. The EU, set to enforce its AI Act around 2026, seeks to protect human rights and freedoms by regulating training data and categorizing AI into four groups: unacceptable, high-risk, transparency-required, and minimal or no risk. High-risk and transparency-required systems face mandatory obligations. Meanwhile, in the United States, the White House issued an “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” calling for standards, testing requirements, measures against algorithmic discrimination, and privacy protection, while aiming to foster innovation and competitiveness.

 

In November 2024, South Korea’s Science, ICT, Broadcast, and Communications Committee approved the “Artificial Intelligence Industry Promotion and Trust Assurance Act” for parliamentary debate. This legislation would require AI-generated works to be labeled as such and would impose fines of up to 30 million won for non-compliance. South Korea’s approach appears more aligned with the EU’s hard law, potentially mandating strict safety standards for AI systems. However, following the declaration of martial law in December 2024 and the ensuing presidential impeachment crisis, the bill’s fate is uncertain.

 

VII. The Future of Cooperation Between South Korea and Japan on AI Ethics

 

As two democratic nations, South Korea and Japan share broad values—particularly the concepts of “human-centeredness,” “diversity,” and “sustainability” at the core of their AI ethics principles. South Korea clarifies commitments to human rights and the public good via its National AI Strategy and the “National AI Ethics Framework,” while Japan bases its approach on the “Human-Centered AI Society Principles” and the “AI Business Operators’ Guidelines,” employing a risk-based approach rooted in international consensus. Both countries should proactively participate in global discussions—through platforms like the Hiroshima AI Process, the G7, and the OECD—to influence and align with emerging international norms. In December 2024, the G7 reached an agreement on the fundamental aspects of a framework requiring AI developers to report risks. Implementation of this framework is also scheduled to commence in February 2025.

 

On the global stage, the war in Ukraine and other conflicts have ushered in the use of drones and other new weaponry, sparking concerns about AI’s role in modern warfare. For some time, the international community has debated the risks of lethal autonomous weapons systems (LAWS), which delegate decision-making to AI. South Korea, facing tensions with North Korea, has stationed surveillance robots along the Demilitarized Zone (DMZ). As its defense industry thrives, South Korea supplies weaponry, particularly to European countries wary of Russia at their borders, and may continue to expand in this domain. In contrast, many Japanese private firms have exited the defense sector. Hence, South Korea has more concrete experience and expertise in weapons-related AI, enabling a potentially significant contribution to international discourse on the ethics of AI-equipped weapon systems.

 

In February 2024, Japan established the AI Safety Institute, appointing Akiko Murakami as Executive Director. This institute evaluates AI safety methods, sets standards, and facilitates Japan’s international cooperation. The 2024 report of the Council for Integrated Innovation Strategy mentions three key areas for strengthening: (1) Integrated strategies for critical technologies, (2) Global collaboration, and (3) Enhancing AI competitiveness while ensuring safety and security. Specifically, Japan aims to take the lead in international rulemaking for AI. To achieve this, the government plans to strategically leverage bilateral and multilateral frameworks with allies and like-minded nations, as well as ASEAN partners, in coordination with industry and academia. From a South Korean standpoint, these efforts suggest avenues for deeper cooperation with Japan.

 

As neighboring countries with closely intertwined industries, South Korea and Japan hold complementary strengths in technological and social contexts. With the global rise of generative AI, how each country embeds its language and socio-cultural norms into AI models will become a key issue—one that underscores the importance of “AI sovereignty.” Standing apart from the AI giants of the U.S. and China, South Korea and Japan can collaborate to highlight the importance of national sovereignty in AI, reflecting distinct socio-cultural norms and shared demographic concerns, including low birth rates and aging populations.

 

Joint efforts between South Korea and Japan in AI ethics could yield a collaborative ecosystem. Potential avenues include refining mutual guidelines and technical standards, sharing data, undertaking joint research and development, and reinforcing coordination in multilateral forums to shape the rules of AI governance. Such cooperation would accelerate the realization of an AI-powered society characterized by “trustworthiness,” “transparency,” and “accountability,” reinforcing the two countries’ leadership standing in international discourse. Their shared democratic values, emphasis on ethical principles, practical applications for solving social challenges, and active engagement in international norm-setting form a robust foundation for a “human-centered AI society.” By working together, South Korea and Japan can present an advanced model of AI ethics to the global community and thereby contribute to shaping a more equitable international order.

 

References

 

Altman, Sam. 2022. “ChatGPT Launched on Wednesday. Today It Crossed 1 Million Users!” X, December 5. https://x.com/sama/status/1599668808285028353 (Accessed December 22, 2024).

 

Cabinet Office, Government of Japan. 2017. Artificial Intelligence Technology Strategy 2017. https://www.ai-japan.go.jp/menu/learn/ai-strategy-1/75ddfd6ab65e8bcd6fe80e4676d902967c53ca4d.pdf (Accessed December 22, 2024).

 

G7 Digital and Technology Ministers’ Meeting. 2023. Ministerial Declaration. April 30. https://www.digital.go.jp/assets/contents/node/information/field_ref_resources/efdaf817-4962-442d-8b5d-9fa1215cb56a/5c1391d9/20230519_news_g7_results_japanese_00.pdf. (Accessed December 22, 2024).

 

Hu, Krystal. 2023. “ChatGPT Sets Record for Fastest-Growing User Base - Analyst Note.” Reuters, February 3. https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/. (Accessed December 22, 2024).

 

Japanese Government. 2023. Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems. https://www8.cao.go.jp/cstp/ai/ai_senryaku/6kai/kokusaishishin.pdf. (Accessed December 22, 2024).

 

Japanese Society for Artificial Intelligence, Ethics Committee. 2015. Purpose for the Establishment of the Ethics Committee of the Japanese Society for Artificial Intelligence. https://www.ai-gakkai.or.jp/ai-elsi/about/purpose. (Accessed December 22, 2024).

 

Japanese Society for Artificial Intelligence. 2017. Ethical Guidelines. https://www.ai-gakkai.or.jp/ai-elsi/wp-content/uploads/sites/19/2017/02/%E4%BA%BA%E5%B7%A5%E7%9F%A5%E8%83%BD%E5%AD%A6%E4%BC%9A%E5%80%AB%E7%90%86%E6%8C%87%E9%87%9D.pdf. (Accessed December 22, 2024).

 

Korea Information Society Development Institute (KISDI). n.d. Artificial Intelligence Standards (In Korean). https://ai.kisdi.re.kr/aieth/main/contents.do?menuNo=400029. (Accessed December 22, 2024).

 

Ministry of Economy, Trade and Industry (METI). 2024. “AI Business Operators’ Guidelines (Version 1.0) (In Japanese).” April 19. https://www.meti.go.jp/shingikai/mono_info_service/ai_shakai_jisso/pdf/20240419_3.pdf. (Accessed December 22, 2024).

 

Ministry of Foreign Affairs of Japan. 2023. G7 Hiroshima Summit Outcome Documents (In Japanese). https://www.mofa.go.jp/mofaj/files/100573465.pdf. (Accessed December 22, 2024).

 

Ministry of Internal Affairs and Communications. 2019. AI Utilization Guidelines (In Japanese). https://www.soumu.go.jp/main_content/000624438.pdf. (Accessed December 22, 2024).

 

NEC Group. 2019. AI and Human Rights Policy (In Japanese). https://jpn.nec.com/press/201904/images/0201-01-01.pdf. (Accessed December 22, 2024).

 

Sony Group. 2023. AI Ethics Activities (In Japanese). https://www8.cao.go.jp/cstp/ai/ningen/r5_1kai/siryo1.pdf. (Accessed December 22, 2024).

 


 

Makoto Shiono is Group Head for Emerging Technologies and Director of Management at the Institute of Geoeconomics, International House of Japan.

 


 

Typeset by Chaerin Kim, Research Assistant
    For inquiries: 02 2277 1683 (ext. 208) | crkim@eai.or.kr
 
