Color Palettes of Asia's AI Regulations

It’s a collection unlike anything you’ve seen before.


What if an AI chatbot casually said something unethical, or an AI robot caused an accident that injured a person or destroyed public infrastructure? It’s easy to imagine a dystopian society dominated by AI. That is precisely why humans who develop AI technologies must design them “responsibly,” within a social framework of AI laws, norms, and ethical guidelines. Following in the footsteps of the EU’s AI Act, set to become the world’s first officially enforced AI law on August 2, a growing movement in Asia is pushing to enact AI framework laws. Each country in Asia paints ‘accountability’ in its own color. We’ve compiled the trends and developments in AI laws across the region.






China: State-led, industry-specific AI regulation

China’s approach to AI regulation has two main features. First, unlike the EU’s comprehensive AI Act, China regulates specific AI applications or industries with dedicated governing codes. For instance, in 2018, China enacted the Administrative Regulations for AI-Assisted Diagnostic Technology, the Administrative Regulations for AI-Assisted Treatment Technology, and the Opinions on the Development of Internet + Medical Health to govern and supervise AI healthcare programs. Second, AI regulation in China is driven by the state. As with China’s internet censorship system, known as the Great Firewall, freedom of expression is restricted: AI chatbots in China are not allowed to question President Xi Jinping’s orthodoxy or policies, and neither are search engines and social media platforms. The Interim Measures for the Management of Generative AI, effective as of August 2023, explicitly prohibit AI content that promotes state subversion or terrorism.




Japan: Leading the way in creating international norms

Japan is taking the lead in AI regulatory legislation through a national initiative. In November 2023, Japan announced the Hiroshima AI Process agreement at the G7 Summit, the first set of international rules for the use of AI. The Hiroshima AI Process emphasizes the benefits of AI technology while also aiming to address the associated risks and challenges. This includes introducing an authentication mechanism to identify content generated by AI, promoting the development of international technical standards, and protecting personal data and intellectual property rights. Building on this international cooperation framework, Japan launched the Hiroshima AI Process Friends Group in May of this year. The group aims to involve more than 40 countries, including the G7 and EU member states as well as South Korea and Singapore. Japan envisions a scenario in which cooperating countries take joint responsibility for AI ethics. By creating an international cooperation body for AI, Japan has pledged to advance the discussion of international norms for generative AI with its partners.





Singapore: AI framework for enterprises

Singapore was the first Southeast Asian country to introduce a national AI strategy. Currently, there are no specific laws governing the use of AI technology across the board in Singapore. Instead, the country has established “tools” such as AI usage guidelines and testing programs to encourage responsible use of AI technology by companies. The Model AI Framework, a set of AI technology guidelines released by Singapore in January 2019, rests on two main principles: AI used in decision-making should be explainable, transparent, and fair, and AI systems should be human-centered. In March 2021, the AI and Big Data Guide was released to provide a breakdown of AI technologies. In 2022, Singapore introduced AI Verify, the first AI governance testing framework and software toolkit, developed by the Infocomm Media Development Authority (IMDA) in collaboration with various private companies. In June 2023, IMDA established the AI Verify Foundation. AI Verify’s goal is to help develop AI tools that meet the requirements of regulators worldwide and to build capacity for ethical AI testing.




South Korea: Taking a careful approach to sovereign AI

South Korea has recently begun discussing a regulatory law for AI. The country has taken a cautious approach to AI legislation, with few significant developments since the Ethical Standards for Artificial Intelligence were established in December 2020. A recent seminar in South Korea brought together lawmakers, business officials, academic experts, and citizens to discuss the structure and direction of an effective AI law. Participants deliberated on whether AI should be regulated or promoted, and considered alternatives such as a governance system for each government ministry and the creation of an AI safety institute as part of a basic AI law. They also discussed the need for legislation to address the growing social gap resulting from AI development, and how to craft a national AI law suited to Korea’s situation. It’s crucial for South Korea to take the time to include the voices of all stakeholders in order to get the AI Basic Law right. This is the first step toward securing “sovereign AI” in South Korea, which goes beyond data sovereignty to encompass protecting national competitiveness and cultural identity. It is therefore necessary to consider local culture and institutional characteristics in the public, education, defense, legal, medical, and cultural sectors when enacting the AI Basic Law. The process of building a Korean AI Basic Law that protects sovereign AI has begun.




ASEAN: Southeast Asia Joint Statement

How can we ensure the safe, long-term use of AI services across Southeast Asian countries? ASEAN member states have issued the ASEAN Guide on AI Governance and Ethics to address this question. While the EU enforces strict, far-reaching AI regulations, Southeast Asian countries prioritize cultural differences and seek a more business-friendly approach, opting for a voluntary and moderate stance. The ASEAN guide outlines seven guiding principles aimed at fostering trust in AI design, development, and deployment, as well as ethical AI systems: transparency and explainability, fairness and equity, security and safety, robustness and reliability, human-centricity, privacy and data governance, and accountability and integrity. These voluntary guidelines on AI ethics and governance enable member states to tailor regulations to their specific circumstances.







AI regulatory laws in Asia have different colors of responsibility, but they will all eventually blend into one color. The guiding principle is that AI should be used for the betterment of human society. This concept of beneficence, driven by the desire to aid others, is rooted in morality. It entails refraining from causing harm, stealing, or engaging in harmful behavior toward others. To foster ethical AI, humans must first embody ethical values, from the entities driving AI development, such as governments and companies, to the individuals using AI. It is only through a shared sense of responsibility that we can establish and uphold ethical norms for AI.





2024-07-31
Editor: Eunju Lee