    Japan Won’t Penalize Developers in New AI Guidelines

    The Japan Times reports that government and ruling party officials say Japan will not impose penalties on companies that fail to comply with its next-generation AI guidelines, a list of 10 principles for the use and development of AI.

    Instead, the guidelines will focus on encouraging AI developers to act more responsibly. The aim is to accelerate AI development rather than stifle innovation and economic growth by imposing tough penalties and rules on startups that violate the principles, according to the report.

    Japan considers AI certification system

    According to officials, the guidelines “enumerate 10 principles, including observance of the Constitution, respect for human dignity, protection of privacy, and the need to ensure transparency in data learning.”

    Japan also wants to prevent situations in which personal information is leaked during AI training. The government plans to ask companies to keep users from relying too heavily on AI and to refrain from sharing users’ personal information with third parties without their permission.

    To prevent privacy violations, Japanese authorities are considering “introducing a certification system” to protect user data and increase transparency from AI developers, the Japan Times wrote. The rules would also cover about eight industries considered at high risk from the use of AI, such as finance, medicine, and broadcasting, it added.

    The guidelines, expected to be finalized by the end of the year, will only apply to companies building generative AI systems, such as OpenAI’s ChatGPT, rather than general users.

    Japan is looking to AI to boost economic growth, address labor shortages, and become a leader in advanced chips. The government is reportedly backing chipmaker Rapidus to manufacture high-tech chips as part of an industrial policy aimed at restoring Japan’s technological leadership.

    Not following the EU’s “strict” example

    The development of generative AI by companies such as OpenAI and Anthropic is stirring both fear and excitement because of the potential impact the technology could have on the economy and society. Japan is largely playing catch-up with the United States and the European Union (EU).

    This may explain why the country is taking a more relaxed, flexible approach to AI regulation. That stance contrasts sharply with the EU’s stricter AI rules, which the bloc hopes will serve as a blueprint for other countries to follow.

    Europe’s draft AI regulations have been criticized by the US State Department, which warns they could discourage investment in emerging technologies and favor large AI companies over smaller competitors. Some rules in the law are based on “vague or undefined” terms, it said, according to Bloomberg.

    Professor Yutaka Matsuo of the University of Tokyo, who chairs the Japanese government’s AI Strategy Council, previously said the EU’s draft AI law was “too strict,” adding that it would be “nearly impossible” to specify copyrighted works used for deep learning.

    “For the EU, the issue is not how to promote innovation, but the issue is already about holding big companies accountable,” Matsuo said, according to a Reuters report.

    Japan’s computing power, measured by the availability of graphics processing units (GPUs) used to train AI, lags far behind that of the United States, Matsuo said.

    “Even if we increased the number of GPUs in Japan by a factor of 10, it would probably still be less than what OpenAI has available,” he added.
