The European Union’s main legislative body, the European Parliament, has approved a bill to regulate AI, potentially making the 27-country bloc the first major economy to enact comprehensive regulations on the technology.

The law, known as the AI Act, restricts the use of AI systems deemed risky, such as facial recognition software. It would also require companies developing AI systems like ChatGPT to disclose more information about the data used to train their chatbots.

Members of the European Parliament, meeting in France, voted in favor of the new law on Wednesday. The vote comes as some experts warn that the rapid development of artificial intelligence could pose a threat to humanity.

    Also read: US and Europe to release AI code of conduct “in the coming weeks”

    Setting global standards

European Parliament President Roberta Metsola said the adoption of the new rules demonstrates Europe’s commitment to the responsible development of AI.

“Europe has led, and will continue to lead, with a balanced, human-centred approach to the world’s first AI law, which will undoubtedly set the global standard for years to come,” Metsola said in a video posted on Twitter.

“And all this is fully in line with our ambition to be a world leader in digital innovation, based on EU values such as respect for privacy and fundamental rights. This is about Europe taking the lead, and doing so responsibly and in our own way.”

The European Parliament’s current draft of the AI Act proposes a risk-based approach to regulating artificial intelligence systems: AI systems are categorized into different levels of risk based on their potential to harm consumers.

According to the law, the least risky category covers AI used in video games and spam filters. The riskiest category includes AI that could be used for social scoring, the practice of scoring individuals based on their behavior to determine their access to things such as loans and housing.

The EU has said it will ban such programs. Companies that develop or use so-called high-risk AI will be required to provide information about how their systems operate, to ensure that the programs are fair and transparent and do not discriminate against individuals, the regulations say.

EU competition chief: Discrimination is a bigger AI risk

EU Commissioner for Competition Margrethe Vestager said “guardrails” like those proposed under the AI Act could help protect people from some of AI’s biggest risks, including discrimination.

For example, AI could be used to decide who gets a mortgage or a job, and those decisions could be based on factors such as race, gender, or religion, she said.

“Probably [the risk of extinction] may exist, but I think the likelihood is quite small. I think the AI risks are more that people will be discriminated [against], they will not be seen as who they are,” Vestager told the BBC after the European Parliament vote.

“If it’s a bank using it to decide whether you can get a mortgage, or if it’s your municipality’s social services, you want to make sure you’re not being discriminated [against] because of your gender, your skin color, or your zip code,” she added.

On Tuesday, Ireland’s Data Protection Commission (DPC) announced that it had put Google’s planned EU rollout of its AI chatbot, Bard, on hold, Politico reported. Google had notified the regulator that it planned to launch Bard in the European Union this week.

But the DPC said it had not received any information from Google about how the company had identified and minimized data protection risks to potential users. The regulator is concerned that Bard could collect and use personal data without users’ consent.

    DPC Deputy Commissioner Graham Doyle said officials were seeking information “as a matter of urgency.” He also asked Google to provide further information on its data protection practices.

Imposing “Strict AI Guardrails”

New rules on AI proposed by the European Parliament would limit the use of biometric systems and the indiscriminate collection of user data from social media and CCTV footage for purposes such as facial recognition.

The proposal would ban the use of artificial intelligence for mass surveillance and require companies to obtain explicit consent from users before collecting their data. As reported by the BBC, Vestager said:

“We want to put in strict guardrails so that it is not used in real time, but only in specific situations such as when you are looking for a missing child or when a terrorist is on the run.”

The EU is ahead of the US and other big Western governments on AI regulation. The bloc has been discussing AI rules for more than two years, and the issue has taken on new urgency since the release of ChatGPT in November.

ChatGPT is a large language model chatbot developed by OpenAI that can generate human-like text. Its release heightened concerns about AI’s potential negative impact on jobs and society, including job displacement and social isolation.

The United States and China are now beginning to formulate concrete policies to regulate AI. The White House has released a set of policy ideas for regulating AI, and China has already issued new regulations banning the use of AI-generated content to spread “fake news.”

In May, the leaders of the so-called G7 countries met in Japan and called for the development of technical standards to keep AI “trustworthy.” They called for an international dialogue on AI governance, copyright, transparency, and the threat of disinformation.

European AI legislation is not expected to come into force until 2025. The EU’s three branches of power – the European Commission, the Parliament, and the Council – must all agree on a final version.
