The White House is gathering opinions to define accountability rules for companies that develop and sell artificial intelligence.
Posted on 12 April 2023 by the Editorial Staff
Given the rapid progress of ChatGPT and similar applications, it is hard to imagine what generative artificial intelligence will become in the coming months and years. Concerns about possible malicious uses and negative impacts on society (employment, education, safety) are legitimate, especially in the absence of a regulatory framework. While the European Union is still finalizing the text of the AI Act (intended to regulate the broader field of artificial intelligence, not just the generative kind), something is happening in the United States: the National Telecommunications and Information Administration (NTIA), an agency of the Department of Commerce, is studying possible control measures to ensure that AI systems are "legal, effective, ethical, safe and trustworthy", Reuters reports.
This is not a rejection, then, but an invitation to tread carefully. "Responsible AI systems could bring huge benefits, but only if we address their potential consequences and harms," said Alan Davidson, the agency's administrator. "For these systems to reach their full potential, businesses and consumers must be able to trust them."
The NTIA is considering possible measures to ensure that AI systems "function as they claim, and do not cause harm", and will shortly present the resulting report to the White House. The White House has, in fact, begun gathering opinions on possible accountability measures for those who develop and use AI systems. It is unclear whether, and to what extent, the recent petition promoted by the Center for Artificial Intelligence and Digital Policy has influenced Washington's moves.
President Joe Biden's view is easy to agree with, but perhaps a little naive: "Technology companies have, in my opinion, the responsibility to ensure that their products are safe before making them public." The problem is that the intrinsic safety of AI software (the correctness with which it processes data, its ability to withstand cyberattacks, the absence of bias) does not rule out possible abuses and malicious uses.
Daniel Zhang, chairman and CEO of Alibaba Group
Alibaba Cloud's move
Meanwhile, not a day goes by without some tech company announcing news in the field of generative AI. The Chinese giant Alibaba has added to its cloud offering a large language model for understanding and processing natural language, which will allow customers to build artificial intelligence applications. "We are facing a technological watershed, driven by generative AI and cloud computing, and companies across all industries have begun to embrace the intelligent transformation to stay ahead," said Daniel Zhang, chairman and CEO of Alibaba Group.
That's not all: the new language model, Tongyi Qianwen, will be progressively integrated into all of Alibaba's business applications, with the aim of improving the user experience and navigation and providing support through chat and search functions. It will support both Chinese and English and will initially be integrated into DingTalk (Alibaba's unified communication and collaboration platform for remote working) and into Tmall Genie smart home devices. It is hard not to see a parallel with the strategy Microsoft has followed with the Azure cloud and the Teams platform.
Tags: artificial intelligence, generative AI