
The technology sector faces a number of threats, and users worry, for example, about malware or the loss of privacy. But according to influential figures in the industry, the greater concern is not the human factor itself so much as its combination with artificial intelligence. At this year's World Economic Forum in Davos, executives from several major technology companies called for legislative regulation of the field. What are their reasons?

“Artificial intelligence is one of the most profound things we as humanity are working on. It is more profound than fire or electricity,” said Alphabet CEO Sundar Pichai last Wednesday at the World Economic Forum, adding that the regulation of artificial intelligence requires a global framework. Microsoft CEO Satya Nadella and IBM CEO Ginni Rometty are likewise calling for standardized rules on the use of artificial intelligence. According to Nadella, it is more necessary today than it was thirty years ago for the United States, China and the European Union to establish rules determining what artificial intelligence will mean for our society and for the world.

Attempts by individual companies to set their own ethical rules for artificial intelligence have met with protests in the past, not least from those companies' own employees. Google, for example, had to withdraw in 2018 from Project Maven, a secret government program that used its technology to analyze images from military drones, after a massive backlash. On the ethical controversies surrounding artificial intelligence, Stefan Heumann of the Berlin-based think tank Stiftung Neue Verantwortung says that the rules should be set by political institutions, not by the companies themselves.

The Google Home smart speaker uses artificial intelligence

The timing of the current wave of calls for regulating artificial intelligence is no accident. In just a few weeks, the European Union is expected to present its plans for the relevant legislation. These could include, for example, rules on the development of artificial intelligence in so-called high-risk sectors such as healthcare and transport. Under the new rules, companies might, for instance, have to document how their AI systems are built in the interest of transparency.

Several scandals connected with artificial intelligence have already surfaced in the past, among them the Cambridge Analytica affair. At Amazon, employees eavesdropped on users through the Alexa digital assistant, and last summer another scandal erupted when Google, through its YouTube platform, collected data from children under the age of thirteen without parental consent.

While some companies remain silent on the topic, Facebook, according to its vice president Nicola Mendelsohn, recently established rules of its own similar to the European GDPR regulation. Mendelsohn said this was the result of Facebook's push for global regulation. Keith Enright, who is in charge of privacy at Google, said at a recent conference in Brussels that the company is currently looking for ways to minimize the amount of user data it needs to collect. "Yet the widespread popular claim is that companies like ours are trying to collect as much data as possible," he added, noting that holding data that brings users no value is a risk.

In any case, regulators do not appear to be taking the protection of user data lightly. The United States is currently working on federal legislation similar to the GDPR. Under it, companies would have to obtain their customers' consent before providing their data to third parties.


Source: Bloomberg
