Camera! Neural network! Cut!

Posted on 2024-11-7 16:30:55
The use of deepfakes and digital images of famous artists and actors is a new challenge for the media industry, said Nikita Danilov, CEO of the National Federation of Music Industry (NFMI), at the CSTB.PRO.MEDIA forum.

Nikita Danilov, NFMI: AI technologies now make it possible to digitize the likeness of any artist. "Digital doubles" are practically indistinguishable from the original image and voice.
According to him, the use of deepfakes and synthesized voices could lead to violations of rights, the spread of false information, and the creation of fake news.

Deepfakes are already being actively used to film commercials and TV series, and fraudsters have not ignored the technology either, said Anton Nemkin, a member of the State Duma Committee on Information Policy, IT and Communications, in an interview with RSpectr. Deepfakes are used to create political content that can artificially provoke social tension.



"The number of fraudulent schemes will only grow, so it is necessary to bring any media content created with the help of AI into the legal field," says Anton Nemkin.

Famous people invest in creating, developing and promoting their image for their creative activities, Nikita Danilov noted. Currently, only photographic images of people enjoy legal protection. The expert believes that

IT IS APPROPRIATE TO DEVELOP LEGAL MECHANISMS FOR THE PROTECTION OF DIGITAL IMAGES OF PEOPLE'S VOICES, BY ANALOGY WITH THE PROTECTION OF OBJECTS OF COPYRIGHT AND RELATED RIGHTS

“We believe that the rights to synthesized voices, digital images of artists and deepfakes should be protected,” he emphasized. The next legal issue, according to the lawyer, is the creation of content using AI.

Nikita Danilov, NFMI:

– In this part, both the Civil Code and international regulation hold that copyright protects what is created directly by a person, that is, a creator. A creative human contribution is required for objects of copyright and related rights to arise, so anything created solely with the help of AI is not subject to protection.

Training neural networks using copyrighted objects that exist on the market (films, books, music) requires the consent of the copyright holder, said Nikita Danilov.

However, the question of how copyright holders will control the use of their works in neural systems remains open. No supervisory authority will be able to determine what algorithms such systems use or which protected works are used to train neural networks. "Our recommendation to developers is to publish lists of the works used to train AI systems," the lawyer advised.

ABROAD WILL HELP US

So far, the industry is regulated only indirectly, and international practice offers no single answer as to how the results of intellectual activity (RIA) created by neural networks should be assessed, Sergei Khaustov, a member of the Association of Lawyers of Russia, emphasized in a conversation with RSpectr. He added that such RIA should belong either to the creator of the AI algorithm or to the author of the request, or be treated as co-authored with the people who created the works on which the artificial intelligence was trained.
