AI Regulation Keeps Heating Up: Musk and Others Call for Watermarking Systems
牛🐮🐮🐮
2023-04-01 13:30:46
A few days ago the whole internet was following the open letter in which Musk and more than 1,000 other prominent figures called for a pause on training AI systems more powerful than GPT-4, yet almost no one actually read its content. The original text is as follows:

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5]  We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.

The letter in fact makes a whole set of recommendations on regulation, and one of the oversight measures it highlights is a watermarking system: provenance and watermarking tools to help distinguish real content from synthetic content and to track model leaks.


Musk and his co-signatories are not the only ones advocating watermarks. Liang Zheng (梁正), vice dean of the Institute for AI International Governance at Tsinghua University and director of its Center for AI Governance Research, said in an interview that adding a "digital watermark" may become one solution to the current risks and regulatory gaps. A digital watermark embeds specific information into a digital signal without affecting the usability of the carrier, and it is not easy to detect or tamper with afterwards; humans cannot see it, but computers can.

OpenAI, the company behind ChatGPT, has likewise said it is considering adding a watermark to ChatGPT's output to reduce the harm from misuse of the model. According to earlier media reports, OpenAI has also released a tool called AI Text Classifier to help tell whether a given text was written by a human or by an AI. In recent research, watermarks have already been used to identify AI-generated text with respectable accuracy: researchers at the University of Maryland, for example, used a watermark detection algorithm they built to identify text generated by Meta's open-source language model OPT-6.7B.
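To make the text-watermark idea concrete, here is a minimal, self-contained sketch of "green-list" detection in the spirit of the University of Maryland work mentioned above. Everything in it is a simplifying assumption made for illustration: the hash-seeded green-list assignment, the 50% green fraction, and the plain z-score test are toy stand-ins, not the researchers' actual algorithm or any vendor's API.

```python
import hashlib
import math

GREEN_FRACTION = 0.5  # assumed fraction of the vocabulary marked "green" at each step

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign `token` to the green list, seeded by the previous token."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def green_z_score(tokens: list) -> float:
    """z-score of the observed green-token count against the unwatermarked expectation."""
    n = len(tokens) - 1  # number of (previous token, token) pairs scored
    greens = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    expected = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - expected) / std

# A watermarking sampler would bias generation toward green tokens, so
# watermarked text scores z >> 2 while ordinary text hovers near 0.
print(round(green_z_score("the quick brown fox jumps over the lazy dog".split()), 2))
```

A real deployment would score subword tokens from the model's own tokenizer and hash with a secret key, so that only the key holder can run the detector.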

Regulation is clearly imperative: the UK, Italy, UNESCO and others have all begun pushing for oversight of how AI is developed and used. Adding watermarks to documents, audio, images, and video, both to deter misuse and to enable tracing content back to its source, may well be the most practical approach, as the sketch below illustrates.
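On the embedding side, the textbook least-significant-bit (LSB) trick shows in a few lines how a mark can ride along inside an image without visibly changing it, which is exactly the "humans cannot see it, but computers can" property described above. This is a hypothetical toy over a flat list of 8-bit grayscale pixel values; production provenance schemes must also survive compression, cropping, and re-encoding.

```python
def embed(pixels: list, bits: list) -> list:
    """Hide one watermark bit in the least significant bit of each leading pixel (0-255)."""
    stamped = [(p & ~1) | b for p, b in zip(pixels, bits)]
    return stamped + pixels[len(bits):]

def extract(pixels: list, n_bits: int) -> list:
    """Recover the hidden bits from the first n_bits pixels."""
    return [p & 1 for p in pixels[:n_bits]]

pixels = [200, 13, 77, 254, 9, 128, 64, 33]
mark = [1, 0, 1, 1, 0, 0, 1, 0]
stamped = embed(pixels, mark)
assert extract(stamped, len(mark)) == mark                    # bit-exact recovery
assert all(abs(a - b) <= 1 for a, b in zip(pixels, stamped))  # change is imperceptible
print("watermark recovered:", extract(stamped, len(mark)))
```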


Author's disclosure: this is a repost, not a securities recommendation or investment advice; it is intended only to provide additional information, and the author does not guarantee its accuracy.
Disclaimer: the views in this article come from a forum user and represent only the author's personal research opinions, not the views or position of 韭研公社. Nothing on this site constitutes investment advice; investors should mind the risks and make independent, prudent decisions.
Tagged stocks: 汉邦高科, 数码视讯, 三六零, 国投智能, 昆仑万维
Comments (2)
  • 干就完鸟 (2023-04-01 21:39): Watermarks are a good thing. They can help stop models from being abused.