EU police force Europol on Monday warned about the potential misuse of artificial intelligence-powered chatbot ChatGPT in phishing attempts, disinformation and cybercrime, adding to the chorus of concerns ranging from legal to ethical issues.
Since its release last year, Microsoft-backed OpenAI's ChatGPT has set off a tech craze, prompting rivals to launch similar products and companies to integrate it or similar technologies into their apps and products.
"As the capabilities of LLMs (large language models) such as ChatGPT are actively being improved, the potential exploitation of these types of AI systems by criminals provide a grim outlook," Europol said as it presented its first tech report starting with the chatbot.
In response to the growing public attention given to ChatGPT, the Europol Innovation Lab organised a number of workshops with subject matter experts from across the organisation to explore how criminals can abuse LLMs, as well as how such models may assist investigators in their daily work.
"ChatGPT's ability to draft highly realistic text makes it a useful tool for phishing purposes," Europol said. With its ability to reproduce language patterns to impersonate the style of speech of specific individuals or groups, the chatbot could be used by criminals to target victims, the EU enforcement agency said.
It said ChatGPT's ability to churn out authentic sounding text at speed and scale also makes it an ideal tool for propaganda and disinformation. "It allows users to generate and spread messages reflecting a specific narrative with relatively little effort."
Criminals with little technical knowledge could turn to ChatGPT to produce malicious code, Europol said.
Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, in an open letter citing potential risks to society.
The open letter, published by the non-profit Future of Life Institute (FLI), currently shows more than 1,000 signatories, including Musk.
"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," said the letter issued by the Future of Life Institute.
The non-profit is primarily funded by the Musk Foundation, as well as London-based group Founders Pledge, and Silicon Valley Community Foundation, according to the European Union's transparency register.
AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs, the letter said.
Should we let machines flood our information channels with propaganda and untruth?
Should we automate away all the jobs, including the fulfilling ones?
Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?
Should we risk loss of control of our civilization?