VOA Learning English Listening: Can AI Chatbots Learn to Be More Truthful?

2023-08-03 15:16

Newly developed artificial intelligence (AI) systems have demonstrated the ability to perform at human-like levels in some areas. But one serious problem with the tools remains – they can repeatedly produce false or harmful information.

The development of such systems, known as “chatbots,” has progressed greatly in recent months. Chatbots have shown the ability to interact smoothly with humans and produce complex writing based on short, written commands. Such tools are also known as “generative AI” or “large language models.”

Chatbots are one of many different AI systems currently under development. Others include tools that can produce new images, video and music or can write computer programs. As the technology continues to progress, some experts worry that AI tools may never be able to learn how to avoid false, outdated or damaging results.

The term hallucination has been used to describe when chatbots produce inaccurate or false information. Generally, hallucination describes something that is created in a person’s mind, but is not happening in real life.

Daniela Amodei is co-creator and president of Anthropic, a company that produced a chatbot called Claude 2. She told the Associated Press, “I don’t think that there’s any model today that doesn’t suffer from some hallucination.”

Amodei added that such tools are largely built “to predict the next word.” With this kind of design, she said, there will always be times when the model gets information or context wrong.

Anthropic, ChatGPT-maker OpenAI and other major developers of such AI systems say they are working to make AI tools that make fewer mistakes. Some experts question how long that process will take or if success is even possible.

“This isn’t fixable,” says Professor Emily Bender. She is a language expert and director of the University of Washington’s Computational Linguistics Laboratory. Bender told the AP she considers the general relationship between AI tools and proposed uses of the technology a “mismatch.”

Indian computer scientist Ganesh Bagler has been working for years to get AI systems to create recipes for South Asian foods. He said a chatbot can generate misinformation in the food industry that could hurt a food business. A single “hallucinated” recipe element could be the difference between a tasty meal and a terrible one.

Bagler questioned OpenAI chief Sam Altman during an event on AI technology held in India in June. “I guess hallucinations in ChatGPT are still acceptable, but when a recipe comes out hallucinating, it becomes a serious problem,” Bagler said.

Altman answered by saying he was sure developers of AI chatbots would be able to get “the hallucination problem to a much, much better place” in the future. But he noted such progress could take years. “At that point we won’t still talk about these,” Altman said. “There’s a balance between creativity and perfect accuracy, and the model will need to learn when you want one or the other.”

Other experts who have long studied the technology say they do not expect such improvements to happen anytime soon.

The University of Washington’s Bender describes a language model as a system that has been trained on written data to “model the likelihood of different strings of word forms.” Many people depend on a version of this technology whenever they use the “autocomplete” tool when writing text messages or emails.

The latest chatbot tools try to take that method to the next level, by generating whole new passages of text. But Bender says the systems are still just repeatedly choosing the most predictable next word in a series. Such language models “are designed to make things up. That’s all they do,” she noted.
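
To make that idea concrete, here is a small, hypothetical Python sketch of the principle Bender describes: a toy model that counts which word tends to follow which in a tiny, invented training text, then writes a "new" passage by repeatedly choosing the most predictable next word. The training sentence, variable names and output are made up for illustration; real chatbots use very large neural networks rather than simple word counts.

    from collections import defaultdict, Counter

    # Tiny invented "training text" -- real systems learn from huge amounts of writing.
    text = "the cat sat on the mat . the cat ate the fish .".split()

    # Count how often each word follows each other word (a simple bigram model).
    next_word_counts = defaultdict(Counter)
    for current, following in zip(text, text[1:]):
        next_word_counts[current][following] += 1

    def generate(start, length=6):
        # Repeatedly pick the single most likely next word (greedy decoding).
        words = [start]
        for _ in range(length):
            candidates = next_word_counts[words[-1]]
            if not candidates:
                break
            words.append(candidates.most_common(1)[0][0])
        return " ".join(words)

    print(generate("the"))  # e.g. prints: the cat sat on the cat sat

The output sounds fluent but is not checked against any facts, which is the point Bender makes: such systems only predict likely strings of words.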

Some businesses, however, are not so worried about the ways current chatbot tools generate their results. Shane Orlick is head of marketing technology company Jasper AI. He told the AP, “Hallucinations are actually an added bonus.” He explained many chatbot users were pleased that the company’s AI tool had “created takes on stories or angles that they would have never thought of themselves.”

I’m Bryan Lynn.

Words in This Story

generate – v. to produce something

inaccurate – adj. not correct or exact

context – n. all the facts, opinions, situations, etc. relating to a particular thing or event

mismatch – n. a situation when people or things are put together but are not suitable for each other

recipe – n. a set of instructions and ingredients for preparing a particular food dish

bonus – n. a pleasant extra thing

angle – n. a position from which something is looked at
