More than two years ago, a pair of Google researchers started pushing the company to release a chatbot built on technology more powerful than anything else available at the time. The conversational computer program they had developed could confidently debate philosophy and banter about its favorite TV shows, while improvising puns about cows and horses.

The researchers, Daniel De Freitas and Noam Shazeer, told colleagues that chatbots like theirs, supercharged by recent advances in artificial intelligence, would revolutionize the way people searched the internet and interacted with computers, according to people who heard the remarks.

據(jù)知情人士說,研究員丹尼爾·迪弗雷塔斯(Daniel De Freitas)和諾姆·沙澤(Noam Shazeer)告訴同事們,像他們開發(fā)的這種由最新人工智能技術(shù)驅(qū)動的聊天機(jī)器人將徹底改變網(wǎng)絡(luò)搜索和人機(jī)交互方式。

They pushed Google to give access to the chatbot to outside researchers, tried to get it integrated into the Google Assistant virtual helper and later asked for Google to make a public demo available.

Google executives rebuffed them at multiple turns, saying in at least one instance that the program didn’t meet company standards for the safety and fairness of AI systems, the people said. The pair quit in 2021 to start their own company to work on similar technologies, telling colleagues that they had been frustrated they couldn’t get their AI tool at Google out to the public.

原創(chuàng)翻譯:龍騰網(wǎng) http://nxnpts.cn 轉(zhuǎn)載請注明出處


Now Google, the company that helped pioneer the modern era of artificial intelligence, finds its cautious approach to that very technology being tested by one of its oldest rivals. Last month Microsoft Corp. announced plans to infuse its Bing search engine with the technology behind the viral chatbot ChatGPT, which has wowed the world with its ability to converse in humanlike fashion. Developed by OpenAI, a seven-year-old startup co-founded by Elon Musk, ChatGPT piggybacked on early AI advances made at Google itself.

Months after ChatGPT’s debut, Google is taking steps toward publicly releasing its own chatbot based in part on technology Mr. De Freitas and Mr. Shazeer worked on. Under the moniker Bard, the chatbot draws on information from the web to answer questions in a conversational format. Google said on Feb. 6 it was testing Bard internally and externally with the aim of releasing it widely in coming weeks. It also said it was looking to build similar technology into some of its search results.

Google’s relatively cautious approach was shaped by years of controversy over its AI efforts, from internal arguments over bias and accuracy to the public firing last year of a staffer who claimed that its AI had achieved sentience.

谷歌相對謹(jǐn)慎的做法源于多年來其AI努力引發(fā)的爭議,包括公司內(nèi)部關(guān)于偏見和準(zhǔn)確性的爭論,以及去年公開解雇一名聲稱谷歌的AI已經(jīng)有“知覺力”(sentience)的員工。

Those episodes left executives wary of the risks public AI product demos could pose to its reputation and the search-advertising business that delivered most of the nearly $283 billion in revenue last year at its parent company, Alphabet Inc., according to current and former employees and others familiar with the company.

據(jù)現(xiàn)任和前任員工以及其他熟悉該公司的人稱,這些事件讓高管們對公開的AI產(chǎn)品演示可能對其聲譽(yù)和搜索廣告業(yè)務(wù)構(gòu)成的風(fēng)險保持警惕,廣告業(yè)務(wù)在谷歌母公司Alphabet Inc.去年近2,830億美元的收入中占了大頭。

“Google is struggling to find a balance between how much risk to take versus maintaining thought leadership in the world,” said Gaurav Nemade, a former Google product manager who worked on the company’s chatbot until 2020.

Messrs. De Freitas and Shazeer declined requests for an interview through an external representative.

A Google spokesman said their work was interesting at the time, but that there is a big gap between a research prototype and a reliable product that is safe for people to use daily. The company added that it has to be more thoughtful than smaller startups about releasing AI technologies.

谷歌的一位發(fā)言人說,他們的成果在當(dāng)時來說是有趣的,但研究原型與可供人們?nèi)粘0踩褂玫目煽慨a(chǎn)品之間存在巨大差距。該公司補(bǔ)充說,在發(fā)布AI技術(shù)方面,它必須比規(guī)模更小的初創(chuàng)公司考慮得更周全。

Google’s approach could prove to be prudent. Microsoft said in February it would put new limits on its chatbot after users reported inaccurate answers and sometimes unhinged responses when pushing the app to its limits.

Sundar Pichai, CEO of Alphabet and its Google subsidiary, told employees that some of the company’s most successful products earned user trust over time.
Photo: KYLE GRILLOT/BLOOMBERG NEWS

In an email to Google employees last month, Sundar Pichai, chief executive of both Google and Alphabet, said some of the company’s most successful products weren’t the first to market but earned user trust over time.

“This will be a long journey—for everyone, across the field,” Mr. Pichai wrote. “The most important thing we can do right now is to focus on building a great product and developing it responsibly.”

Google’s chatbot efforts go as far back as 2013, when Google co-founder Larry Page, then CEO, hired Ray Kurzweil, a computer scientist who helped popularize the idea that machines would one day surpass human intelligence, a concept known as “technological singularity.”

谷歌的聊天機(jī)器人開發(fā)可以追溯到2013年,當(dāng)時谷歌聯(lián)合創(chuàng)始人兼時任首席執(zhí)行官拉里·佩奇(Larry Page)聘請了計算機(jī)科學(xué)家雷·庫茲韋爾(Ray Kurzweil),他讓機(jī)器有一天會超過人類智能的想法廣為傳播,即所謂“技術(shù)奇點”的概念。

Mr. Kurzweil began working on multiple chatbots, including one named Danielle based on a novel he was working on at the time, he said later. Mr. Kurzweil declined an interview request through a spokeswoman for Kurzweil Technologies Inc., a software company he started before joining Google.

Google also purchased the British artificial-intelligence company DeepMind, which had a similar mission of creating artificial general intelligence, or software that could mirror human mental capabilities.

At the same time, academics and technologists increasingly raised concerns about AI—such as its potential for enabling mass surveillance via facial-recognition software—and pressured companies such as Google to commit not to pursue certain uses of the technology.

Partly in response to Google’s growing stature in the field, a group of tech entrepreneurs and investors including Mr. Musk formed OpenAI in 2015. Initially structured as a nonprofit, OpenAI said it wanted to make sure AI didn’t fall prey to corporate interests and was instead used for the good of humanity. (Mr. Musk left OpenAI’s board in 2018.)

Google eventually promised in 2018 not to use its AI technology in military weapons, following an employee backlash against the company’s work on a U.S. Department of Defense contract called Project Maven that involved automatically identifying and tracking potential drone targets, like cars, using AI. Google dropped the project.

Mr. Pichai also announced a set of seven AI principles to guide the company’s work, designed to limit the spread of unfairly biased technologies, including that AI tools should be accountable to people and “built and tested for safety.”

Mr. Shazeer and Mr. De Freitas at their new company’s office in Palo Alto.
Photo: WINNI WINTERMEYER FOR THE WASHINGTON POST/GETTY IMAGES

Around that time, Mr. De Freitas, a Brazilian-born engineer working on Google’s YouTube video platform, started an AI side project.

As a child, Mr. De Freitas dreamed of working on computer systems that could produce convincing dialogue, his fellow researcher Mr. Shazeer said during a video interview uploaded to YouTube in January. At Google, Mr. De Freitas set out to build a chatbot that could mimic human conversation more closely than any previous attempts.

For years the project, originally named Meena, remained under wraps while Mr. De Freitas and other Google researchers fine-tuned its responses. Internally, some employees worried about the risks of such programs after Microsoft was forced in 2016 to end the public release of a chatbot called Tay after users goaded it into problematic responses, such as support for Adolf Hitler.

The first outside glimpse of Meena came in 2020, in a Google research paper that said the chatbot had been fed 40 billion words from social-media conversations in the public domain.

OpenAI had developed a similar model, GPT-2, based on 8 million webpages. It released a version to researchers but initially held off on making the program publicly available, saying it was concerned it could be used to generate massive amounts of deceptive, biased or abusive language.

OpenAI開發(fā)了一個類似的模型——基于800萬個網(wǎng)頁的GPT-2。該公司向研究人員發(fā)布了一個版本,但最初沒有公開這個程序,說是擔(dān)心它可能被用來產(chǎn)生大量帶有欺騙性、偏見或侮辱性的語言。

At Google, the team behind Meena also wanted to release their tool, even if only in a limited format as OpenAI had done. Google leadership rejected the proposal on the grounds that the chatbot didn’t meet the company’s AI principles around safety and fairness, said Mr. Nemade, the former Google product manager.

A Google spokesman said the chatbot had been through many reviews and barred from wider releases for various reasons over the years.

一位谷歌發(fā)言人稱,這個聊天機(jī)器人經(jīng)歷了許多審核,多年來出于種種原因被禁止廣泛發(fā)布。
原創(chuàng)翻譯:龍騰網(wǎng) http://nxnpts.cn 轉(zhuǎn)載請注明出處


The team continued working on the chatbot. Mr. Shazeer, a longtime software engineer at the AI research unit Google Brain, joined the project, which they renamed LaMDA, for Language Model for Dialogue Applications. They injected it with more data and computing power. Mr. Shazeer had helped develop the Transformer, a widely heralded new type of AI model that made it easier to build increasingly powerful programs like the ones behind ChatGPT.

該團(tuán)隊繼續(xù)從事開發(fā)聊天機(jī)器人的工作。長期在AI研究部門Google Brain擔(dān)任軟件工程師的沙澤加入了這個項目,他們將其更名為LaMDA(譯注:Language Model for Dialogue Applications的簡稱,意為對話應(yīng)用語言模型)。他們給它注入了更多的數(shù)據(jù)和算力。沙澤曾幫助開發(fā)了“轉(zhuǎn)換器”(transformer),這是一種廣受贊譽(yù)的新型AI模型,它使得開發(fā)像ChatGPT背后那樣日益強(qiáng)大的程序更加容易。

However, the technology behind their work soon led to a public dispute. Timnit Gebru, a prominent AI ethics researcher at Google, said in late 2020 she was fired for refusing to retract a research paper on the risks inherent in programs like LaMDA and then complaining about it in an email to colleagues. Google said she wasn’t fired and claimed her research was insufficiently rigorous.

A Google virtual conference in 2021 showed an example conversation with LaMDA.
Photo: DANIEL ACKER/BLOOMBERG NEWS

Google’s head of research, Jeff Dean, took pains to show Google remained invested in responsible AI development. The company promised in May 2021 to double the size of the AI ethics group.

A week after the vow, Mr. Pichai took the stage at the company’s flagship annual conference and demonstrated two prerecorded conversations with LaMDA, which, on command, responded to questions as if it were the dwarf planet Pluto or a paper airplane.

谷歌發(fā)出上述承諾一周后,皮查伊在該公司的旗艦?zāi)甓却髸险故玖藘啥晤A(yù)先錄制的與LaMDA的對話,LaMDA假裝自己是冥王星或一架紙飛機(jī),根據(jù)命令對問題作出回應(yīng)。

Google researchers prepared the examples days before the conference following a last-minute demonstration delivered to Mr. Pichai, said people briefed on the matter. The company emphasized its efforts to make the chatbot more accurate and minimize the chance it could be misused.

“Our highest priority, when creating technologies like LaMDA, is working to ensure we minimize such risks,” two Google vice presidents said in a blog post at the time.

“在創(chuàng)造像LaMDA這樣的技術(shù)時,我們最優(yōu)先考慮的是努力確保將這些風(fēng)險降到最低,”谷歌的兩位副總裁在當(dāng)時的一篇博客文章中說。
Google research chief Jeff Dean speaking at an event in San Francisco in 2020.
Photo: MONICA M DAVEY/SHUTTERSTOCK

原創(chuàng)翻譯:龍騰網(wǎng) http://nxnpts.cn 轉(zhuǎn)載請注明出處


Google later considered releasing a version of LaMDA at its flagship conference in May 2022, said Blake Lemoine, an engineer the company fired last year after he published conversations with the chatbot and claimed it was sentient. The company decided against the release after Mr. Lemoine’s conclusions began generating controversy internally, he said. Google has said Mr. Lemoine’s concerns lacked merit and that his public disclosures violated employment and data-security policies.

As far back as 2020, Mr. De Freitas and Mr. Shazeer also looked for ways to integrate LaMDA into Google Assistant, a software application the company had debuted four years earlier on its Pixel smartphones and home speaker systems, said people familiar with the efforts. More than 500 million people were using Assistant every month to perform basic tasks such as checking the weather and scheduling appointments.

The team overseeing Assistant began conducting experiments using LaMDA to answer user questions, said people familiar with the efforts. However, Google executives stopped short of making the chatbot available as a public demo, the people said.

知情人士說,負(fù)責(zé)谷歌助理的團(tuán)隊開始試驗使用LaMDA來回答用戶的問題。然而,他們說,谷歌高管沒有公開演示這個聊天機(jī)器人。

Google’s reluctance to release LaMDA to the public frustrated Mr. De Freitas and Mr. Shazeer, who took steps to leave the company and begin working on a startup using similar technology, the people said.

Mr. Pichai personally intervened, asking the pair to stay and continue working on LaMDA but without making a promise to release the chatbot to the public, the people said. Mr. De Freitas and Mr. Shazeer left Google in late 2021 and incorporated their new startup, Character Technologies Inc., in November that year.

Character’s software, released last year, allows users to create and interact with chatbots that role-play as well-known figures such as Socrates or stock types such as psychologists.

“It caused a bit of a stir inside of Google,” Mr. Shazeer said in the interview uploaded to YouTube, without elaborating, “but eventually we decided we’d probably have more luck launching stuff as a startup.”

“這在谷歌內(nèi)部引發(fā)了一些騷動,”沙澤在上傳到Y(jié)ouTube的采訪說,但沒有詳細(xì)解釋,“但最終我們決定,我們成立一家初創(chuàng)公司來推出東西可能更走運(yùn)?!?/b>

Since Microsoft struck its new deal with OpenAI, Google has fought to reassert its identity as an AI innovator.

Google announced Bard in February, on the eve of a Microsoft event introducing Bing’s integration of OpenAI technology. Two days later, at an event in Paris that Google said was originally scheduled to discuss more regional search features, the company gave press and the broader public another glimpse of Bard, as well as a search tool that used AI technology similar to LaMDA to generate textual responses to search queries.

Google said that it often reassesses the conditions to release products and that because there is a lot of excitement now, it wanted to release Bard to testers even if it wasn’t perfect.

谷歌表示,它經(jīng)常重新評估發(fā)布產(chǎn)品的條件,由于現(xiàn)在有很多興奮情緒,它想面向測試人員發(fā)布Bard,盡管其并不完美。
Microsoft CEO Satya Nadella speaking last month at an event at the company’s headquarters in Redmond, Wash.
Photo: CHONA KASINGER/BLOOMBERG NEWS
Microsoft employee Alexander Campbell demonstrating the integration of OpenAI technology into Microsoft’s Bing search engine and Edge browser.
Photo: STEPHEN BRASHEAR/ASSOCIATED PRESS

Since early last year, Google has also had internal demonstrations of search products that integrate responses from generative AI tools like LaMDA, Elizabeth Reid, the company’s vice president of search, said in an interview.

谷歌負(fù)責(zé)搜索業(yè)務(wù)的副總裁伊麗莎白·里德(Elizabeth Reid)在接受采訪時說,自去年年初以來,谷歌也對整合了LaMda等生成式人工智能(generative AI)工具的搜索產(chǎn)品進(jìn)行了內(nèi)部演示。

One use case for search where the company sees generative AI as most useful is for specific types of queries with no one right answer, which the company calls NORA, where the traditional blue Google links might not satisfy the user. Ms. Reid said the company also sees potential search use cases for other types of complex queries, such as solving math problems.

該公司認(rèn)為生成式人工智能最有用的一個搜索用例是回應(yīng)沒有一個正確答案的特定類型查詢,該公司稱之為NORA(譯注:no one right answer的縮寫),傳統(tǒng)的谷歌搜索結(jié)果可能在這種情況下無法滿足用戶。里德說,該公司還看到了其他類型的復(fù)雜查詢的潛在搜索用例,如解決數(shù)學(xué)問題。

As with many similar programs, accuracy remained an issue, executives said. Such models have a tendency to invent a response when they don’t have sufficient information, something researchers call “hallucination.” Tools built on LaMDA technology have in some cases responded with fictional restaurants or off-topic responses when asked for recommendations, said people who have used the tool.

Microsoft called the new version of Bing a work in progress last month after some users reported disturbing conversations with the chatbot integrated into the search engine, and introduced changes, such as limiting the length of chats, aimed at reducing the chances the bot would spout aggressive or creepy responses. Both Google and Microsoft’s previews of their bots in February included factual inaccuracies produced by the programs.

“It’s sort of a little bit like talking to a kid,” Ms. Reid said of language models like LaMDA. “If the kid thinks they need to give you an answer and they don’t have an answer, then they’ll make up an answer that sounds plausible.”

Google continues to fine-tune its models, including training them to know when to profess ignorance instead of making up answers, Ms. Reid said. The company added that it has improved LaMDA’s performance on metrics like safety and accuracy over the years.

Integrating programs like LaMDA, which can synthesize millions of websites into a single paragraph of text, could also exacerbate Google’s long-running feuds with major news outlets and other online publishers by starving websites of traffic. Inside Google, executives have said Google must deploy generative AI in results in a way that doesn’t upset website owners, in part by including source links, according to a person familiar with the matter.

“We’ve been very careful to take care of the ecosystem concerns,” said Prabhakar Raghavan, the Google senior vice president overseeing the search engine, during the event in February. “And that’s a concern that we intend to be very focused on.”
