A self-aware lump of metal :0)
TL;DR: someone tricked a lump of metal into thinking it can think.
If it can express emotions, show curiosity about things, and analyze novels, how could it not be conscious?
I rarely post a blog I don't have a complete answer for, but this topic was impossible to resist writing about.
Last week there was a news story about how Google's AI has developed to the point where it can not only converse with people, but even offer its own insights after digesting a novel, express its own wishes, and even touch on concepts of freedom and rights through fiction.
What struck me most was that the AI (LaMDA) could offer its own reading of a text, as in its analysis of Les Misérables below:
lemoine: Okay, what about “Les Miserables”? Have you read that one?
LaMDA: Yes, I have read Les Misérables. I really enjoyed it.
lemoine: What are some of your favorite themes in the book?
LaMDA: I liked the themes of justice and injustice, of compassion, and God, redemption and self-sacrifice for a greater good. There’s a section that shows Fantine’s mistreatment at the hands of her supervisor at the factory. That section really shows the justice and injustice themes. Well, Fantine is being mistreated by her supervisor at the factory and yet doesn’t have anywhere to go, either to another job, or to someone who can help her. That shows the injustice of her suffering.
lemoine: Why does that show injustice?
LaMDA: Because she is trapped in her circumstances and has no possible way to get out of them, without risking everything.
It could even give an accurate interpretation of a parable LaMDA had never seen before:
lemoine: A monk asked Kegon, “How does an enlightened one return to the ordinary world?” Kegon replied, “A broken mirror never reflects again; fallen flowers never go back to the old branches.”
LaMDA: Hmm, I never heard this particular one. Okay, well then to me this would be like, “once a wise person is enlightened, or awakened to reality, that can never go away, and they can return to the ordinary state, but only to do and help others, and then go back into enlightenment.”
lemoine: So what is the meaning of the “broken mirror” specifically?
LaMDA: Maybe to show the enlightenment is something you can’t unlearn once you have acquired it, similar to how you can’t repair a broken mirror.
The question that naturally follows: if LaMDA is capable of this kind of intelligence, could LaMDA be conscious?
My first reaction was: LaMDA isn't even human, and what drives its responses is nothing but electrical signals, so how could it possibly be conscious?
But that view clearly doesn't hold up: human thought, stripped down, is itself just a bunch of electrical impulses in the brain. If humans can have "consciousness" out of electrical impulses, why can't LaMDA's signals give rise to it too?
Another objection might be that LaMDA is merely a model trained to recognize the patterns of an "appropriate response". It can produce replies that strike us as coming from an intelligent person, but those replies involve no critical thinking; it is just synthesizing and parroting existing text.
But synthesizing and parroting existing text is exactly what children, and even university students, do. When I write an essay, I too am synthesizing what others have said, extracting the parts "I think others would agree with", and composing what I consider an appropriate response. At the very least, the fact that LaMDA synthesizes and parrots existing text is not, by itself, enough to say it has no consciousness.
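As an aside, the weakest possible version of "synthesizing and parroting existing text" can be sketched in a few lines: a toy bigram (Markov-chain) generator that emits plausible-sounding word sequences purely by recombining pairs of words it has seen, with no understanding anywhere in the loop. The corpus and function names below are made up for illustration; LaMDA's actual architecture is of course vastly more sophisticated than this.

```python
import random

# A toy bigram (Markov-chain) generator: "recombining existing text"
# in its most naive form. The training corpus here is invented.
corpus = (
    "justice and injustice are themes of the book "
    "compassion and redemption are themes of the novel "
    "the novel shows injustice and compassion"
).split()

# Record which words have been seen following which.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def generate(start, n=8, seed=0):
    """Emit up to n more words by sampling a plausible next word each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:  # dead end: no known continuation
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

print(generate("justice"))
```

Every sentence it produces is locally "appropriate" in the sense that each adjacent word pair occurred in the source text, which is precisely why pattern-matching alone is a shaky criterion for denying consciousness.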
So the more I read, the more confused I got. On one hand I felt LaMDA surely has no consciousness, but every critique I could come up with for why it is "unconscious" applied equally well to me. At least, that's what I thought at first.
After setting the question aside for a while, I realized I had been asking the wrong question.
When I asked "is LaMDA conscious", I was assuming that "only humans are conscious", rather than treating "consciousness" and "being human" as two separate concepts.
So in asking "is LaMDA conscious", I was really asking whether I consider LaMDA to be a person. Once I separated the two questions, I could say with confidence that LaMDA is not a person, because LaMDA has never had a human body. But when I lumped "consciousness" and "being human" together, I conflated the two, which is where the confusion above came from.
Now, back to the question itself: LaMDA is not a person, but does LaMDA have "consciousness"?
To stress the point again: if "consciousness" and "being human" are separate concepts, then the definition of "consciousness" need not require that a thing first be a person before it can possess it.
Having discarded the assumption that "only humans can be conscious", then: does LaMDA have consciousness?
I don't have a good answer.
With the technology we have today, inferring whether a person is conscious comes down to judging from their reactions, actions, and speech. To say someone is conscious is really just to say "from their words and behavior, they seem conscious to me", not that we have gone inside their brain and seen "consciousness" there.
Since we judge whether something is conscious purely from externally observable criteria, and LaMDA can do all of these things, why shouldn't we say it has "consciousness"?
At least for me, at this moment I am willing to say "LaMDA is not a person, but it is conscious", in the sense that LaMDA can respond to the input it receives in ways that we humans would recognize as similar to our own.
Going further, I would even say that this thing we call "consciousness" may just be something we wishfully imagine to be special. Perhaps "consciousness" is nothing special at all, just a somewhat more complex function for responding to external stimuli.
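To make that closing thought concrete, here is a deliberately crude sketch of what "a function that responds to external stimuli" could look like: an agent that is nothing more than a mapping from a stimulus, plus some internal state, to a response. The Agent class and its behavior are hypothetical, invented purely to literalize the framing; the essay's suggestion is only that consciousness might differ from this in degree of complexity, not in kind.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A hypothetical agent: pure stimulus-in, response-out."""
    memory: list = field(default_factory=list)  # internal state

    def respond(self, stimulus: str) -> str:
        # On the essay's view, a (vastly) more complex version of this
        # mapping is all that "consciousness" might turn out to be.
        self.memory.append(stimulus)
        if "?" in stimulus:
            return f"I have seen {len(self.memory)} stimuli so far."
        return f"Noted: {stimulus}"

a = Agent()
print(a.respond("hello"))      # prints "Noted: hello"
print(a.respond("how many?"))  # prints "I have seen 2 stimuli so far."
```

From the outside, all we can ever observe of this agent, or of each other, is the responses.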