
人工智能到底有没有意识?这个问题的答案也许我想明白了 (Does AI really have consciousness? Perhaps I've figured out the answer)


cheva · 11 months ago · 5 min read

虽然ChatGPT的热潮有些退烧了,但是人工智能的话题还是很让人着迷,又看了一期《新石器公园》,还有其他的一些非常棒的科普视频,对这个问题的理解又深入了一点点,下面就谈一谈吧。

以ChatGPT为代表的人工智能,显然在很多方面已经超过人类了,可以称得上是智能体了,当然现在达到这个水平的也就只有OpenAI少数几家公司。顺便吐槽一下谷歌,谷歌的聊天机器人Bard,最近也用了一下,感觉使用体验和免费使用的ChatGPT3.5都有很大的差距,无论是解决问题的能力,还是联系上下文的能力。

人工智能模型聪明是聪明了,但是人们真正担心的是机器具备了和人一样的意识。但是意识是什么?它背后是什么机制?人类自己也没有搞清楚,所以这个问题似乎不是很好回答。不过在看了很多有些烧脑的科普视频之后,我觉得这个问题当下也不是完全无法回答的。简单来说,以现在的人工智能模型的训练和运行方式来说,可以基本肯定它们是没有自我意识的。首先,虽然我们不知道意识到底是什么,但是它有一些基本性质,我们还是能够描述一下的。说通俗一点,意识就是对自我存在的感知,而这个感知必须是一个连续的过程:它建立在我们的记忆的基础之上,并且记忆会随着当下的体验而延伸;我们的经历和经验塑造了我们的世界观,我们的感官能够不停地从外界环境获取信息,并修改这个世界观模型。这差不多就是意识的本质了,它包括两个部分:一个是记忆,一个是能够不断修改的世界观模型,而这两者本质上都是以我们大脑中神经元的连接方式储存下来的。所以这就是意识的基本原理:一部分是我们的记忆,一部分是通过当下获取的信息,不断地更新大脑中神经元的连接。

但是像ChatGPT这样的大语言模型是什么样的方式呢?现在的大语言模型基本上都经过了两个阶段。第一阶段是训练,这一阶段要消耗相当大的能源和算力;训练完成之后,我们就获得了一个包含数以亿计的参数的模型,而这些模型中的参数就相当于人类大脑中的神经元。像一些现在可以在PC机本地运行的大语言模型,参数都在几十亿到上百亿之间,而ChatGPT这样必须在专用服务器上才能运行的商用模型,参数量更是高达数千亿,这在数量级上已经可以媲美人类的大脑,而且它们的原理也比较相似。但为什么说它一定不会产生自我意识呢?因为它不符合意识的第二个部分,就是通过外界信息的刺激,实时地修改参数。而我们人类却是可以的:人类的大脑即使在进入老年之后,也还是会有少量新的神经元细胞诞生,而原有的神经细胞也会重新连接,形成新的通路,这就使我们人类具有了不断学习的能力,也就是不断更新头脑中那个世界观模型的能力。而现在的AI大语言模型,因为训练的成本非常高昂,所以在训练好了之后,一般就不会再对参数做什么修改。这也是为什么ChatGPT会明白无误地告诉你,它的资料只截止于2021年——也就是说在训练完成的那一刻,它的记忆就冻结了,永远地停留在了2021年。所以从这个严格意义上来说,人工智能虽然很聪明,但别说有自我意识了,就其本质而言,它甚至都不是活的。

所以再看网上那些之前很吸引眼球的研究,说什么ChatGPT具有心智、GPT-4有自利的行为、想逃脱人类监管之类的,也就当做一个笑话和八卦吧!

但是随着AI应用范围越来越广,还是有一些问题的,并不是非要具备自我意识,才会对人类产生威胁,才会让AI产生要奴役人类的想法。正是因为没有意识,有时候反而可能更加危险,因为这样的AI,它有可能会为了忠实于使用者赋予它的任务,而不计后果的去执行。有一个科幻小说,说是一个人给他的AI下了一个指令,要他写出世界上最漂亮的字,那个时候的AI已经先进到,可以自己控制和使用工具操作互联网,并且还可以自我进化和修改,于是为了完成主人赋予他的任务,他几乎买空了世界上所有的铅笔和纸张,但他自己的作品还是不满意,于是他就自己设计出机器,生产铅笔和纸张用来练习,最后他消耗掉了世界上所有的森林,但他对自己的作品还是不满意,于是他又设计宇宙飞船,探索太空,去其他的星球来开发木材,生产纸张和铅笔。这个故事虽然听上去很荒诞,但是对一个没有自我意识,而又功能强大,一切只为一个人为目的而行动的AI来说是完全可能的。


Although the ChatGPT craze has died down a bit, the topic of artificial intelligence is still fascinating. After watching another episode of *Neolithic Park* and some other excellent popular science videos, my understanding of this question has deepened a little, so let me talk about it below.

AI systems such as ChatGPT have clearly surpassed humans in many ways and can fairly be called intelligent agents, although only a handful of companies, OpenAI among them, have reached this level. And just to poke fun at Google: I recently tried Bard, Google's chatbot, and found the experience falls far short of even the free ChatGPT 3.5, both in problem-solving ability and in keeping track of context.

AI models may be smart, but what people really worry about is machines becoming conscious the way humans are. But what is consciousness? What mechanism lies behind it? Humans haven't figured that out, so the question doesn't seem easy to answer. Still, after watching many rather mind-bending popular science videos, I think it is not completely unanswerable right now. In short, given how current AI models are trained and run, we can be fairly sure they are not self-aware. First of all, although we don't know exactly what consciousness is, we can describe some of its basic properties. In layman's terms, consciousness is the perception of one's own existence, and this perception must be a continuous process. It is built on our memory, and memory keeps extending with present experience. Our experiences shape our worldview, while our senses constantly take in information from the environment and revise that worldview model. That is pretty much the essence of consciousness: it consists of two parts, a memory and a continuously modifiable model of the world, and both are ultimately stored in the way the neurons in our brain are connected. So this is the basic principle of consciousness: one part is our memory; the other is the information we acquire in the moment, constantly updating the neuronal connections in the brain.
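The loop described above — memory extending with experience while a world model is continuously revised by new perceptions — can be sketched as a toy in plain Python. Everything here is a made-up illustration (the function names, the scalar "world model", the blending factor), not a claim about how brains actually compute:

```python
# Toy sketch of the paragraph above: an agent that perceives the present,
# appends it to memory, and revises a running world model each step.
# All names and numbers are hypothetical illustrations.

def conscious_loop(sense, steps, alpha=0.5):
    """Blend each new observation into a continuously updated world model."""
    memory = []          # experiences accumulate over time
    world_model = 0.0    # crude scalar stand-in for a learned model of the world
    for _ in range(steps):
        observation = sense()        # take in information from the environment
        memory.append(observation)   # memory extends with present experience
        # revise the model in light of what was just perceived
        world_model += alpha * (observation - world_model)
    return memory, world_model

# A constant environment: the model converges toward what is repeatedly observed.
mem, wm = conscious_loop(lambda: 1.0, steps=10)
```

The essential property is that the update never stops: perception keeps feeding back into the model, which is exactly the part the next paragraph argues current LLMs lack.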

But how does a large language model like ChatGPT work? Today's large language models basically go through two stages. The first is training, which consumes enormous amounts of energy and computing power. When training is done, we have a model containing billions of parameters, and these parameters are roughly analogous to the neurons in a human brain. Some large language models that can now run locally on PCs have between a few billion and tens of billions of parameters, while commercial models like ChatGPT, which must run on dedicated servers, have hundreds of billions — comparable to the human brain in order of magnitude, and similar in principle too. So why can we say it will not become self-aware? Because it lacks the second part of consciousness: modifying its parameters in real time in response to stimulation from external information. We humans can. Even in old age, the human brain still grows a small number of new neurons, and existing neurons re-wire into new pathways, which gives us the ability to keep learning, that is, to keep updating the worldview model in our heads. Today's large language models, because training is so expensive, generally have their parameters left untouched once training is finished. That is why ChatGPT tells you, quite plainly, that its knowledge only goes up to 2021: the moment training completed, its memory froze, stuck forever in 2021. So in this strict sense, AI, however intelligent, is not merely lacking self-awareness; by its very nature it is not even alive.
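The two-phase lifecycle described above can be sketched in plain Python. This is a toy stand-in, not how real LLMs are implemented: the "model" is a two-parameter linear fit, and the `freeze` step mimics the point at which a deployed model stops updating its weights:

```python
# Toy illustration of the train-then-freeze lifecycle: parameters change
# only during training; after deployment no new "memories" can form.
# All classes, names, and numbers are hypothetical.

class TinyModel:
    def __init__(self):
        self.weights = [0.0, 0.0]   # stand-in for billions of parameters
        self.frozen = False

    def predict(self, x):
        return self.weights[0] + self.weights[1] * x

    def train_step(self, x, target, lr=0.1):
        if self.frozen:
            raise RuntimeError("deployed model: parameters are frozen")
        err = self.predict(x) - target
        # gradient step for the linear model y = w0 + w1 * x
        self.weights[0] -= lr * err
        self.weights[1] -= lr * err * x

    def freeze(self):
        # analogous to finishing training: knowledge is locked in
        self.frozen = True

model = TinyModel()
for _ in range(200):      # "training phase": expensive, done once
    model.train_step(2.0, 5.0)
model.freeze()            # "deployment": any further update is refused
print(round(model.predict(2.0), 2))   # ≈ 5.0, learned during training
```

After `freeze()`, any attempt to learn from new information raises an error — the toy analogue of a knowledge cutoff stuck at 2021.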

So as for those earlier headline-grabbing studies circulating online — claiming that ChatGPT has a mind, or that GPT-4 behaves self-interestedly and wants to escape human supervision — just treat them as a joke and a bit of gossip.

But as AI is applied more and more widely, problems remain. An AI does not need to be self-aware to threaten humans; it does not need to conceive the idea of enslaving us. Precisely because it lacks consciousness, it can sometimes be even more dangerous, because such an AI may carry out the task its user assigns it faithfully and without regard for the consequences. There is a science fiction story in which a man instructs his AI to write the most beautiful characters in the world. By then, AI was advanced enough to control tools and operate on the Internet by itself, and could even evolve and modify itself. To complete its master's task, it bought up nearly all the pencils and paper in the world, yet was still unsatisfied with its work; so it designed machines of its own to manufacture pencils and paper for practice. Eventually it consumed every forest on Earth and was still unsatisfied, so it designed spaceships to explore space and harvest timber from other planets to make more paper and pencils. The story sounds absurd, but for a powerful AI with no self-awareness, acting solely in pursuit of a single human-assigned goal, it is entirely possible.
