
A Clash of Philosophical and Ethical Views on AI Autonomous Driving


cheva
11 months ago · 7 min read

AI technology today advances at a breakneck pace, yet autonomous driving, one of the earliest fields where AI took root, has fallen far short of that pace after years of development. Beyond the technical reasons, much of the difficulty lies in how it upends traditional human ethics, the most typical example being the philosophical trolley problem.

To be clear, if AI-driven cars became widespread it would be a revolution in the history of human transportation, and the benefits are obvious. Over a million people die in traffic accidents worldwide every year, the vast majority caused by human factors such as fatigued or drunk driving. If AI replaced human drivers the effect would be dramatic: an AI never gets tired and never drinks. By rough estimates, fully replacing human drivers with AI could cut the traffic accident rate by 90%, saving millions of lives and sparing many more people from injury and disability.

But nothing in this world is absolute, and in certain extreme situations even an AI may cause casualties. Imagine an extreme scenario: an autonomous car is traveling at high speed when an elderly pedestrian suddenly steps into the road. At that speed braking is hopeless, and if the car swerves to avoid the pedestrian it will slam into the guardrail, where the impact could seriously injure or kill the passengers. How should the AI decide, and who should bear responsibility for the consequences of its decision? This is a classic philosophical dilemma.

If the same thing happened with a human driver, the problem would be far less complicated. Setting aside violations such as drunk driving or jaywalking, the casualties would be a pure accident: everything unfolds within a few tenths or hundredths of a second, far too fast for human consciousness to react or decide. An AI is different. Its sensors are more precise than the human brain's and its processing far faster, so whom to hit and whom to spare is entirely a decision and a choice the AI makes, and where there is a choice, there must be responsibility for the consequences. But how can an AI bear responsibility? That looks like a genuine ethical dilemma.

The channel 大问题 on YouTube and Bilibili, which I have introduced before, has a very thorough discussion of this question, presenting each of the main positions along with their arguments and reasoning. It is well worth watching; here is a brief summary.

Broadly, the positions fall into two camps: one opposes deploying autonomous driving technology, the other supports it. The opposition argues that although autonomous driving would bring enormous benefits to society as a whole by greatly reducing traffic casualties, it cannot eliminate them, and once a self-driving car causes an accident with casualties, no responsible party can be identified, so the loss falls on the victims themselves. That is unfair, and a technology should not be promoted just because it greatly benefits humanity as a whole while harming individual interests; doing so commits the collectivist error of sacrificing the individual for the good of the group. Frankly, I find this view a little strained. It has some merit, but it only raises the problem and offers no solution, whereas the views in favor of deployment mostly come with solutions of their own.

The supporters split into roughly three factions: one holds the manufacturer responsible, one holds the owner responsible, and one, the oddest of all, holds the car itself responsible.

The manufacturer faction's position can be summed up in Spider-Man's famous line: with great power comes great responsibility. The manufacturer designed the self-driving car and is the party best placed to influence its decisions, while a buyer cannot possibly study the AI's underlying source code, let alone intervene in its decision-making. So when an autonomous car causes an accident, the manufacturer should bear the main responsibility.

The owner faction counters that pinning responsibility on the manufacturer would actually harm society. A manufacturer goes to great lengths to develop new products that meet market demand and save more than 90% of would-be accident victims; holding it liable for accidents caused by cars it has already sold makes no sense at all. Once a car is sold, the manufacturer no longer owns it, and such liability would discourage innovation. It would resemble working inside a bureaucracy where the more you do, the more mistakes you are blamed for and the dimmer your prospects, while those who coast or scheme for advancement get promoted faster, a mechanism of reverse selection. Moreover, it is untenable to claim the owner has no decision-making power and therefore bears no responsibility, because the AI is only a tool that takes orders from the owner. If an owner in a hurry orders the AI to cut through a crowded street at speed and causes an accident, shouldn't the owner be held responsible? In wartime, when a squad of soldiers commits a massacre of civilians, can the commander who gave the order be completely exempt from responsibility?

The most interesting faction, though, blames the car itself. It argues that assigning responsibility merely satisfies the victim's, or the victim's family's, psychological need for revenge, whereas the real purpose of punishment is not retribution in kind to satisfy the victim but deterrence and correction: learning from past mistakes so as to produce better outcomes in the future. We sentence the most heinous criminals to death, above all, to deter other desperados from taking the same gamble. From this angle, punishing the AI itself works too: make it recognize its error and improve its algorithm so the mistake is not repeated, and the purpose of punishment is achieved.

This reasoning may sound comical, but if you understand how AI models are trained, it has its logic. Training an AI model consists of comparing the model's output against a reference standard and computing a loss function, and this loss function can be read as the AI's punishment. The more wildly wrong its predictions, the larger the loss, and the larger the adjustments it must make to the model's parameters; the whole process iterates this way, round after round.
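To make that concrete, here is a minimal sketch in Python of the "loss as punishment" loop described above. The model (a one-variable linear regression trained by gradient descent) and all of the numbers are illustrative choices of mine, not anything specified in the channel's discussion.

```python
# Toy "loss as punishment" training loop: the model is scored against a
# reference standard, and larger errors trigger larger parameter corrections.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)                   # inputs
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, size=100)   # the reference standard

w, b = 0.0, 0.0   # model parameters, initially wrong
lr = 0.1          # learning rate

loss = float("inf")
for step in range(200):
    pred = w * x + b               # the model's "decision"
    error = pred - y
    loss = np.mean(error ** 2)     # the "punishment": large when badly wrong
    # Gradient descent: the bigger the loss, the bigger the corrective update.
    w -= lr * np.mean(2 * error * x)
    b -= lr * np.mean(2 * error)

print(f"learned w={w:.2f}, b={b:.2f}, final loss={loss:.4f}")
```

Each iteration "punishes" larger errors with larger corrections, which is precisely the sense in which the blame-the-car faction says an AI can be made to learn its lesson.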


Still, I believe that retribution in kind, satisfying the victim's psychological need, is itself an essential part of punishment and should not be discarded. What do you think?
