In this article I will briefly summarize the main conclusions I presented in Berlin at the International Congress of Psychology (2008). My contribution, in collaboration with Professor Dr. Javier González Marqués (Chair of the Department of Basic Psychology at the Complutense University of Madrid), was entitled "Implementation Intentions and Artificial Agents" and establishes an interesting connection between human social cognition employing a particular type of intention and its simulation and performance by intelligent artificial agents.
An intention is a type of mental state that regulates the transformation of motivational processes into volitional processes. Peter Gollwitzer distinguishes between goal intentions and implementation intentions. Goal intentions act at the strategic level, whereas implementation intentions operate at the level of planning. Goal intentions can be formulated as "I intend to achieve X!", where X specifies a desired end state. Implementation intentions, by contrast, take the form "I intend to do X when situation Y is encountered". Thus, in an implementation intention, an anticipated situation or situational cue is linked in advance to a specific goal-directed behavior.
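The distinction can be sketched in code. This is a minimal illustration of the if-then structure of an implementation intention, not part of the simulation itself; all class and variable names here are ours, chosen for illustration.

```python
# Hypothetical sketch: a goal intention specifies only the desired end state,
# while an implementation intention links situational cues to planned actions.

class GoalIntention:
    """'I intend to achieve X!' -- only the end state X is specified."""
    def __init__(self, goal):
        self.goal = goal

class ImplementationIntention(GoalIntention):
    """'I intend to do X when situation Y is encountered' -- each
    anticipated cue Y is bound in advance to a goal-directed behavior X."""
    def __init__(self, goal, cue_to_action):
        super().__init__(goal)
        self.cue_to_action = cue_to_action  # {situational cue: planned action}

    def react(self, situation):
        # The planned response fires when the anticipated cue is detected;
        # unanticipated situations trigger no pre-planned action (None).
        return self.cue_to_action.get(situation)

plan = ImplementationIntention(
    goal="reach R",
    cue_to_action={"L0": "collect cue bonus", "S0": "detour around obstacle"},
)
print(plan.react("S0"))  # -> detour around obstacle
print(plan.react("Z9"))  # -> None (no cue was linked to this situation)
```

The point of the sketch is that the cue-action link is fixed at planning time, so responding to the cue requires no further deliberation at execution time.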
We have built a computer simulation that compares the behavior of two artificial agents. Both simulate the fulfillment of implementation intentions, but agent A0's behavior is weighted somewhat more toward the goal intention of obtaining the reward R, whereas agent A1 reflects a more planning-oriented behavior, that is, one more oriented toward avoiding obstacles and taking advantage of the situational cues.
The hypothesis to be tested is that, with only a slight difference in the programming of the two agents, agent A1 will not only yield a superior overall performance but will also reach goal R before A0 on a greater number of occasions. This is clearly consonant with the findings of Gollwitzer and collaborators on the superiority, in humans, of planning actions by means of implementation intentions over merely attempting to execute a goal intention. Gollwitzer and Sheeran (2004) conducted a meta-analytic study of the effects of forming implementation intentions on goal attainment. We set out to transfer the fundamental parameters obtained with humans to agent A1 and to compare the results with agent A0, which is more oriented toward executing the goal intention of reaching R. According to these authors (2004, p. 26), the overall impact of implementation intentions on goal attainment is d = 0.65, based on k = 94 tests involving 8,461 participants. A large effect (op. cit., p. 29) was obtained for implementation intentions when goal attainment was blocked by adverse contextual influences (d = 0.93), and the effect for accessibility of situational cues was d = 0.95. Accordingly, A1 was assigned a goal-attainment rate of 65 percent. Allowing a spread of 30 points in the attainment of R, and a 16-point advantage for A0 in reaching R directly, A0 was assigned an attainment rate of 81 percent. As for the accessibility of the situational cues L, it is very high in agent A1 (95 percent); and since A1 can gain up to 30 points more than A0 by exploiting the situational cues, A0 was assigned an accessibility of 76 percent. Likewise, since A1's degree of avoidance of the obstacles S is very high (93 percent), A0 was assigned a value 19 points lower, that is, 74 percent.
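The parameter assignments above can be collected in one place. This is only a restatement of the figures given in the text; the dictionary layout and key names are ours.

```python
# Parameter assignment described in the text (values in percent).
# A1's values are derived from the effect sizes in Gollwitzer and
# Sheeran (2004): d = 0.65, d = 0.95, d = 0.93; A0's values follow
# from the point differences stated in the text.

params = {
    "A1": {"attain_R": 65, "access_L": 95, "avoid_S": 93},
    "A0": {"attain_R": 81, "access_L": 76, "avoid_S": 74},
}

# Differences between the agents, A0 minus A1:
diff = {k: params["A0"][k] - params["A1"][k] for k in params["A1"]}
print(diff)
# A0 leads by 16 points on direct attainment of R, but trails by
# 19 points on both cue accessibility (L) and obstacle avoidance (S).
```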
However, landing on any of the obstacle squares S counts the same for both agents, so the corresponding penalty affects them equally.
We report the results after 5,000 trials, with an average of about 48 moves per trial. We recorded the total number of plays, points (average), total restarts (average), total victories (the number of times the agent reached R first), the number of situational cues L, the number of obstacles S, and the number of moves performed. The point system assigned was:
A0: start: +50; L0-L5: +20; S0-S5: -5; R: +150; D0 (dissuasive agent intercepting agents A0 and A1): -150; penalty per move: -1.
A1: start: +50; L0-L5: +25; S0-S5: -5; R: +120; D0 (dissuasive agent intercepting agents A0 and A1): -150; penalty per move: -1.
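The scoring rules above can be sketched as a small function. Only the point values come from the text; the function name, the event counts in the example, and the assumption that cue and obstacle bonuses apply per square visited are ours.

```python
# Minimal sketch of the point system listed above.

POINTS = {
    "A0": {"start": 50, "L": 20, "S": -5, "R": 150, "D0": -150, "move": -1},
    "A1": {"start": 50, "L": 25, "S": -5, "R": 120, "D0": -150, "move": -1},
}

def trial_score(agent, cues=0, obstacles=0, reached_R=False,
                intercepted=False, moves=0):
    """Score one trial given the events that occurred in it."""
    p = POINTS[agent]
    score = p["start"]
    score += cues * p["L"]             # situational cues L0-L5 exploited
    score += obstacles * p["S"]        # obstacles S0-S5 landed on
    if reached_R:
        score += p["R"]                # reward for reaching the goal
    if intercepted:
        score += p["D0"]               # intercepted by dissuasive agent D0
    score += moves * p["move"]         # -1 penalty per move
    return score

# Hypothetical trial: A1 reaches R after 48 moves (the average reported
# below), exploiting 4 cues and landing on 1 obstacle.
print(trial_score("A1", cues=4, obstacles=1, reached_R=True, moves=48))
# 50 + 4*25 - 5 + 120 - 48 = 217
```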
The diversity of tasks the agents must execute on the board ends up interacting in a dynamic and significant way. This is perhaps most forcefully apparent in what is the most decisive and surprising result of this simulation exercise: the more planning-oriented agent A1 reaches goal R a greater percentage of the time than A0, even though A0 was programmed to perceive and access R more easily.
We believe that our simulation has fulfilled its basic objective of supporting, within the area of Artificial Intelligence, the experimental conclusions of Gollwitzer and other authors with human participants regarding the superiority of implementation intentions for goal attainment over an emphasis on merely executing goal intentions. Obviously, given its limited scope, this exercise has not covered every possibility. The issue of the onset of goal pursuit has not been addressed, nor the possibility that the agents abandon the purpose of reaching R or seek alternative goals. Nor has the effect of successive frustrations on learning the task been explored. It would be interesting to introduce not only agents based on learning rules but also adaptive agents.