Subscribe: Adaptive Behavior current issue
http://adb.sagepub.com/rss/current.xml
Language: English
Tags:
adaptive behavior  agents  decision making  emotion  emotional  emotions  expressions  human  interaction  robot  robots  social 

Adaptive Behavior RSS feed -- current issue

Grounding emotions in robots - An introduction to the special issue

2016-11-03T04:35:50-07:00

Robots inhabiting human environments need to act in relation to their own experience and embodiment as well as to social and emotional aspects. Robots that learn from, act upon and incorporate their own experience and their perception of others’ emotions into their responses are not only more productive artificial agents but also agents with whom humans can appropriately interact. This special issue addresses the significance of grounding emotions in robots in relation to aspects of physical and homeostatic interaction in the world, at both an individual and a social level. Specific questions include: How can emotion and social interaction be grounded in the behavioral activity of the robotic system? Can a robot have intrinsic emotions? How can emotions, grounded in the embodiment of the robot, facilitate individually and socially adaptive behavior in the robot? This opening article introduces the articles that comprise the special issue and briefly discusses their relationship to grounding emotions in robots.




Hedonic quality or reward? A study of basic pleasure in homeostasis and decision making of a motivated autonomous robot

2016-11-03T04:35:50-07:00

We present a robot architecture and experiments to investigate some of the roles that pleasure plays in the decision making (action selection) process of an autonomous robot that must survive in its environment. We have conducted three sets of experiments to assess the effect of different types of pleasure—related versus unrelated to the satisfaction of physiological needs—under different environmental circumstances. Our results indicate that pleasure, including pleasure unrelated to need satisfaction, has value for homeostatic management in terms of improved viability and increased flexibility in adaptive behavior.
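
A minimal sketch in Python of how a pleasure signal might modulate homeostatic action selection. The need variables, the multiplicative gain, and all numerical values below are illustrative assumptions, not the architecture used in the paper.

import random

class HomeostaticRobot:
    """Toy agent whose action selection is biased by a pleasure signal."""

    def __init__(self, pleasure_gain=0.5):
        # Physiological needs represented as homeostatic deficits in [0, 1].
        self.deficits = {"energy": 0.3, "water": 0.2}
        self.pleasure = 0.0            # current pleasure signal in [0, 1]
        self.pleasure_gain = pleasure_gain

    def motivation(self, need):
        # Pleasure amplifies the motivation derived from a deficit, so even
        # pleasure unrelated to need satisfaction can reshape behavior.
        return self.deficits[need] * (1.0 + self.pleasure_gain * self.pleasure)

    def select_action(self):
        # Winner-take-all over need-related motivations, with a little noise.
        return max(self.deficits,
                   key=lambda n: self.motivation(n) + random.gauss(0, 0.01))

robot = HomeostaticRobot()
robot.pleasure = 0.8                   # e.g. hedonic stimulation from the environment
print(robot.select_action())           # -> the need the robot will act to satisfy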




Happiness as an intrinsic motivator in reinforcement learning

2016-11-03T04:35:50-07:00

Reinforcement learning, a general and universally useful framework for learning from experience, has been broadly recognized as a critically important concept for understanding and shaping adaptive behavior, both in ethology and in artificial intelligence. A key component in reinforcement learning is the reward function, which, according to an emerging consensus, should be intrinsic to the learning agent and a matter of appraisal rather than a simple reflection of external outcomes. We describe an approach to intrinsically motivated reinforcement learning that involves various aspects of happiness, operationalized as dynamic estimates of well-being. In four experiments, in which simulated agents learned to explore and forage in simulated environments, we show that agents whose reward function properly balances momentary (hedonic) and longer-term (eudaimonic) well-being outperform agents equipped with standard fitness-oriented reward functions. Our findings suggest that happiness-based features can be useful in developing robust, general-purpose reward mechanisms for intrinsically motivated autonomous agents.
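
As a rough illustration, a reward function along these lines might combine a hedonic term (the momentary change in well-being) with a eudaimonic term (a running estimate of longer-term well-being). The weighting alpha and the smoothing constant below are assumptions of this sketch; the paper's exact operationalization may differ.

class HappinessReward:
    """Hedged sketch of a happiness-based intrinsic reward signal."""

    def __init__(self, alpha=0.5, smoothing=0.99):
        self.alpha = alpha          # balance: hedonic vs. eudaimonic term
        self.smoothing = smoothing  # horizon of the long-term average
        self.baseline = 0.0         # running estimate of well-being
        self.prev = 0.0             # previous momentary well-being

    def reward(self, wellbeing):
        hedonic = wellbeing - self.prev                  # momentary change
        self.baseline = (self.smoothing * self.baseline
                         + (1.0 - self.smoothing) * wellbeing)
        self.prev = wellbeing
        # Intrinsic reward mixes short- and long-term well-being.
        return self.alpha * hedonic + (1.0 - self.alpha) * self.baseline

r = HappinessReward()
for w in (0.1, 0.4, 0.3):            # well-being after successive actions
    print(r.reward(w))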




Outline of a sensory-motor perspective on intrinsically moral agents

2016-11-03T04:35:50-07:00

We propose that moral behaviour of artificial agents could (and should) be intrinsically grounded in their own sensory-motor experiences. Such an ability depends critically on seven types of competencies. First, intrinsic morality should be grounded in the internal values of the robot arising from its physiology and embodiment. Second, the moral principles of robots should develop through their interactions with the environment and with other agents. Third, we claim that the dynamics of moral (or social) emotions closely follows that of other non-social emotions used in valuation and decision making. Fourth, we explain how moral emotions can be learned from the observation of others. Fifth, we argue that to assess social interaction, a robot should be able to learn about and understand responsibility and causation. Sixth, we explain how mechanisms that can learn the consequences of actions are necessary for a robot to make moral decisions. Seventh, we describe how the moral evaluation mechanisms outlined can be extended to situations where a robot should understand the goals of others. Finally, we argue that these competencies lay the foundation for robots that can feel guilt, shame and pride, that have compassion and that know how to assign responsibility and blame.
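
Purely as an illustration of the third and fourth competencies (moral emotions reusing the valuation machinery of non-social emotions), a sketch might share one appraisal function between outcomes for the agent itself and, weighted by empathy, outcomes observed in others. All names and weights here are hypothetical, not the authors' formalization.

def appraise(outcome_value, arousal=1.0):
    # Generic (non-social) emotional valuation of an outcome.
    return arousal * outcome_value

def social_appraisal(own_outcome, other_outcome, empathy=0.8, responsibility=1.0):
    # The same valuation machinery applied to another agent's outcome.
    other_value = empathy * appraise(other_outcome)
    # A guilt-like signal: being responsible for harm to another agent.
    guilt = max(0.0, -responsibility * other_value)
    return appraise(own_outcome) + other_value, guilt

total_value, guilt = social_appraisal(own_outcome=0.2, other_outcome=-0.5)
print(total_value, guilt)   # harming the other lowers total value, raises guilt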




Emotional affordances for human-robot interaction

2016-11-03T04:35:50-07:00

This paper proposes a new concept for improving human–robot interaction (HRI) models: ‘emotional affordances’. Emotional affordances are all the mechanisms that carry emotional content as a way to transmit and/or collect emotional meaning in a given context; they include bodily expressions, social norms, value-laden objects and extended space, among others. This rich concept opens new ways to understand the multimodal and complex nature of emotional mechanisms. Starting from the grounded emotional mechanisms of human cognition and behaviour (that is, mechanisms based on, and resulting from, the bodily structure and its coupled relationship with the natural and/or social environment), the paper defines a framework for designing a taxonomy of emotional affordances, useful for a multimodal and improved understanding of the domains of emotional interaction that can emerge between humans and robots. This framework will make it possible, in subsequent research, to define processing modules and to elicit visual display outputs (expressing emotions). Consequently, the project provides robotics experts with a unified taxonomy of human emotional affordances, useful for improving HRI projects.
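
One plausible way such a taxonomy could be encoded in an HRI system is sketched below. The category names follow the examples given in the abstract (bodily expressions, social norms, value-laden objects, extended space); the data fields and modality labels are assumptions of this sketch, not the authors' design.

from dataclasses import dataclass
from enum import Enum, auto

class AffordanceCategory(Enum):
    BODILY_EXPRESSION = auto()    # facial expressions, posture, gesture
    SOCIAL_NORM = auto()          # greeting conventions, turn-taking
    VALUE_LADEN_OBJECT = auto()   # a gift, a photograph, a flag
    EXTENDED_SPACE = auto()       # personal space, territorial boundaries

@dataclass
class EmotionalAffordance:
    category: AffordanceCategory
    modality: str       # channel carrying the meaning: "visual", "audio", ...
    valence: float      # emotional meaning transmitted/collected, in [-1, 1]
    context: str        # situation in which the affordance is active

wave = EmotionalAffordance(AffordanceCategory.BODILY_EXPRESSION,
                           modality="visual", valence=0.7, context="greeting")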




Can my robotic home cleaner be happy? Issues about emotional expression in non-bio-inspired robots

2016-11-03T04:35:50-07:00

In many robotic applications the robot body must have a functional shape that cannot include bio-inspired elements, yet it is still important that the robot can express emotions, moods, or a character, to make it acceptable and engaging to its users. Dynamic signals from movement can be exploited to provide this expression while the robot is performing its task. A research effort has been started to find general emotion-expression models for actions that could be applied to any kind of robot to obtain believable and easily detectable emotional expressions; from this effort emerged the need for a unified representation of emotional expression. This paper proposes a framework for defining the action characteristics that can be used to represent emotions. Guidelines are provided for identifying quantitative models and numerical parameter values, which can be used to design and engineer emotional robot actions. A set of robots with different shapes, movement possibilities, and goals has been implemented following these guidelines. Thanks to the proposed framework, different models of emotional expression can now be compared in a sound way, and the question posed in the title can be answered in a justified way.
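
A toy version of this idea, assuming (hypothetically) that emotion is carried by a few quantitative movement parameters, such as speed and amplitude, that any robot morphology can realize; the parameter set and values are illustrative, not the paper's calibrated models.

# Hypothetical emotion profiles: (speed scale, amplitude scale).
EMOTION_PROFILES = {
    "happy":   (1.3, 1.2),   # faster, more expansive motion
    "sad":     (0.6, 0.7),   # slower, contracted motion
    "angry":   (1.6, 1.1),   # fast, forceful motion
    "neutral": (1.0, 1.0),
}

def emotional_trajectory(positions, dt, emotion="neutral"):
    """Rescale a nominal 1-D trajectory so the same action expresses an emotion."""
    speed, amplitude = EMOTION_PROFILES[emotion]
    mean = sum(positions) / len(positions)
    # Stretch positions around their mean and compress/stretch timing.
    return [(mean + amplitude * (p - mean), i * dt / speed)
            for i, p in enumerate(positions)]

print(emotional_trajectory([0.0, 0.5, 1.0, 0.5], dt=0.1, emotion="happy"))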




Combining intention and emotional state inference in a dynamic neural field architecture for human-robot joint action

2016-11-03T04:35:50-07:00

We report on our approach towards creating socially intelligent robots, which is heavily inspired by recent experimental findings about the neurocognitive mechanisms underlying action and emotion understanding in humans. Our approach uses neuro-dynamics as a theoretical language to model cognition, emotional states, decision making and action. The control architecture is formalized by a coupled system of dynamic neural fields representing a distributed network of local but connected neural populations. Different pools of neurons encode relevant information in the form of self-sustained activation patterns, which are triggered by input from connected populations and evolve continuously in time. The architecture implements a dynamic and flexible context-dependent mapping from observed hand and facial actions of the human onto adequate complementary behaviors of the robot that take into account the inferred goal and inferred emotional state of the co-actor. The dynamic control architecture was validated in multiple scenarios in which an anthropomorphic robot and a human operator assemble a toy object from its components. The scenarios focus on the robot’s capacity to understand the human’s actions and emotional states, to detect errors, and to adapt its behavior accordingly by adjusting its decisions and movements during the execution of the task.
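
The building block of such an architecture is an Amari-type dynamic neural field. A minimal, self-contained simulation of one field is sketched below; the kernel shape, parameters, and Euler integration step are standard textbook choices rather than the authors' exact settings.

import numpy as np

def simulate_field(steps=500, n=100, dt=0.05, tau=1.0, h=-2.0):
    x = np.linspace(-10.0, 10.0, n)
    dx = x[1] - x[0]
    u = np.full(n, h)                        # field activation at resting level h
    # Lateral kernel: local excitation with surround inhibition ("Mexican hat").
    d = x[:, None] - x[None, :]
    w = 2.0 * np.exp(-d**2 / 2.0) - 0.5
    s = 3.0 * np.exp(-x**2 / 2.0)            # localized external input
    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-u))         # sigmoidal firing-rate function
        # Amari equation: tau * du/dt = -u + h + s + integral of w * f
        u += (dt / tau) * (-u + h + s + (w @ f) * dx)
    return x, u                              # a self-sustained activation peak

x, u = simulate_field()
print(x[np.argmax(u)])                       # position encoded by the peak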




Developing crossmodal expression recognition based on a deep neural model

2016-11-03T04:35:50-07:00

A robot capable of understanding emotion expressions can increase its own capability to solve problems by using emotion expressions as part of its decision-making, in a similar way to humans. Evidence shows that the perception of human interaction starts with an innate perception mechanism, in which the interaction between different entities is perceived and categorized in two very clear directions: positive or negative. As a person develops through childhood, this perception evolves and is shaped by the observation of human interaction, creating the capability to learn different categories of expressions. In the context of human–robot interaction, we propose a model that simulates the innate perception of audio–visual emotion expressions with deep neural networks and learns new expressions by categorizing them into emotional clusters with a self-organizing layer. The proposed model is evaluated with three different corpora: the Surrey Audio–Visual Expressed Emotion (SAVEE) database, the visual Bi-modal Face and Body benchmark (FABO) database, and the multimodal corpus of the Emotion Recognition in the Wild (EmotiW) challenge. We use these corpora to evaluate the model’s performance in recognizing emotional expressions and compare it to state-of-the-art research.
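
As a hedged sketch of the self-organizing stage only (not the authors' full deep architecture), fused audio and visual embeddings can be clustered with a small self-organizing map whose units play the role of emotional clusters. The random vectors below merely stand in for learned deep features; map size and learning rates are assumptions.

import numpy as np

rng = np.random.default_rng(0)

def train_som(samples, units=10, epochs=20, lr=0.5, sigma=2.0):
    """Train a 1-D self-organizing map; each unit is a cluster prototype."""
    weights = rng.normal(size=(units, samples.shape[1]))
    for _ in range(epochs):
        for x in samples:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best match
            grid_dist = np.abs(np.arange(units) - bmu)
            nbhd = np.exp(-grid_dist**2 / (2.0 * sigma**2))       # neighborhood
            weights += lr * nbhd[:, None] * (x - weights)         # pull prototypes
        lr *= 0.9
        sigma *= 0.9
    return weights

# Placeholders for deep audio/visual embeddings of expression samples.
audio = rng.normal(size=(200, 64))
visual = rng.normal(size=(200, 128))
prototypes = train_som(np.concatenate([audio, visual], axis=1))
print(prototypes.shape)   # (10, 192): ten candidate emotional clusters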