Affectivity and Affective Systems / Afectividad y Sistemas Afectivos

www.fgalindosoria.com/eac/afectividad/

 

www.fgalindosoria.com/eac/

 

Fernando Galindo Soria

www.fgalindosoria.com             fgalindo@ipn.mx

Red de Desarrollo Informático     REDI


Book: Teoría y Práctica de los Sistemas Evolutivos, 2nd Edition

 

Evolution and Evolutionary Systems, Affective Systems and Conscious Systems

 

Evolution and Evolutionary Systems                 Affective Systems                Conscious Systems

 

Evolutionary Matrices and Dimensional Dynamics

 

Web page created at Tenayuca, Mexico City, May 27, 2007

Last updated: December 9, 2008; July 9, 2009; January 16, 2010

 

ARTICLES

 

Sistemas Afectivos (Affective Systems)

HTML page     Word document     PDF document

 

Slides    PowerPoint document    PDF document

Written September 28, 2009

 

 

 

LINKS ON AFFECTIVE SYSTEMS

 

 

affective systems

affectivity, emotion, behavior, feelings, intentions

emotion computing, affective computing, behavior, behavior computing, feelings

emotional computing, affective, affective music, belief-desire-intention (BDI) model

 

 

June 9–10, 2007. Number of Google results for:

“Artificial emotion” 10,300

“affective computing” 162,000

“affective system”  20,500

“affective systems”  16,100

“affective computing” music 44,600

“affective computing” music "computational linguistic" 32

“affective computing” music "Generative grammar" 20

behavior computing 42,400,000

“emotion computing” 105

Personality computing 1,280,000

intentional computing 1,040,000

Human-computer interaction 23,200,000

 

*******************************************************************

 

IEEE Transactions on Affective Computing

http://www.computer.org/portal/web/tac

 

 

Research on Emotions and Human-Machine Interaction

http://emotion-research.net/

 

Artificial emotion

By Sam Allis, Globe Columnist, The Boston Globe, boston.com News, February 29, 2004

http://www.boston.com/news/local/articles/2004/02/29/artificial_emotion/

 

The International Society of Research on Emotion (ISRE)

http://isre.org/prd/

 

“affective computing” 162,000

http://affect.media.mit.edu/

http://en.wikipedia.org/wiki/Affective_computing

 

 

Rosalind W. Picard

“Professor Rosalind W. Picard, Sc.D. is founder and director of the Affective Computing Research Group at the Massachusetts Institute of Technology (MIT) Media Laboratory, codirector of the Things That Think Consortium…”

http://web.media.mit.edu/~picard/

 

Emotions: what needs to be taken into consideration.
Source: "Los ordenadores emocionales" by Rosalind W. Picard

Summary of properties

http://darwin.50webs.com/Espanol/Articu00189.htm

 

Affective computing

Rosalind W. Picard

Year of Publication: 1997 (FGS, linked January 2, 2010)

“The latest scientific findings indicate that emotions play an essential role in decision making, perception, learning, and more—that is, they influence the very mechanisms of rational thinking. Not only too much, but too little emotion can impair decision making. According to Rosalind Picard, if we want computers to be genuinely intelligent and to interact naturally with us, we must give computers the ability to recognize, understand, even to have and express emotions.
Part 1 of this book provides the intellectual framework for affective computing. It includes background on human emotions, requirements for emotionally intelligent computers, applications of affective computing, and moral and social questions raised by the technology. Part 2 discusses the design and construction of affective computers. Although this material is more technical than that in Part 1, the author has kept it less technical than typical scientific publications in order to make it accessible to newcomers. Topics in Part 2 include signal-based representations of emotions, human affect recognition as a pattern recognition and learning problem, recent and ongoing efforts to build models of emotion for synthesizing emotions in computers, and the new application area of affective wearable computers.” 

http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=4062

http://portal.acm.org/citation.cfm?id=265013&coll=portal&dl=ACM&CFID=805871&CFTOKEN=87652701
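
Picard's Part 2 treats human affect recognition as a pattern recognition and learning problem over signal-based features. As a purely illustrative sketch (not from the book), the following nearest-centroid classifier labels hypothetical physiological feature vectors; the feature names and training values are invented:

    # Minimal nearest-centroid affect recognizer over signal features.
    # Features and training data are invented placeholders, not Picard's.
    import math

    TRAINING = {
        "joy":     [(0.8, 95.0), (0.7, 90.0)],   # (skin_conductance, heart_rate)
        "sadness": [(0.2, 60.0), (0.3, 65.0)],
    }

    def centroid(samples):
        n = len(samples)
        return tuple(sum(s[i] for s in samples) / n for i in range(len(samples[0])))

    CENTROIDS = {label: centroid(samples) for label, samples in TRAINING.items()}

    def classify(features):
        """Return the emotion label whose centroid is nearest in Euclidean distance."""
        return min(CENTROIDS,
                   key=lambda label: math.dist(features, CENTROIDS[label]))

    print(classify((0.75, 92.0)))  # -> "joy"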

 

Google Books

 

***************************

 

Affective Computing Portal

Homepage of Dr. Christoph Bartneck

http://www.bartneck.de/link/affective_portal.html

 

Computers estimate emotions

physorg.com, Technology, January 4, 2006

http://www.physorg.com/news9565.html

 

SSAISB Home Page

The Society for the Study of Artificial Intelligence and Simulation of Behaviour

http://www.aisb.org.uk/

Proceedings of the AISB 2004

Symposium on Emotion, Cognition, and Affective Computing

http://www.aisb.org.uk/publications/proceedings/aisb04/AISB2004-Affective-proceedings.pdf

 

 

Computational Emergence and Computational Emotion

D.N. Davis

Neural, Emergent and Agent Technologies Group, Department of Computer Science,

The University of Hull, Kingston-upon-Hull, HU6 7RX, England

(Linked April 6, 2008)

http://www2.dcs.hull.ac.uk/NEAT/dnd/papers/smc99.pdf

 

Computers with attitude

“Last year a small group of scientists and entrepreneurs in Melbourne and Singapore quietly launched a business consortium with the potential to change the human psyche forever.

They are developing, in other words, emotionally intelligent computers - which their new company, Human Mind Innovations (HMI) Pty Ltd, will license and commercialise.”

The Cairns Post, Thursday, May 22, 2008

http://www.cairns.com.au/article/2008/05/22/4055_local-it-news.html

 

Emotion Machine

Marvin Minsky

Minsky talks about life, love in the age of artificial intelligence

Goldberg, Carey, December 4, 2006, The Boston Globe, boston.com

www.boston.com/business/technology/articles/2006/12/04/minsky_talks_about_life_love_in_the_age_of_artificial_intelligence/

 

The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind

by Marvin Minsky

Draft

http://web.media.mit.edu/~minsky/

Hardcover

http://www.amazon.com/exec/obidos/ISBN=0743276639/marvinminskyA/

 

Agents and affective systems

http://www.macs.hw.ac.uk/~ruth/agents.html

 

*************************************************************************

An abstract machine based on classical association psychology

Richard F. Reiss

May 1962

 

AIEE-IRE '62 (Spring): Proceedings of the May 1-3, 1962, spring joint computer conference

Publisher: ACM

 

Abstract

“The theories of classical association psychology (circa 1750-1900) attempted to explain human thought processes in terms of certain mechanistic forces operating on discrete entities called "sensations," "images," and "ideas." Although these theories ...”

 

Full text available: Pdf ACM (1.95 MB)

http://portal.acm.org/ft_gateway.cfm?id=1460840&type=pdf&coll=ACM&dl=ACM&CFID=46991025&CFTOKEN=30081638

 

*************************************************************************

Talking to strangers: an evaluation of the factors affecting electronic collaboration

Steve Whittaker

CSCW '96: Proceedings of the 1996 ACM conference on Computer supported cooperative work, November 1996, Publisher: ACM

 

“This empirical study examines factors influencing the success of a commercial groupware system in creating group archives and supporting asynchronous communication. The study investigates the use of Lotus Notes™ in a workplace setting. We interviewed 21 Notes users and identified three factors that they thought contributed to the successful use of Notes databases for archiving and communication. We then tested the effect of these factors on 15,571 documents in 20 different databases.

Contrary to our users' beliefs, we found the presence of an active database moderator actually inhibited discussions, and reduced browsing. Further paradoxical results were that conversations and the creation of group archives were more successful in databases with large numbers of diverse participants. Conversations and archiving were less successful in smaller, more homogeneous, project teams.

Database size was also important: a large database containing huge amounts of information was more likely to be used for further conversations and archiving, than a small one. This result again ran counter to users' beliefs that small databases are superior. We discuss possible reasons for these findings in terms of critical mass and media competition, and conclude with implications for design.”

 

Full text available: Pdf ACM (1.22 MB)

http://portal.acm.org/ft_gateway.cfm?id=240352&type=pdf&coll=GUIDE&dl=GUIDE&CFID=46388861&CFTOKEN=29620316

 

*************************************************************************

Ontology Based Affective Context Representation

Kuderna-Iulian Benta, Anca Rarău, Marcel Cremene

EATIS '07: Proceedings of the 2007 Euro American conference on Telematics and information systems, May 2007, Publisher: ACM

 

Abstract

“In this paper we propose an ontology based representation of the affective states for context aware applications that allows expressing the complex relations that are among the affective states and between these and the other context elements. This representation ...”

 

Full text available: Pdf ACM (127.64 KB)

http://portal.acm.org/ft_gateway.cfm?id=1352741&type=pdf&coll=ACM&dl=ACM&CFID=45405196&CFTOKEN=33693310

 

*********************************************************

What would they think?: a computational model of attitudes

Hugo Liu, Pattie Maes

 

IUI '04: Proceedings of the 9th international conference on Intelligent user interfaces, January 2004

Publisher: ACM

 

Abstract

“A key to improving at any task is frequent feedback from people whose opinions we care about: our family, friends, mentors, and the experts. However, such input is not usually available from the right people at the time it is needed most, and attaining ...”

 

Full text available: Pdf ACM (350.99 KB)

http://portal.acm.org/ft_gateway.cfm?id=964451&type=pdf&coll=ACM&dl=ACM&CFID=45411567&CFTOKEN=71712214

 

*********************************************************

Emotional content considered dangerous

Stephen W. Smoliar, Univ. of Pennsylvania, Philadelphia

Communications of the ACM , Volume 17 Issue 3, Pages: 164 – 165, March 1974

 

Abstract

“I had hoped that Moorer's rebuttal to my short communication in the November 1972 Communications would close the debate on a topic which, like the computer itself, has provoked an inordinately large quantity of unqualified argument. Unfortunately, the short communications by McMorrow and Wexelblat in the May 1973 Communications lead me to believe that my position is still grossly misunderstood. Therefore, allow me to clarify these matters.”

 

Full text available: Pdf ACM (199 KB)

http://portal.acm.org/ft_gateway.cfm?id=360909&type=pdf&coll=ACM&dl=ACM&CFID=46509235&CFTOKEN=94853227

 

*********************************************************

An emotion model using emotional memory and consciousness occupancy ratio

Sung June Chang, ETRI, Yuseong-gu, Taejeon

In Ho Lee,  ETRI, Yuseong-gu, Taejeon

ICAT; Vol. 157
Proceedings of the 2005 international conference on Augmented tele-existence

Poster session, Page 272, Year of Publication: 2005

Christchurch, New Zealand

Publisher ACM  New York, NY, USA

 

Abstract

“This paper focuses on general emotion model which can be used in cyber characters in VR. Our model shows the various kinds of emotional transition whose factors are ranged from single variable to multiple variables by emotional memory and Consciousness Occupancy Ratio (COR). This model also shows emotional memory recall which is an established theory in Psychology. In the last part, the simulation using a simple interactive agent successfully displays emotional and mental transitions similar to those of the real creature.”

 

Full text available: Pdf ACM (923 KB)

http://portal.acm.org/ft_gateway.cfm?id=1152462&type=pdf&coll=GUIDE&dl=GUIDE&CFID=72196263&CFTOKEN=93177036
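
The abstract's two ingredients, emotional memory recall and a Consciousness Occupancy Ratio (COR), invite a toy formalization. The sketch below is an invented approximation, not the paper's actual model: stimuli and recalled memories push emotion intensities, which are then normalized into occupancy ratios that share the "consciousness budget":

    # Toy emotional-state update loosely inspired by the abstract: stimuli push
    # emotion intensities, a memory term recalls past episodes, and occupancy
    # ratios are normalized to sum to 1. All equations are invented.
    EMOTIONS = ["joy", "fear", "anger"]

    def update(state, stimulus, memory, decay=0.9, recall=0.1):
        raw = {e: decay * state[e] + stimulus.get(e, 0.0) + recall * memory.get(e, 0.0)
               for e in EMOTIONS}
        total = sum(raw.values()) or 1.0
        return {e: raw[e] / total for e in EMOTIONS}   # occupancy ratios sum to 1

    state = {e: 1 / 3 for e in EMOTIONS}
    memory = {"fear": 0.5}                             # a remembered frightening episode
    state = update(state, {"joy": 0.4}, memory)
    print(state)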

 

**************************************************

 

 

*******************************************************************

 

Fractals show machine intentions
By Eric Smalley, Technology Research News, June 16/23, 2004

http://www.trnmag.com/Stories/2004/061604/Fractals_show_machine_intentions_061604.html

 

 

*******************************************************************

 

Computer–Human Interaction (CHI)

Human-computer interaction 23,200,000

http://en.wikipedia.org/wiki/Human-computer_interaction

 

*******************************************************************

 

Computers That Understand How You Feel
Rianne Wanders, University of Twente, October 15, 2008

“A navigation system able to provide emergency services with the quickest route while at the same time taking stress into account; this is an example of a new type of dialogue system developed by PhD candidate Trung Bui of the University of Twente. His dialogue system recognizes the user's emotions and is able to react to them.”

http://www.utwente.nl/nieuws/pers/en/cont_08-041_en.doc/

                      

Intel Software Accelerates Development Of Computers That 'Anticipate' The Needs Of Users

Vancouver, British Columbia, Dec. 8, 2003

http://www.intel.com/pressroom/archive/releases/20031208tech.htm?iid=HPAGE%2Blow_news_031208a&

 

Chill Out, Your Computer Knows What's Best for You
ICT Results, June 18, 2008

“Computers are starting to become more human-centric, anticipating your needs and smoothly acting to meet them. Much of the progress can be attributed to work done by European researchers”

http://cordis.europa.eu/ictresults/index.cfm/section/news/tpl/article/BrowsingType/Features/ID/89804

 

Helping Children Find What They Need on the Internet
By Stefanie Olsen, New York Times December 25, 2009

http://www.nytimes.com/2009/12/26/technology/internet/26kidsearch.html?_r=2

 

 

NWO Researcher Develops a 'Blacklist' of Expressions
NWO (Netherlands Organization for Scientific Research)

Monday, December 21, 2009

“List helps computers understand expressions with more than one meaning
Computers might well be 'with it', but 'they haven't got a clue' about expressions. Dutch researcher Nicole has come up with a solution to this problem: she has prepared a list of unpredictable word combinations that might, for instance, have a literal as well as a metaphorical meaning. The structuring of this list is such that it can be used by many different computer systems. Now at last your car navigation system might one day understand that you really do want to 'throw it out of the window'.”

http://www.alphagalileo.org/ViewItem.aspx?ItemId=65054&CultureCode=en
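
A toy version of such a resource is just a table from fixed expressions to their possible readings, which a dialogue system can consult before interpreting an utterance literally. The entries below are invented for illustration; the real NWO list is structured differently:

    # Toy "blacklist" of multiword expressions with more than one reading.
    # Entries are illustrative placeholders.
    BLACKLIST = {
        "throw it out of the window": ["discard physically", "abandon an idea"],
        "kick the bucket":            ["strike a pail", "die"],
    }

    def flag_ambiguous(utterance):
        """Return the ambiguous expressions found in an utterance, with their readings."""
        utterance = utterance.lower()
        return {expr: senses for expr, senses in BLACKLIST.items() if expr in utterance}

    print(flag_ambiguous("I want to throw it out of the window"))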

 

 

******************************************

Are you bored?: Maybe an interface agent can help!

Nilma Perera, Gregor Kennedy, Jon Pearce

OZCHI '08: Proceedings of the 20th Australasian Conference on Computer-Human Interaction: Designing for Habitus and Habitat, December 2008

Publisher: ACM

 

Abstract

“In this paper we present the influence of Emotive Interface Agents on task-induced boredom. We studied the effects of two agents --- friendly and unfriendly. The results show that, like human-human interaction, emotional contagion can happen between ...”

 

Full text available:  Pdf (514.34 KB)

http://portal.acm.org/ft_gateway.cfm?id=1517760&type=pdf&coll=ACM&dl=ACM&CFID=46865330&CFTOKEN=32004224

 

 

*******************************************************************

 

Nonverbal Communication / Comunicación no verbal

 

Eye Robot Aims to Crack Secret of Nonverbal Communication

Japanese robot communicates using eye movements alone.

Technology Review, Thursday, April 16, 2009

http://www.technologyreview.com/blog/arxiv/23383/

 

 

SPECIAL: Communicating with more than words

ICT Results, April 15, 2009

“ICT is all about communication but it has never provided the semantic richness offered by the non-verbal cues that typically pepper face-to-face conversation. Thanks to European researchers, that is about to change.

Modern information and communication technology provides a vast array of channels for all types of communication: web pages, SMS, email, twitter, and the list goes on and on.

But despite the breadth of our communication channels, what we say still lacks the depth of face-to-face communication. That is because, so far, ICT has failed to provide the non-verbal cues and context information that is so important to the way we communicate.

When we talk we use non-verbal cues like facial expressions or physical movements. We know if the people we are talking to are excited, busy, relaxed, tired, bored, etc. In electronic communication this sort of information is missing.

Surfers have tried to fill the gap with emoticons, smiley faces or other cues that hint at the intent of the writer around the words themselves. These cues explain such subtle qualities as sarcasm, teasing, anger or irony – vital contextual information for successful communication.

It is a workaround, but it hardly approaches the depth of communication we have when face to face with someone.

Now, European researchers believe they have developed a system that could finally add reliable context and mood information to voice and text communication.”

http://cordis.europa.eu/ictresults/index.cfm?section=news&tpl=article&BrowsingType=Features&ID=90497

 

*******************************************************************

 

CMU at forefront in building thinking machines

Mark Houser, Pittsburgh Tribune-Review, 06 April 2008

http://www.pittsburghlive.com/x/pittsburghtrib/news/cityregion/s_560910.html

 

European expert platform to address measurement of human emotions

http://www.innovations-report.de/html/berichte/informationstechnologie/bericht-93837.html

Virginia Mercouri, innovations-report, October 30, 2007

 

Technology tunes into our emotions

Dani Cooper

ABC Science Online, Wednesday, 31 October 2007

http://abc.net.au/science/news/stories/2007/2075023.htm?health

 

Emotion-Recognition Software Knows What Makes You Smile

Nicole Martinelli, WIRED, July 16, 2007

http://www.wired.com/science/discoveries/news/2007/07/expression_research

 

 

*************************************************************************

Affective Systems, Artificial Life, and Synthetic Actors / Sistemas afectivos, vida artificial y actores sintéticos

 

Artificial life as a path from computing to philosophy

Drue Coles

Journal of Computing Sciences in Colleges , Volume 24 Issue 6

Publisher: Consortium for Computing Sciences in Colleges, June 2009

 

Abstract

“A typical undergraduate program in computer science might make contact with the field of philosophy at several points in courses on artificial intelligence and the theory of computation. This article discusses another potential point of contact: artificial ...”

 

Full text available: Pdf ACM  (61.28 KB)

http://portal.acm.org/ft_gateway.cfm?id=1530015&type=pdf&coll=ACM&dl=ACM&CFID=47166338&CFTOKEN=60514033

 

*********************************************************

Real-time individualized virtual humans

Nadia Magnenat-Thalmann, Daniel Thalmann

SIGGRAPH ASIA 2008 courses, December 2008

 

Abstract

“This tutorial will present the latest techniques to model fast individualized animatable virtual humans for Real-Time applications. As a human is composed of a head and a body, we will analyze how these two parts can be modeled and globally animated ...”

 

Full text available: Pdf ACM (11.13 MB)

http://portal.acm.org/ft_gateway.cfm?id=1508097&type=pdf&coll=ACM&dl=ACM&CFID=46865330&CFTOKEN=32004224

 

*********************************************************

Synthetic characters as multichannel interfaces

Elena Not, Koray Balci, Fabio Pianesi, Massimo Zancanaro          

ICMI '05: Proceedings of the 7th international conference on Multimodal interfaces, October 2005

Publisher: ACM

 

Abstract

“Synthetic characters are an effective modality to convey messages to the user, provide visual feedback about the system internal understanding of the communication, and engage the user in the dialogue through emotional involvement. In this paper we argue for a fine-grain distinction of the expressive capabilities of synthetic agents: avatars should not be considered as an indivisible modality but as the synergic contribution of different communication channels that, properly synchronized, generate an overall communication performance. In this view, we propose SMIL-AGENT as a representation and scripting language for synthetic characters, which abstracts away from the specific implementation and context of use of the character. SMIL-AGENT has been defined starting from SMIL 0.1 standard specification and aims at providing a high-level standardized language for presentations by different synthetic agents within diverse communication and application contexts.”

 

Full text available: Pdf ACM (1.07 MB)

http://portal.acm.org/ft_gateway.cfm?id=1088499&type=pdf&coll=ACM&dl=ACM&CFID=46388861&CFTOKEN=29620316

 

 

*******************************************************************

 

Personality in Computer Characters

“Personality characterizes an individual through a set of traits that influence his or her behavior. We propose a model of personality that can be used by intelligent, automated actors able to improvise their behavior and to interact with users in a multimedia environment. Users themselves become actors by exercising high-level control over their own intelligent agents. We propose different dimensions of personality that are based on the processes that intelligent agents usually perform. These dimensions are rich enough to allow the specification of an interesting number of characters able to improvise and react differently although they are put in the same context. We show the influence that the personality traits have on an actor's behavior, moods and relationships.”

http://citeseer.ist.psu.edu/90389.html
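
The proposal above is that a vector of personality traits biases what an improvising actor does in a given context. A minimal sketch of that idea, with invented traits, actions, and affinity weights:

    # Sketch: trait-weighted action selection for an improvising actor.
    # Trait names, actions, and affinities are invented for illustration.
    import random

    AFFINITY = {              # how much each trait favors each action
        "greet":  {"extroversion": 0.9, "aggressiveness": 0.1},
        "taunt":  {"extroversion": 0.4, "aggressiveness": 0.9},
        "ignore": {"extroversion": 0.0, "aggressiveness": 0.3},
    }

    def choose_action(traits):
        """Pick an action with probability proportional to its trait-weighted score."""
        scores = {a: sum(traits.get(t, 0.0) * w for t, w in aff.items())
                  for a, aff in AFFINITY.items()}
        return random.choices(list(scores), weights=list(scores.values()))[0]

    shy_bully = {"extroversion": 0.2, "aggressiveness": 0.8}
    print(choose_action(shy_bully))   # "taunt" is the most likely outcome

Two actors put in the same scene but given different trait vectors will, over repeated choices, behave recognizably differently, which is the effect the cited model aims for.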

 

Bringing Second Life To Life: Researchers Create Character With Reasoning Abilities of a Child

Rensselaer Polytechnic Institute, Amber Cleveland, 3 March 2008

http://news.rpi.edu/update.do?artcenterkey=2410

 

 

*************************************************************************

Affective Games / Juegos Afectivos

 

Affective game engines: motivation and requirements

Eva Hudlicka

FDG '09: Proceedings of the 4th International Conference on Foundations of Digital Games, April 2009, Publisher: ACM

 

Abstract

“The tremendous advances in gaming technologies over the past decade have focused primarily on the physical realism of the game environment and game characters, and the complexity and performance of game simulations and networking. However, current games ...”

 

Full text available: Pdf (700.04 KB)

http://portal.acm.org/ft_gateway.cfm?id=1536565&type=pdf&coll=ACM&dl=ACM&CFID=45405196&CFTOKEN=33693310

 

 

Videojuego que opera en base a emociones (A video game that operates on emotions)

Universia, Tuesday, June 6, 2006

http://www.universia.net.mx/index.php/news_user/content/view/full/38029/

Funciona Video Juego con Emociones (Video game works with emotions)

El Lider USA, August 5, 2006

http://www.elliderusa.com/news.php?nid=1158

 

 

*******************************************************************

Affective Music / Música Afectiva

 

 

Music and Emotion: Theory and Research (Series in Affective Science) (Paperback)

Patrik N. Juslin (Editor), John A. Sloboda (Editor)

 

Product Description

“This new volume in the Series in Affective Science is the first book in over 40 years to tackle the complex and powerful relationship between music and emotion. The book brings together leading researchers in both areas to present the first integrative review of this powerful relationship. This is a book long overdue, and one that will fascinate psychologists, musicologists, music educators, and philosophers.”

http://www.amazon.com/Music-Emotion-Research-Affective-Science/dp/0192631888

 

***************************

Music, Mind and Machine   MIT Media Lab

http://sound.media.mit.edu/

http://www.media.mit.edu/research/ResearchPubWeb.pl?ID=20

 

The Affective Remixer: Personalized Music Arranging

http://affect.media.mit.edu/projects.php?id=2084

 

Generative Model for the Creation of Musical Emotion, Meaning, and Form

David Birchfield, Arts, Media, and Engineering Program, Institute for Studies in the Arts, Arizona State University

http://ame2.asu.edu/faculty/dab/research/publications/ETP03_Birchfield.pdf

 

*********************************************************

Emotional remapping of music to facial animation

Steve DiPaola, Ali Arya

July 2006

 

Sandbox '06: Proceedings of the 2006 ACM SIGGRAPH symposium on Videogames

Publisher: ACM

 

Abstract

“We propose a method to extract the emotional data from a piece of music and then use that data via a remapping algorithm to automatically animate an emotional 3D face sequence. The method is based on studies of the emotional aspect of music and our parametric-based behavioral head model for face animation. We address the issue of affective communication remapping in general, i.e. translation of affective content (eg. emotions, and mood) from one communication form to another. We report on the results of our MusicFace system, which use these techniques to automatically create emotional facial animations from multi-instrument polyphonic music scores in MIDI format and a remapping rule set.”

 

Full text available: Pdf ACM (644.32 KB)

http://portal.acm.org/ft_gateway.cfm?id=1183337&type=pdf&coll=ACM&dl=ACM&CFID=47166338&CFTOKEN=60514033
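
The core of MusicFace is a remapping of affective content from one channel (music) to another (a face). A toy version of the same pipeline, with an invented valence/arousal heuristic over note data and invented facial parameters; the paper's rule set and parametric head model are far richer:

    # Toy affective remapping: (valence, arousal) from notes -> face parameters.
    # The heuristics and parameter names are invented placeholders.

    def estimate_affect(notes):
        """notes: list of (midi_pitch, duration_s). Crude valence/arousal heuristic."""
        avg_pitch = sum(p for p, _ in notes) / len(notes)
        notes_per_s = len(notes) / sum(d for _, d in notes)
        valence = max(-1.0, min(1.0, (avg_pitch - 60) / 24))   # higher pitch -> happier
        arousal = max(0.0, min(1.0, notes_per_s / 8))          # denser notes -> more aroused
        return valence, arousal

    def face_parameters(valence, arousal):
        return {"smile": max(0.0, valence), "brow_raise": arousal,
                "frown": max(0.0, -valence)}

    v, a = estimate_affect([(67, 0.25), (72, 0.25), (76, 0.5)])
    print(face_parameters(v, a))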

 

*********************************************************

The affective remixer: personalized music arranging

Jae-woo Chung, G. Scott Vercoe

CHI '06: CHI '06 extended abstracts on Human factors in computing systems, April 2006, Publisher: ACM

 

Abstract

“This paper describes a real-time music-arranging system that reacts to immediate affective cues from a listener. Data was collected on the potential of certain musical dimensions to elicit change in a listener's affective state using sound files created ...”

 

Full text available: Pdf ACM (457.92 KB)

http://portal.acm.org/ft_gateway.cfm?id=1125535&type=pdf&coll=ACM&dl=ACM&CFID=45411567&CFTOKEN=71712214

 

*********************************************************

CAUI demonstration: composing music based on human feelings

Masayuki Numao, Shoichi Takagi, Keisuke Nakamura

Department of Computer Science, Tokyo Institute of Technology, Tokyo, Japan

Eighteenth national conference on Artificial intelligence, July 2002, Publisher: American Association for Artificial Intelligence

 

Abstract

“We demonstrate a method to locate relations and constraints between a music score and its impressions, by which we show that machine learning techniques may provide a powerful tool for composing music and analyzing human feelings. We examine its generality by modifying some arrangements to provide the subjects with a specified impression. This demonstration introduces some user interfaces, which are capable of predicting feelings and creating new objects based on seed structures, such as spectra and their transition for sounds that have been extracted and are perceived as favorable by the test subject.”

 

PDF

http://www.ai.sanken.osaka-u.ac.jp/files/Numao-caui-demo.pdf

 

*********************************************************

A Case Based Approach to Expressivity-Aware Tempo Transformation.

Maarten Grachten, Josep-Lluís Arcos, Ramon López de Mántaras

Machine Learning, Volume 65, Issue 2–3, pp. 411–437, Kluwer Academic Publishers, MA, USA, December 2006

 

Abstract

“The research presented in this paper is focused on global tempo transformations of music performances. We are investigating the problem of how a performance played at a particular tempo can be rendered automatically at another tempo, while preserving naturally sounding expressivity. Or, differently stated, how does expressiveness change with global tempo. Changing the tempo of a given melody is a problem that cannot be reduced to just applying a uniform transformation to all the notes of a musical piece. The expressive resources for emphasizing the musical structure of the melody and the affective content differ depending on the performance tempo. We present a case-based reasoning system called TempoExpress and will describe the experimental results obtained with our approach.”

 

PDF

http://digital.csic.es/bitstream/10261/2949/1/CBTempo.pdf

 

*********************************************************

Constructive adaptive user interfaces: composing music based on human feelings

Masayuki Numao, Shoichi Takagi, Keisuke Nakamura

Eighteenth national conference on Artificial intelligence, July 2002, Publisher: American Association for Artificial Intelligence

 

Abstract

“We propose a method to locate relations and constraints between a music score and its impressions, by which we show that machine learning techniques may provide a powerful tool for composing music and analyzing human feelings. We examine its generality by modifying some arrangements to provide the subjects with a specified impression. This paper introduces some user interfaces, which are capable of predicting feelings and creating new objects based on seed structures, such as spectra and their transition for sounds that have been extracted and are perceived as favorable by the test subject.”

 

PDF

http://www.aaai.org/Papers/AAAI/2002/AAAI02-030.pdf

 

*********************************************************

Music compositional intelligence with an affective flavor

Roberto Legaspi, Yuya Hashimoto, Koichi Moriyama, Satoshi Kurihara, Masayuki Numao

 

IUI '07: Proceedings of the 12th international conference on Intelligent user interfaces, January 2007

Publisher: ACM

 

Abstract

“The consideration of human feelings in automated music generation by intelligent music systems, albeit a compelling theme, has received very little attention. This work aims to computationally specify a system's music compositional intelligence that ...”

 

Full text available: Pdf ACM (825.13 KB)

http://portal.acm.org/ft_gateway.cfm?id=1216335&type=pdf&coll=ACM&dl=ACM&CFID=45411567&CFTOKEN=71712214

 

*********************************************************

Modelling affective-based music compositional intelligence with the aid of ANS analyses

Toshihito Sugimoto, Roberto Legaspi, Akihiro Ota, Koichi Moriyama, Satoshi Kurihara, Masayuki Numao

Knowledge-Based Systems , Volume 21 Issue 3, April 2008

Publisher: Elsevier Science Publishers B. V.  Amsterdam, The Netherlands, The Netherlands

 

Abstract

“This research investigates the use of emotion data derived from analyzing change in activity in the autonomic nervous system (ANS) as revealed by brainwave production to support the creative music compositional intelligence of an adaptive interface. A relational model of the influence of musical events on the listener's affect is first induced using inductive logic programming paradigms with the emotion data and musical score features as inputs of the induction task. The components of composition such as interval and scale, instrumentation, chord progression and melody are automatically combined using genetic algorithm and melodic transformation heuristics that depend on the predictive knowledge and character of the induced model. Out of the four targeted basic emotional states, namely, stress, joy, sadness, and relaxation, the empirical results reported here show that the system is able to successfully compose tunes that convey one of these affective states.”
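
The generative loop described above combines an induced affect model with a genetic algorithm over compositional components. A compressed sketch of such a loop, where an invented "sadness" score stands in for the model the paper induces from ANS/brainwave data:

    # Sketch: genetic algorithm over chord progressions, scored by a stand-in
    # affect model (the real system induces its model from physiological data).
    import random

    CHORDS = ["C", "Dm", "Em", "F", "G", "Am"]

    def fitness(progression):
        """Invented 'sadness' score: reward minor chords and repetition."""
        minor = sum(c.endswith("m") for c in progression)
        repeats = sum(a == b for a, b in zip(progression, progression[1:]))
        return minor + 0.5 * repeats

    def evolve(pop_size=30, length=8, generations=50):
        pop = [[random.choice(CHORDS) for _ in range(length)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]
            children = []
            for _ in range(pop_size - len(parents)):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, length)        # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < 0.2:                # point mutation
                    child[random.randrange(length)] = random.choice(CHORDS)
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    print(evolve())   # a progression the stand-in model rates as "sad"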

 

*********************************************************

Generative model for the creation of musical emotion, meaning and form.

David Birchfield

Proceedings of the 2003 ACM SIGMM workshop on Experiential telepresence (pp. 99–104), Berkeley, California

Publisher: ACM, New York, NY, USA

 

Abstract

“The automated creation of perceptible and compelling large-scale forms and hierarchical structures that unfold over time is a nontrivial challenge for generative models of multimedia content. Nonetheless, this is an important goal for multimedia authors and artists who work in time-dependent mediums. This paper and associated demonstration materials present a generative model for the automated composition of music. The model draws on theories of emotion and meaning in music, and relies on research in cognition and perception to ensure that the generated music will be communicative and intelligible to listeners. The model employs a coevolutionary genetic algorithm that is comprised of a population of musical components. The evolutionary process yields musical compositions which are realized as digital audio, a live performance work, and a musical score in conventional notation. These works exhibit musical features which are in accordance with aesthetic and compositional goals described in the paper.”

 

PDF

http://ame2.asu.edu/faculty/dab/research/publications/ETP03_Birchfield.pdf

 

*********************************************************

Expression and Its Discontents: Toward an Ecology of Musical Creation

Michael Gurevich, CCRMA, Stanford University, Department of Music,

Jeffrey Treviño, Center for Research in Computing and the Arts, University of California at San Diego

 

Abstract

“We describe the prevailing model of musical expression, which assumes a binary formulation of “the text” and “the act,” along with its implied roles of composer and performer. We argue that this model not only excludes some contemporary aesthetic values but also limits the communicative ability of new music interfaces.

As an alternative, an ecology of musical creation accounts for both a diversity of aesthetic goals and the complex interrelation of human and non-human agents. An ecological perspective on several approaches to musical creation with interactive technologies reveals an expanded, more inclusive view of artistic interaction that facilitates novel, compelling ways to use technology for music. This paper is fundamentally a call to consider the role of aesthetic values in the analysis of artistic processes and technologies.”

 

Full text available: Pdf ACM (361 KB)

http://portal.acm.org/ft_gateway.cfm?id=1279759&type=pdf&coll=GUIDE&dl=GUIDE&CFID=70760649&CFTOKEN=44852661

 

*********************************************************

Emotional Effects of Music: Production Rules

Klaus R. Scherer and Marcel R. Zentner

Juslin, P.N. & Sloboda, J.A. (ed.) (2001) Music and emotion: theory and research. Oxford ; New York : Oxford University Press., CHAPTER 16

 

http://psy2.ucsd.edu/~charris/SchererZentner.pdf

 

*********************************************************

Emotion Recognition Based on Physiological Changes in Music Listening

Jonghwa Kim, Elisabeth André

IEEE Transactions on Pattern Analysis and Machine Intelligence , Volume 30 Issue 12, December 2008

Publisher: IEEE Computer Society

 

Abstract

“This paper investigates the potential of physiological signals as reliable channels for emotion recognition. All essential stages of an automatic recognition system are discussed, from the recording of a physiological dataset to a feature-based multiclass classification. In order to collect a physiological dataset from multiple subjects over many weeks, we used a musical induction method which spontaneously leads subjects to real emotional states, without any deliberate lab setting. Four-channel biosensors were used to measure electromyogram, electrocardiogram, skin conductivity and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, multiscale entropy, etc., is proposed in order to find the best emotion-relevant features and to correlate them with emotional states. The best features extracted are specified in detail and their effectiveness is proven by classification results. Classification of four musical emotions (positive/high arousal, negative/high arousal, negative/low arousal, positive/low arousal) is performed by using an extended linear discriminant analysis (pLDA). Furthermore, by exploiting a dichotomic property of the 2D emotion model, we develop a novel scheme of emotion-specific multilevel dichotomous classification (EMDC) and compare its performance with direct multiclass classification using the pLDA. Improved recognition accuracy of 95% and 70% for subject-dependent and subject-independent classification, respectively, is achieved by using the EMDC scheme.”
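
The EMDC scheme can be read as a cascade of binary decisions over the 2D emotion model: first high vs. low arousal, then positive vs. negative valence with an arousal-specific classifier. A skeleton of that control flow, with invented threshold rules standing in for the paper's trained pLDA classifiers:

    # Skeleton of emotion-specific multilevel dichotomous classification (EMDC).
    # The three binary classifiers below are invented threshold rules standing
    # in for pLDA classifiers trained on physiological features.

    def arousal_high(features):                     # stage 1: high vs. low arousal
        return features["emg_energy"] > 0.5

    def valence_positive_high_arousal(features):    # stage 2a, for high-arousal data
        return features["hr_variability"] > 0.4

    def valence_positive_low_arousal(features):     # stage 2b, for low-arousal data
        return features["skin_conductance"] < 0.3

    def classify(features):
        if arousal_high(features):
            pos = valence_positive_high_arousal(features)
            return "positive/high arousal" if pos else "negative/high arousal"
        pos = valence_positive_low_arousal(features)
        return "positive/low arousal" if pos else "negative/low arousal"

    print(classify({"emg_energy": 0.7, "hr_variability": 0.6, "skin_conductance": 0.2}))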

 

*********************************************************

Music emotion recognition: the role of individuality

Yi-Hsuan Yang, Ya-Fan Su, Yu-Ching Lin, Homer H. Chen

HCM '07: Proceedings of the international workshop on Human-centered multimedia,

Augsburg, Bavaria, Germany, SESSION: Session 1, Pages: 13 – 22, September 2007

 

Abstract

“It has been realized in the music emotion recognition (MER) community that personal difference, or individuality, has significant impact on the success of an MER system in practice. However, no previous work has explicitly taken individuality into consideration in an MER system. In this paper, the group-wise MER approach (GWMER) and personalized MER approach (PMER) are proposed to study the role of individuality. GWMER evaluates the importance of each individual factor such as sex, personality, and music experience, whereas PMER evaluates whether the prediction accuracy for a user is significantly improved if the MER system is personalized for the user. Experimental results demonstrate the effect of personalization and suggest the need for a better representation of individuality and for better prediction accuracy.”

 

Full text available: Pdf ACM (439 KB)

http://portal.acm.org/ft_gateway.cfm?id=1290132&type=pdf&coll=ACM&dl=ACM&CFID=45262200&CFTOKEN=77509227

 

 

*************************************************************************

Affective Systems and Art / Sistemas afectivos y arte

 

Affective Scene Generation

Carl Hultquist, Department of Computer Science, University of Cape Town, Cape Town, South Africa

James Gain, Department of Computer Science, University of Cape Town, Cape Town, South Africa

David Cairns, Department of Computing Science and Mathematics, University of Stirling, Stirling, Scotland

Afrigaph '06: Proceedings of the 4th international conference on Computer graphics, virtual reality, visualization and interaction in Africa, January 2006

Publisher: ACM

 

Abstract

“A new technique for generating virtual environments is proposed, whereby the user describes the environment that they wish to create using adjectives. An entire scene is then procedurally generated, based on the mapping of these adjectives to the parameter ...”

 

Full text available: Pdf ACM (156.42 KB)

http://portal.acm.org/ft_gateway.cfm?id=1108600&type=pdf&coll=Portal&dl=GUIDE&CFID=45326681&CFTOKEN=72054038

 

*********************************************************

Affective Scene Generation

Carl Hultquist

Welcome to the web-page for my PhD project! I am conducting research into a field that my supervisor and I have dubbed affective scene generation, which informally is the automatic generation of virtual environments from a set of adjectives specified by the user. For more technical bits and pieces, please consult some of the other pages using the links on the left.

http://people.cs.uct.ac.za/~chultqui/masters/

 

Affective Scene Generation. Masters Proposal. Carl Hultquist

http://www.slideworld.com/slideshows.aspx/Affective-Scene-Generation-ppt-1085852
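
The generation step in Hultquist's project reduces to a mapping from user-supplied adjectives to the parameters of a procedural scene generator. A toy version that averages the parameter vectors of the chosen adjectives; the adjective and parameter tables are invented here, whereas the thesis derives such mappings empirically:

    # Toy adjective -> procedural-scene parameter mapping.
    # Adjective/parameter tables are invented placeholders.
    ADJECTIVES = {
        "gloomy":   {"light": 0.2, "fog": 0.8, "tree_density": 0.6},
        "cheerful": {"light": 0.9, "fog": 0.1, "tree_density": 0.3},
        "wild":     {"light": 0.5, "fog": 0.3, "tree_density": 0.9},
    }

    def scene_parameters(words):
        """Average the parameter vectors of the recognized adjectives."""
        chosen = [ADJECTIVES[w] for w in words if w in ADJECTIVES]
        if not chosen:
            raise ValueError("no known adjectives given")
        keys = chosen[0].keys()
        return {k: sum(c[k] for c in chosen) / len(chosen) for k in keys}

    print(scene_parameters(["gloomy", "wild"]))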

 

*********************************************************

Affective Characterization of Movie Scenes Based on Multimedia Content Analysis and User's Physiological Emotional Responses

Mohammad Soleymani, Guillaume Chanel, Joep J. M. Kierkels, Thierry Pun

ISM '08: Proceedings of the 2008 Tenth IEEE International Symposium on Multimedia - Volume 00 , December 2008, Publisher: IEEE Computer Society

 

Abstract

“In this paper, we propose an approach for affective representation of movie scenes based on the emotions that are actually felt by spectators. Such a representation can be used for characterizing the emotional content of video clips for e.g. affective video indexing and retrieval, neuromarketing studies, etc. A dataset of 64 different scenes from eight movies was shown to eight participants. While watching these clips, their physiological responses were recorded. The participants were also asked to self-assess their felt emotional arousal and valence for each scene. In addition, content-based audio- and video-based features were extracted from the movie scenes in order to characterize each one. Degrees of arousal and valence were estimated by a linear combination of features from physiological signals, as well as by a linear combination of content-based features. We showed that a significant correlation exists between arousal/valence provided by the spectator's self-assessments, and affective grades obtained automatically from either physiological responses or from audio-video features. This demonstrates the ability of using multimedia features and physiological responses to predict the expected affect of the user in response to the emotional video content.”
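
The estimation step described above, arousal and valence as linear combinations of features, is an ordinary least-squares fit. A sketch with random placeholder data standing in for the physiological and audio-video features of the 64 scenes:

    # Least-squares fit of arousal as a linear combination of features,
    # as in the paper's linear estimation step. Data is random placeholder.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((64, 5))            # 64 scenes x 5 features (e.g., GSR, audio energy)
    true_w = np.array([0.5, -0.2, 0.8, 0.0, 0.3])
    y = X @ true_w + 0.05 * rng.standard_normal(64)   # self-assessed arousal

    Xb = np.hstack([X, np.ones((64, 1))])             # add intercept column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)        # fitted weights + bias

    predicted = Xb @ w
    print(np.corrcoef(predicted, y)[0, 1])            # correlation, cf. the paper's analysis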

 

****************************

This is the personal website of Mohammad Soleymani

http://www.soleymani.ir/doku.php

 

Movie affective characterization

“We propose an approach for affective representation of movie scenes based on the emotions that are actually felt by spectators. Such a representation can be used for characterizing the emotional content of video clips for e.g. affective video indexing and retrieval, neuromarketing studies, etc…”

http://cvml.unige.ch/doku.php/mmi/movieaffectivecharacterization

 

Research and academic

http://www.soleymani.ir/doku.php?id=research

 

*********************************************************

Affective ranking of movie scenes using physiological signals and content analysis

Mohammad Soleymani, Guillaume Chanel, Joep J.M. Kierkels, Thierry Pun

MS '08: Proceeding of the 2nd ACM workshop on Multimedia semantics, October 2008, Publisher: ACM

 

Abstract

“In this paper, we propose an approach for affective ranking of movie scenes based on the emotions that are actually felt by spectators. Such a ranking can be used for characterizing the affective, or emotional, content of video clips. The ranking can ...”

 

Full text available: Pdf ACM

http://portal.acm.org/ft_gateway.cfm?id=1460684&type=pdf&coll=ACM&dl=ACM&CFID=45405196&CFTOKEN=33693310

 

*********************************************************

E-tree: emotionally driven augmented reality art

Stephen W. Gilroy, University of Teesside, Middlesbrough, United Kingdom; Marc Cavazza, University of Teesside, Middlesbrough, United Kingdom; Rémi Chaignon, University of Teesside, Middlesbrough, United Kingdom; Satu-Marja Mäkelä, VTT Electronics, Espoo, Finland; Markus Niranen, VTT Electronics, Espoo, Finland; Elisabeth André, University of Augsburg, Augsburg, Germany; Thurid Vogt, University of Augsburg, Augsburg, Germany; Jérôme Urbain, Faculté Polytechnique de Mons, Mons, Belgium; Mark Billinghurst, HITLabNZ, Christchurch, New Zealand; Hartmut Seichter, HITLabNZ, Christchurch, New Zealand; Maurice Benayoun, Université Paris 1, Paris, France

 

MM '08: International Multimedia Conference, Proceeding of the 16th ACM international conference on Multimedia, SESSION: Art track short papers, Pages 945-948, Vancouver, British Columbia, Canada, October 2008

 

Abstract

“In this paper, we describe an Augmented Reality Art installation, which reacts to user behaviour using Multimodal analysis of affective signals. The installation features a virtual tree, whose growth is influenced by the perceived emotional response from spectators. The system implements a 'magic mirror' paradigm (using a large-screen display or projection system) and is based on the ARToolkit with extended representations for scene graphs. The system relies on a PAD dimensional model of affect to support the fusion of different affective modalities, while also supporting the representation of affective responses that relate to aesthetic impressions. The influence of affective input on the visual component is achieved by mapping affective data to an L-System governing virtual tree behaviour. We have performed an early evaluation of the system, both from the technical perspective and in terms of user experience. Post-hoc questionnaires were generally consistent with data from multimodal affective processing, and users rated the overall experience as positive and enjoyable, regardless of how proactive they were in their interaction with the installation.”

 

Full text available: Pdf ACM (713.97 KB)

http://portal.acm.org/ft_gateway.cfm?id=1459529&type=pdf&coll=GUIDE&dl=GUIDE&CFID=45411567&CFTOKEN=71712214

 

*********************************************************

E-Tree: affective interactive art

Stephen W. Gilroy, Marc Cavazza, Remi Chaignon

ACE '08: Proceedings of the 2008 International Conference on Advances in Computer Entertainment Technology, December 2008

Publisher: ACM

 

Abstract

“As part of the CALLAS project [1], our aim is to explore multimodal interaction in an Arts and Entertainment context. This creative showcase is an Augmented Reality Art installation, which reacts to user behaviour using multimodal analysis of affective signals. The installation features a virtual tree, whose growth is influenced by the perceived emotional response from the spectators. The system implements a 'magic mirror' paradigm (using a large-screen display or projection system) and produces interactive graphics based on the ARToolkit [2, 3] with extended representations for scene graphs [4]. The system relies on a PAD dimensional model [5] to support the fusion of affective modalities, each input modality being represented as a PAD vector. A further advantage of the PAD model is that it supports the representation of affective responses that relate to aesthetic impressions. The influence of affective input on the visual component is achieved by mapping affective data to an L-System governing virtual tree behaviour. We have performed an early evaluation of the system, both from the technical perspective and in terms of user experience.”

 

Full text available: Pdf ACM (2.35 MB)

http://portal.acm.org/ft_gateway.cfm?id=1501843&type=pdf&coll=GUIDE&dl=GUIDE&CFID=45411567&CFTOKEN=71712214

 

*********************************************************

An affective model of user experience for interactive art

Stephen W. Gilroy, University of Teesside, Middlesbrough, UK; Marc Cavazza, University of Teesside, Middlesbrough, UK; Rémi Chaignon, University of Teesside, Middlesbrough, UK; Satu-Marja Mäkelä, VTT Electronics, Finland; Markus Niranen, VTT Electronics, Finland; Elisabeth André, University of Augsburg, Germany; Thurid Vogt, University of Augsburg, Germany; Jérôme Urbain, Faculté Polytechnique de Mons, Belgium; Hartmut Seichter, HITLabNZ, New Zealand; Mark Billinghurst, HITLabNZ, New Zealand; Maurice Benayoun, Citu, Université Paris 1 Panthéon-Sorbonne

ACE '08: Proceedings of the 2008 International Conference on Advances in Computer Entertainment Technology,  December 2008

Publisher: ACM

 

Abstract

“The development of Affective Interface technologies makes it possible to envision a new generation of Digital Arts and Entertainment applications, in which interaction will be based directly on the analysis of user experience. In this paper, we describe an approach to the development of Multimodal Affective Interfaces that supports real-time analysis of user experience as part of an Augmented Reality Art installation. The system relies on a PAD dimensional model of emotion to support the fusion of affective modalities, each input modality being represented as a PAD vector. A further advantage of the PAD model is that it can support a representation of affective responses that relate to aesthetic impressions.”

 

Full text available: Pdf ACM (1.70 MB)

http://portal.acm.org/ft_gateway.cfm?id=1501774&type=pdf&coll=GUIDE&dl=GUIDE&CFID=45411567&CFTOKEN=71712214
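
All three E-Tree papers rest on the same mechanism: each input modality yields a PAD (pleasure, arousal, dominance) vector, the vectors are fused, and the fused affect steers the L-system that grows the virtual tree. A bare-bones sketch of that pipeline; the fusion weights and the growth mapping are invented for illustration:

    # Sketch: fuse per-modality PAD vectors, then map affect to L-system growth.
    # Fusion weights and the growth mapping are invented placeholders.

    def fuse_pad(modalities):
        """modalities: {name: ((p, a, d), weight)} -> weighted-average PAD vector."""
        total = sum(w for _, w in modalities.values())
        return tuple(sum(v[i] * w for v, w in modalities.values()) / total
                     for i in range(3))

    def lsystem_growth(pad):
        """Map pleasure to branching angle, arousal to growth rate (toy rule)."""
        p, a, _ = pad
        return {"branch_angle": 20 + 20 * p, "steps_per_tick": max(1, round(3 * a))}

    inputs = {"voice": ((0.6, 0.8, 0.1), 2.0),            # emotional speech analysis
              "audience_motion": ((0.2, 0.5, 0.0), 1.0)}
    print(lsystem_growth(fuse_pad(inputs)))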

 

*******************************************************************

 

Do computers understand art?

Plataforma SINC, Wednesday, December 23, 2009

“A team of researchers from the University of Girona and the Max Planck Institute in Germany has shown that some mathematical algorithms provide clues about the artistic style of a painting. The composition of colours or certain aesthetic measurements can already be quantified by a computer, but machines are still far from being able to interpret art in the way that people do.”

http://www.alphagalileo.org/ViewItem.aspx?ItemId=65264&CultureCode=en

 

 

*******************************************************************

 

The ACM Looks at Sentiment Analysis

Posted by Seth Grimes, Intelligent Enterprise, Thursday, April 2, 2009, 12:07 PM

“Our Sentiments, Exactly, in the April issue of the Communications of the ACM, tackles sentiment analysis.”

http://www.intelligententerprise.com/blog/archives/2009/04/the_acm_looks_a.html

 

Our Sentiments, Exactly

Alex Wright, Communications of the ACM, Vol. 52 No. 4, Pages 14-15

“With sentiment analysis algorithms, companies can identify and assess the wide variety of opinions found online and create computational models of human opinion.”

http://cacm.acm.org/magazines/2009/4/22946-our-sentiments-exactly/fulltext

 

 

*******************************************************************

Affective Robotics / Robótica Afectiva

 

*************************************************************************

"Daisy, Daisy, give me your answer do!": switching off a robot

 Christoph Bartneck, Michel van der Hoek, Omar Mubin, Abdullah Al Mahmud

March 2007

HRI '07: Proceedings of the ACM/IEEE international conference on Human-robot interaction

 

Abstract

“Robots can exhibit life like behavior, but are according to traditional definitions not alive. Current robot users are confronted with an ambiguous entity and it is important to understand the users perception of these robots. This study analyses if a robot's intelligence and its agreeableness influence its perceived animacy. The robot's animacy was measured, amongst other measurements, by the users' hesitation to switch it off. The results show that participants hesitated three times as long to switch off an agreeable and intelligent robot as compared to a non agreeable and unintelligent robot. The robots' intelligence had a significant influence on its perceived animacy. Our results suggest that interactive robots should be intelligent and exhibit an agreeable attitude to maximize its perceived animacy.”

 

Full text available: Pdf ACM

http://portal.acm.org/ft_gateway.cfm?id=1228746&type=pdf&coll=ACM&dl=ACM&CFID=45819817&CFTOKEN=43501540

 

***********************************

 

An Emotional Cat Robot

Robots might behave more efficiently if they had emotions.

Duncan Graham-Rowe  Technology Review Published by MIT, Thursday, July 26, 2007

http://www.technologyreview.com/Infotech/19102/?a=f

 

Emotion robots learn from people

BBC NEWS, Friday, 23 February 2007

http://news.bbc.co.uk/2/hi/technology/6389105.stm

 

Robots with rhythm could rock your world

Celeste Biever, NewScientist.com news service  22 March 2007

http://technology.newscientist.com/channel/tech/dn11434-robots-with-rhythm-could-rock-your-world.html

 

 

The Rise of the Emotional Robot

Paul Marks, Amsterdam, New Scientist No. 2650, P. 24, 05 April 2008

http://technology.newscientist.com/channel/tech/mg19826506.100-the-rise-of-the-emotional-robot.html

 

The rise of the emotional robot

http://uk.youtube.com/watch?v=C_O6sTaS0nc

 

Robots, our new friends electric?

Alok Jha, Guardian Unlimited (UK), 14 April 2008

EU plan for first machines with personalities

http://www.guardian.co.uk/science/2008/apr/14/sciencenews.news

 

How to Make (Robot) Friends and Influence People

Technology Review, Tuesday, May 05, 2009

“The world's first robot with its own Facebook page is part of an ambitious experiment to build long-term meaningful relationships with humans.

 …But building a meaningful relationship with a robot may soon get easier if Nikolaos Mavridis and pals from the Interactive Robots and Media Lab at the United Arab Emirates University have anything to do with it. They say the key to building a longer, meaningful relationship with a robot is to become embedded in the same network of shared friends and together build a pool of shared memories that you can both refer to. Just like a real friend.”

http://www.technologyreview.com/blog/arxiv/23480/

 

 

Emotional machines

ICT Results, April 8, 2008

http://cordis.europa.eu/ictresults/index.cfm/section/news/tpl/article/BrowsingType/Features/ID/89652

 

MIT Nexi Robot Expresses Emotions

Bill Christensen ,Technovelgy.com April 6, 2008

http://www.technovelgy.com/ct/Science-Fiction-News.asp?NewsNum=1562

 

 

Emotional robots in the spotlight

ICT Results, July 17, 2008

http://cordis.europa.eu/ictresults/index.cfm/section/news/tpl/article/id/89893

 

Robo-relationships are virtually assured: British experts

Agence France Presse, Jul 30, 2008


 

If you're happy, the robot knows it

Celeste Biever

New Scientist Tech, 22 March 2007

http://technology.newscientist.com/article/mg19325966.500-if-youre-happy-the-robot-knows-it.html

 

 

Machine rage is dead ... long live emotional computing

Consoles and robots detect and respond to users' feelings

Robin McKie, science editor

The Observer, Sunday 11 April 2004

http://www.guardian.co.uk/uk/2004/apr/11/robinmckie.theobserver

 

Crean un robot que es capaz de "expresar sentimientos" (They build a robot capable of "expressing feelings")

Sunday, July 12, 2009

Clarín.com

“Developed at the University of California. To teach it gestures, it is placed in front of a mirror; it then repeats them by imitation. It was designed with the face of Albert Einstein.”

http://www.clarin.com/diario/2009/07/12/um/m-01957024.htm

 

*******************************************************************

 

'I'm Listening' - Conversations With Computers

Lisa Mitchell, Queen's University Belfast, 16 April 2008

http://www.qub.ac.uk/home/TheUniversity/GeneralServices/News/PressReleases/#d.en.96597

 

Communication with Emotional Body Language

March 21, 2007, (idw) Universitätsklinikum Tübingen

http://www.uni-protokolle.de/nachrichten/id/134060/

 

Mixed Feelings 

See with your tongue. Navigate with your skin. Fly by the seat of your pants (literally). How researchers can tap the plasticity of the brain to hack our 5 senses — and build a few new ones.

By Sunny Bains, WIRED Issue 15.04 - March 2007

http://www.wired.com/wired/archive/15.04/esp.html

 

 

*************************************************************************

Affective Systems and Education / Sistemas afectivos y educación

 

Evaluating the affective tactics of an emotional pedagogical agent

Patrícia Augustin Jaques, Matheus Lehmann, Sylvie Pesty

SAC '09: Proceedings of the 2009 ACM symposium on Applied Computing, March 2009

 

Abstract

“This paper presents a quantitative (with students of a local middle school) and a qualitative evaluation (with teachers) of a lifelike and emotional pedagogical agent, called Pat. Pat has the goal of inferring students' emotions and applying affective ...”

 

Full text available: Pdf ACM (327.14 KB)

http://portal.acm.org/ft_gateway.cfm?id=1529304&type=pdf&coll=ACM&dl=ACM&CFID=45405196&CFTOKEN=33693310

 

 

*************************************************************************

Affective Systems and Voice / Sistemas afectivos y voz

 

*********************************************************

Automatic Recognition of Emotions from Speech: A Review of the Literature and Recommendations for Practical Realisation

Thurid Vogt, Elisabeth André, Johannes Wagner    

Affect and Emotion in Human-Computer Interaction, June 2008

Publisher: Springer-Verlag

 

Abstract

“In this article we give guidelines on how to address the major technical challenges of automatic emotion recognition from speech in human-computer interfaces, which include audio segmentation to find appropriate units for emotions, extraction of emotion relevant features, classification of emotions, and training databases with emotional speech. Research so far has mostly dealt with offline evaluation of vocal emotions, and online processing has hardly been addressed. Online processing is, however, a necessary prerequisite for the realization of human-computer interfaces that analyze and respond to the user's emotions while he or she is interacting with an application. By means of a sample application, we demonstrate how the challenges arising from online processing may be solved. The overall objective of the paper is to help readers to assess the feasibility of human-computer interfaces that are sensitive to the user's emotional voice and to provide them with guidelines of how to technically realize such interfaces.”
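
The pipeline named in this abstract (audio segmentation, extraction of emotion-relevant features, classification) can be sketched as a skeleton like the one below. Energy and zero-crossings stand in for real acoustic features, and the final rule stands in for a trained classifier:

    # Skeleton of the segment -> features -> classify pipeline for vocal emotion.
    # The features and the decision rule are placeholders, not the paper's.
    import numpy as np

    def segments(signal, rate, seconds=1.0):
        """Split a 1-D signal into fixed-length analysis units."""
        step = int(rate * seconds)
        for start in range(0, len(signal) - step + 1, step):
            yield signal[start:start + step]

    def features(seg):
        energy = float(np.mean(seg ** 2))
        zero_crossings = int(np.sum(np.abs(np.diff(np.sign(seg)))) // 2)
        return energy, zero_crossings

    def classify(energy, zero_crossings):
        return "aroused" if energy > 0.1 else "calm"   # placeholder decision rule

    rate = 16000
    signal = np.random.default_rng(1).standard_normal(rate * 3) * 0.2  # fake audio
    for seg in segments(signal, rate):
        print(classify(*features(seg)))

Because each unit is processed as it arrives, the same structure works for the online case the survey emphasizes: feed the loop from a microphone buffer instead of a stored array.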

 

*********************************************************

Comparing emotions using acoustics and human perceptual dimensions

Keshi Dai, Harriet Fell, Joel MacAuslan

CHI EA '09: Proceedings of the 27th international conference extended abstracts on Human factors in computing systems, April 2009

Publisher: ACM

 

Abstract

“Understanding the difference between emotions based on acoustic features is important for computer recognition and classification of emotions. We conducted a study of human perception of six emotions based on three perceptual dimensions and compared ...”

 

Full text available: Pdf ACM (739.93 KB)

http://portal.acm.org/ft_gateway.cfm?id=1520483&type=pdf&coll=ACM&dl=ACM&CFID=45411567&CFTOKEN=71712214

 

*********************************************************

EmoVoice -- A Framework for Online Recognition of Emotions from Voice

Thurid Vogt, Elisabeth André, Nikolaus Bee

PIT '08: Proceedings of the 4th IEEE tutorial and research workshop on Perception and Interactive Technologies for Speech-Based Systems: Perception in Multimodal Dialogue Systems

Publisher: Springer-Verlag, June 2008

 

Abstract

“We present EmoVoice, a framework for emotional speech corpus and classifier creation and for offline as well as real-time online speech emotion recognition. The framework is intended to be used by non-experts and therefore comes with an interface to create an own personal or application specific emotion recogniser. Furthermore, we describe some applications and prototypes that already use our framework to track online emotional user states from voice information.”
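
To make the online aspect concrete, the following sketch (plain Python, not the EmoVoice API) classifies a live audio stream with a sliding window; classifier and extract_features stand for any trained model and any feature extractor that accepts raw samples, for instance variants of the sketch above.

import numpy as np

def online_emotion_stream(chunks, classifier, extract_features,
                          sr=16000, window_s=2.0, hop_s=0.5):
    # chunks: iterable of 1-D numpy sample arrays (microphone callbacks,
    # socket reads, ...). A ring buffer keeps the newest window_s seconds;
    # every hop_s seconds the current window is classified and a label is
    # yielded, so the caller can react while the user is still speaking.
    win, hop = int(window_s * sr), int(hop_s * sr)
    buf = np.zeros(0, dtype=np.float32)
    since_last = 0
    for chunk in chunks:
        buf = np.concatenate([buf, chunk])[-win:]   # keep only newest window
        since_last += len(chunk)
        if len(buf) == win and since_last >= hop:
            since_last = 0
            feats = extract_features(buf).reshape(1, -1)
            yield classifier.predict(feats)[0]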

 

Pdf

http://mm-werkstatt.informatik.uni-augsburg.de/files/publications/211/Vogtetal-PIT08.pdf

 

*********************************************************

Does Computer-Generated Speech Manifest Personality? An Experimental Test of Similarity-Attraction

Clifford Nass,  Department of Communication, Stanford University, Stanford, CA

Kwan Min Lee,  Department of Communication, Stanford University, Stanford, CA

Conference on Human Factors in Computing Systems
Proceedings of the SIGCHI conference on Human factors in computing systems

The Hague, The Netherlands, pp. 329–336, 2000

 

Abstract

“This study examines whether people would interpret and respond to paralinguistic personality cues in computer-generated speech in the same way as they do human speech. Participants used a book-buying website and heard five book reviews in a 2 (synthesized voice personality: extrovert vs. introvert) by 2 (participant personality: extrovert vs. introvert) balanced, between-subjects experiment. Participants accurately recognized personality cues in TTS and showed strong similarity-attraction effects. Although the content was the same for all participants, when the personality of the computer voice matched their own personality: 1) participants regarded the computer voice as more attractive, credible, and informative; 2) the book review was evaluated more positively; 3) the reviewer was more attractive and credible; and 4) participants were more likely to buy the book. Match of user voice characteristics with TTS had no effect, confirming the social nature of the interaction. We discuss implications for HCI theory and design.”

 

Full text available: Pdf ACM (869 KB)

http://portal.acm.org/ft_gateway.cfm?id=332452&type=pdf&coll=ACM&dl=ACM&CFID=46509235&CFTOKEN=94853227

 

 

*************************************************************************

Affective systems and linguistics

 

Improvising linguistic style: social and affective bases for agent personality

Marilyn A. Walker, Janet E. Cahn, Stephen J. Whittaker

AGENTS '97: Proceedings of the first international conference on Autonomous agents, February 1997, Publisher: ACM

 

Full text available: Pdf ACM

http://portal.acm.org/ft_gateway.cfm?id=267680&type=pdf&coll=GUIDE&dl=GUIDE&CFID=46388861&CFTOKEN=29620316

 

*********************************************************

A survey on sentiment detection of reviews

Huifeng Tang, Songbo Tan, Xueqi Cheng

Expert Systems with Applications: An International Journal , Volume 36 Issue 7

Publisher: Pergamon Press, Inc., September 2009

 

Abstract

“The sentiment detection of texts has been witnessed a booming interest in recent years, due to the increased availability of online reviews in digital form and the ensuing need to organize them. Till to now, there are mainly four different problems predominating ...”

 

PDF

http://www.msit2005.mut.ac.th/msit_media/1_2552/ITEC0950/Materials/2009071172143dC.pdf

 

*********************************************************

Learning to identify emotions in text

Carlo Strapparava, Rada Mihalcea

March 2008

 

SAC '08: Proceedings of the 2008 ACM symposium on Applied computing

Publisher: ACM

 

Abstract

“This paper describes experiments concerned with the automatic analysis of emotions in text. We describe the construction of a large data set annotated for six basic emotions: ANGER, DISGUST, FEAR, JOY, SADNESS and SURPRISE, and we propose and evaluate several knowledge-based and corpus-based methods for the automatic identification of these emotions in text.”
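
A minimal corpus-based baseline in the spirit of the abstract can be put together with scikit-learn. The six labels are those named above; the tiny inline corpus is an invented placeholder, not the authors' annotated data set.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# The six basic emotions annotated in the paper's data set.
EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]

train_texts = ["this is outrageous", "what a wonderful day",
               "I can't believe it happened", "I miss her so much"]
train_labels = ["anger", "joy", "surprise", "sadness"]

# Bag-of-words (unigrams and bigrams) with a naive Bayes classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(train_texts, train_labels)
print(clf.predict(["what a wonderful surprise"]))   # -> ['joy'] on this toy corpus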

 

Full text available: Pdf ACM (121.08 KB)

http://portal.acm.org/ft_gateway.cfm?id=1364052&type=pdf&coll=ACM&dl=ACM&CFID=46388861&CFTOKEN=29620316

 

*********************************************************

Visualizing the affective structure of a text document

Hugo Liu, Ted Selker, Henry Lieberman

 

CHI '03: CHI '03 extended abstracts on Human factors in computing systems, April 2003

Publisher: ACM

 

Abstract

“This paper introduces an approach for graphically visualizing the affective structure of a text document. A document is first affectively analyzed using a unique textual affect sensing engine, which leverages commonsense knowledge to classify text more ...”
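
The general idea can be sketched as follows. The paper's commonsense-based affect sensing engine is replaced here by a toy valence lexicon, so only the visualization step is faithful in spirit: each sentence gets a score in [-1, 1] and the document is drawn as a colored strip.

import matplotlib.pyplot as plt
import numpy as np

LEXICON = {"love": 1.0, "happy": 0.8, "fine": 0.3,
           "sad": -0.7, "hate": -1.0, "terrible": -0.9}

def sentence_valence(sentence):
    words = [w.strip(".,!?;:") for w in sentence.lower().split()]
    hits = [LEXICON[w] for w in words if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

def plot_affective_structure(sentences):
    scores = [sentence_valence(s) for s in sentences]
    fig, ax = plt.subplots(figsize=(8, 1))
    # One colored cell per sentence: red = negative, green = positive.
    ax.imshow(np.array([scores]), cmap="RdYlGn", vmin=-1, vmax=1, aspect="auto")
    ax.set_yticks([])
    ax.set_xlabel("sentence index")
    plt.show()

plot_affective_structure(
    ["I love this city.", "The weather was terrible.", "Still, I was happy."])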

 

Full text available: Pdf ACM

http://portal.acm.org/ft_gateway.cfm?id=765961&type=pdf&coll=ACM&dl=ACM&CFID=45411567&CFTOKEN=71712214

 

*********************************************************

A model of textual affect sensing using real-world knowledge

Hugo Liu, Henry Lieberman, Ted Selker

IUI '03: Proceedings of the 8th international conference on Intelligent user interfaces, January 2003

Publisher: ACM

 

Abstract

“This paper presents a novel way for assessing the affective qualities of natural language and a scenario for its use. Previous approaches to textual affect sensing have employed keyword spotting, lexical affinity, statistical methods, and hand-crafted ...”

 

Full text available: Pdf ACM (234.54 KB)

http://portal.acm.org/ft_gateway.cfm?id=604067&type=pdf&coll=ACM&dl=ACM&CFID=45411567&CFTOKEN=71712214

 

*********************************************************

Emotions from text: machine learning for text-based emotion prediction

Cecilia Ovesdotter Alm, Dan Roth, Richard Sproat

HLT '05: Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, October 2005

Publisher: Association for Computational Linguistics

 

Abstract

“In addition to information, text contains attitudinal, and more specifically, emotional content. This paper explores the text-based emotion prediction problem empirically, using supervised machine learning with the SNoW learning architecture. ...”

 

Full text available: Pdf ACM

http://portal.acm.org/ft_gateway.cfm?id=1220648&type=pdf&coll=ACM&dl=ACM&CFID=45411567&CFTOKEN=71712214

 

**************************************************

Emotion recognition from text using semantic labels and separable mixture models

Chung-Hsien Wu, Ze-Jing Chuang, Yu-Chung Lin

ACM Transactions on Asian Language Information Processing (TALIP), Volume 5, Issue 2, June 2006

 

Abstract

“This study presents a novel approach to automatic emotion recognition from text. First, emotion generation rules (EGRs) are manually deduced from psychology to represent the conditions for generating emotion. Based on the EGRs, the emotional state of each sentence can be represented as a sequence of semantic labels (SLs) and attributes (ATTs); SLs are defined as the domain-independent features, while ATTs are domain-dependent. The emotion association rules (EARs) represented by SLs and ATTs for each emotion are automatically derived from the sentences in an emotional text corpus using the a priori algorithm. Finally, a separable mixture model (SMM) is adopted to estimate the similarity between an input sentence and the EARs of each emotional state. Since some features defined in this approach are domain-dependent, a dialog system focusing on the students' daily expressions is constructed, and only three emotional states, happy, unhappy, and neutral, are considered for performance evaluation. According to the results of the experiments, given the domain corpus, the proposed approach is promising, and easily ported into other domains.”
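
As a hedged illustration of the association-rule step described above, the sketch below reduces each sentence to a set of semantic labels and finds frequent label sets for one emotion class with a small Apriori-style count. The label names and the support threshold are invented for illustration.

from itertools import combinations
from collections import Counter

def frequent_label_sets(sentences_sl, min_support=2, max_size=2):
    """sentences_sl: list of sets of semantic labels for one emotion class."""
    counts = Counter()
    for labels in sentences_sl:
        for size in range(1, max_size + 1):
            for combo in combinations(sorted(labels), size):
                counts[combo] += 1
    # Keep only label sets that occur often enough to support a rule.
    return {combo: c for combo, c in counts.items() if c >= min_support}

happy_sentences = [{"agent:self", "event:gift"},
                   {"agent:self", "event:praise"},
                   {"agent:self", "event:gift", "time:today"}]
print(frequent_label_sets(happy_sentences))
# {('agent:self',): 3, ('event:gift',): 2, ('agent:self', 'event:gift'): 2}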

 

Full text available: Pdf ACM (396 KB)

http://portal.acm.org/ft_gateway.cfm?id=1165259&type=pdf&coll=ACM&dl=ACM&CFID=45262200&CFTOKEN=77509227

 

 

*************************************************************************

Affective systems and expressions

 

Virtual Emotion to Expression: A Comprehensive Dynamic Emotion Model to Facial Expression Generation using the MPEG-4 Standard

Paula Rodrigues, Informatics Department, PUC-Rio, Brazil.

Asla Sá, TecGraf, PUC-Rio, Brazil.

Luiz Velho, IMPA - Instituto de Matematica Pura e Aplicada, Brazil.

Computer Animation: chapter 6. Nova Science Publishers, November 2009.

Abstract

“In this paper we present a framework for generating dynamic facial expressions synchronized with speech, rendered using a tridimensional realistic face. Dynamic facial expressions are those temporal-based facial expressions semantically related with emotions, speech and affective inputs that can modify a facial animation behavior.

The framework is composed by an emotion model for speech virtual actors, named VeeM (Virtual emotion-to-expression Model), which is based on a revision of the emotional wheel of Plutchik model. The VeeM introduces the emotional hypercube concept in the R^4 canonical space to combine pure emotions and create new derived emotions.

The VeeM model implementation uses the MPEG-4 face standard through a innovative tool named DynaFeX (Dynamic Facial eXpression). The DynaFeX is an authoring and player facial animation tool, where a speech processing is realized to allow the phoneme and viseme synchronization. The tool allows both the definition and refinement of emotions for each frame, or group of frames, as the facial animation edition using a high-level approach based on animation scripts.

The tool player controls the animation presentation synchronizing the speech and emotional features with the virtual character performance.

Finally, DynaFeX is built over a tridimensional polygonal mesh, compliant with MPEG-4 facial animation standard, what favors tool interoperability with other facial animation systems.”
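
One possible reading of the "emotional hypercube in R^4" (an assumption for illustration, not the published VeeM definitions): Plutchik's eight basic emotions form four opposing pairs, so an emotional state can be coded as a point in [-1, 1]^4 with one axis per pair, and derived emotions obtained as clipped combinations of pure ones.

import numpy as np

AXES = ["joy-sadness", "trust-disgust", "fear-anger", "surprise-anticipation"]

PURE = {                      # corners/axes of the assumed hypercube
    "joy":     np.array([ 1.0, 0.0, 0.0, 0.0]),
    "sadness": np.array([-1.0, 0.0, 0.0, 0.0]),
    "trust":   np.array([ 0.0, 1.0, 0.0, 0.0]),
    "fear":    np.array([ 0.0, 0.0, 1.0, 0.0]),
    # remaining pure emotions omitted for brevity
}

def blend(weights):
    """Combine pure emotions into a derived state, clipped to the hypercube."""
    v = sum(w * PURE[name] for name, w in weights.items())
    return np.clip(v, -1.0, 1.0)

# e.g. "love" is often described as a combination of joy and trust:
print(dict(zip(AXES, blend({"joy": 0.7, "trust": 0.7}))))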

 

PDF

http://www.visgraf.impa.br/Data/RefBib/PS_PDF/nova09/novaPublisher_RoSV.pdf

 

*********************************************************

Recognising emotions in human and synthetic faces: the role of the upper and lower parts of the face

Erica Costantini, Fabio Pianesi, Michela Prete

January 2005

 

IUI '05: Proceedings of the 10th international conference on Intelligent user interfaces

Publisher: ACM

 

Abstract

“Embodied Conversational Agents that can express emotions are a popular topic. Yet, despite recent attempts, reliable methods are still lacking to assess the quality of facial displays. This paper extends and refines the work in [6], focusing on the role ...”

 

Full text available: Pdf ACM (244.42 KB)

http://portal.acm.org/ft_gateway.cfm?id=1040846&type=pdf&coll=ACM&dl=ACM&CFID=46388861&CFTOKEN=29620316

 

 

*************************************************************************