A Special Issue Proposal submitted to Journal of Ambient Intelligence and Humanized Computing (AIHC)
Special Issue on Affective Social Multimedia Computing for Ambient Intelligence
Affective social multimedia computing is an emerging research topic for both the affective computing and multimedia research communities. Understanding humans' affective and social states is highly useful for building ambient intelligence systems and achieving humanized computing. Social multimedia is fundamentally changing how we communicate, interact, and collaborate with other people in our daily lives. Compared with well-organized broadcast news and professionally produced videos such as commercials, TV shows, and movies, social multimedia poses great challenges to research communities. Social multimedia contains rich affective information, and its effective extraction can greatly benefit social multimedia computing (e.g., processing, indexing, retrieval, and understanding). Although much progress has been made in traditional multimedia research on content analysis, indexing, and retrieval based on subjective concepts such as emotion, aesthetics, and preference, affective social multimedia computing remains a new research area. It aims to process affective information from social multimedia. For massive and heterogeneous social media data, the research requires a multidisciplinary understanding of content and perceptual cues from social multimedia. From the multimedia perspective, the research relies on theoretical and technological findings in affective computing, machine learning, pattern recognition, signal/multimedia processing, computer vision, speech processing, and behavioral and social psychology. Affective analysis of social multimedia is attracting growing attention from industry and businesses that provide social networking sites and content-sharing services, and that distribute and host the media. This special issue focuses on the analysis of affective signals in interaction and social multimedia (e.g., Twitter, WeChat, Weibo, YouTube, Facebook, etc.).
This special issue will mainly cover selected high-quality papers (with proper extensions) from the 3rd International Workshop on Affective Social Multimedia Computing (ASMMC 2017), which will be held as a satellite workshop of Interspeech 2017 on August 25, 2017 in Stockholm, Sweden (workshop website: http://www.npu-aslp.org/asmmc2017/ ). Other submissions with original/unpublished research are also welcome. All papers will be peer reviewed and will be selected on the basis of their quality and relevance to the main theme of this special issue.
The special issue seeks contributions on various aspects of affective computing in interaction and social multimedia, covering related theory, methodology, algorithms, techniques, and applications. Topics of interest include, but are not limited to:
- Affective human-machine interaction or human-human interaction
- Affective/Emotional content analysis of images, videos, music, metadata (text, symbols, etc.)
- Affective indexing, ranking, and retrieval on big social media data
- Affective computing in social multimedia by multimodal integration (facial expression, gesture, posture, speech, text/language)
- Emotional implicit tagging and interactive systems
- User interests and behavior modeling in social multimedia
- Video and image summarization based on affect
- Affective analysis of social media and harvesting the affective response of crowds
- Affective generation in social multimedia, expressive text-to-speech and expressive language translation
- Applications of affective social multimedia computing
3. Tentative Schedule
Submissions due: Oct 31, 2017
Notification of the first-round review: Dec 31, 2017
Final acceptance notification: Feb 15, 2018
Final manuscript due: August 31, 2018
Publication date: Autumn 2018 (tentative)
4. Guest Editors
Dong-Yan Huang (Corresponding Guest Editor)
Research Scientist, Institute for Infocomm Research (I2R), A*STAR, Singapore
Email: firstname.lastname@example.org
Jie Yang
Fellow, IEEE
Division of Information and Intelligent Systems, National Science Foundation (NSF), USA
5. Short Bio of the guest editors
Dong-Yan HUANG received the B.Sc. degree in control and information engineering and the M.Sc. degree in electrical engineering from Xi'an Jiaotong University, Xi'an, China, in 1985 and 1988, respectively, and the Ph.D. degree in Système Physique et Métrologie-Communication & Electronique from the Conservatoire National des Arts et Métiers (CNAM), Paris, France, in 1996. In December 1996, she began her postdoctoral research work on low-delay high-quality audio and speech codec design at the UFR de Mathématiques et Informatique, Université René Descartes, Paris V. She is a Research Scientist with the Institute for Infocomm Research (I2R), Singapore. Before joining I2R in Dec. 2002, she was a Senior Research Engineer with the Institute of Microelectronics, Singapore, from Dec. 1997 to Dec. 2002. Her research interests include machine learning, pattern recognition, voice transformation, music information retrieval, speech/singing evaluation and synthesis, classification of paralinguistic information in speech and natural language, and interactive expressive dialog avatars. She has published over 70 papers in refereed international journals and conferences. A linear adaptive predictor she developed with members of the audio team has been adopted as a normative part of the MPEG-4 Audio Lossless Coding international standard. She led a research team to win first prize in the Sleepiness Sub-Challenge of the INTERSPEECH 2011 Speaker State Challenge. Dr. Huang is a Senior Member of the Institute of Electrical and Electronics Engineers (IEEE). She served as chairperson of the IEEE Singapore WIE (Women in Engineering) Affinity Group from 2005 to 2008. She has served as a reviewer for IEEE Signal Processing Letters, IEEE Transactions on Audio, Speech, and Language Processing, IEEE Transactions on Circuits and Systems II, and the EURASIP Journal on Advances in Multimedia, and on the program committees of more than 40 international conferences in these fields.
Jie YANG is currently a program director in the Division of Information and Intelligent Systems at the National Science Foundation (NSF), USA. Before joining NSF in 2008, he was a faculty member in the School of Computer Science at Carnegie Mellon University. His research interests include multimodal interaction, multimedia processing, computer vision, and pattern recognition. He has published more than 160 technical papers in various journals and international conferences. He has been involved in organizing various international conferences, serving as area chair, track chair, program co-chair, and general co-chair for IEEE ICME, ACM ICMI, and ACM Multimedia. He has also served as an Associate Editor for IEEE Transactions on Multimedia (2004-2008) and Machine Vision and Applications (current). He is a Fellow of the IEEE.