Non-verbal communication, such as facial expressions and hand gestures, has a drastic effect on how a conversation is understood and on its social impact; numerous research projects in psychology and sociology have shown this. However, the most common form of communication today is text, e.g., SMS, chat rooms, WhatsApp, etc. These lack almost all the non-verbal components of face-to-face communication, such as tone of voice, gestures and facial expressions. While people use texting, to some extent, precisely to reduce the complexity of communication, the medium could be enhanced if those components were present. Video chats solve most of these problems, but texting will not go away once video streaming becomes more accessible: there is the allure of not actually being seen on the other side.
I suggest a research project to investigate the incorporation of non-verbal communication into text-based media. Emoticons were obviously the first step, as they nicely substitute for facial expressions: :) stands in for a smile, while more complex emoticons can stand in for other expressions. I suggest two axes of extension, namely, including gestures and automating the inclusion of emoticons.
How to include gestures? I believe a new form of emoticon can be introduced. It has been shown that gestures actually relate to physical reality: gesturing the word "all" encompasses a large space, and gesturing "never mind" performs a discarding motion. I propose creating hand-based animations for text-based media, very similar to complex emoticons, but now, instead of a face substituting for facial expressions, there will be hands substituting for gestures. One can create many such gesturecons, covering all kinds of meanings; see http://en.wikipedia.org/wiki/List_of_gestures for more examples. The research question is the applicability and usage of these gesturecons by chatters: will they use them? How much? In which situations? What are the favorite gesturecons?
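To make the idea concrete, here is a minimal sketch of how a chat client might expand gesturecon shortcodes into hand animations. Everything in it is an assumption for illustration: the shortcodes, the animation file names, and the bracketed placeholder format are made up, not an existing standard.

GESTURECONS = {
    "(all)": "arms_sweep_wide.gif",   # "all": a broad, encompassing sweep
    "(nvm)": "hand_discard.gif",      # "never mind": a discarding flick
    "(shrug)": "shoulder_shrug.gif",
}

def expand_gesturecons(message: str) -> str:
    """Replace each known shortcode with a placeholder for its animation."""
    for code, animation in GESTURECONS.items():
        message = message.replace(code, f"[gesture:{animation}]")
    return message

print(expand_gesturecons("I told you (all) about it... oh, (nvm)"))
# -> I told you [gesture:arms_sweep_wide.gif] about it... oh, [gesture:hand_discard.gif]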
The next extension to emoticons is their automatic inclusion. Nowadays, facial-recognition hardware and software are readily available, e.g., the Kinect, and there are known algorithms to track the face and recognize facial expressions such as a smile or a laugh. I suggest integrating this automatic recognition into text-based media such as Facebook and WhatsApp. In other words, when someone sends you a funny picture and you actually laugh, the system will detect it and automatically send an "LOL". Research questions: will people like it, or do they prefer to control their emoticons? Do people send more "fake" emoticons than real ones?
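As a rough sketch of that pipeline: an expression detector feeds a mapping from expressions to emoticons, and the chat client sends the match on the user's behalf. Note that detect_expression below is a hypothetical stand-in (faked with random so the sketch runs end to end), not a real Kinect SDK call.

import random  # stands in for a real camera/detector in this sketch

EXPRESSION_TO_EMOTICON = {
    "smile": ":)",
    "laugh": "LOL",
    "frown": ":(",
}

def detect_expression() -> str:
    # Assumption: a real implementation would call facial-recognition
    # software here; we fake a detection so the sketch is self-contained.
    return random.choice(["smile", "laugh", "frown", "neutral"])

def auto_react(send):
    """Poll the detector once and send the matching emoticon, if any."""
    expression = detect_expression()
    emoticon = EXPRESSION_TO_EMOTICON.get(expression)
    if emoticon:
        send(emoticon)

auto_react(send=print)  # e.g. prints "LOL" when a laugh is detected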
Gestures are also readily detected: using the Kinect or similar devices, there are already algorithms out there to detect them. However, there is a crux. In a text-based medium, your hands are occupied typing and you can't really gesture anything. There are two approaches to this problem. The first is that once dictation becomes prevalent, so that you speak your text, you can gesture your gesturecons at the same time. The second is creating a whole new field of "typing gestures": e.g., when you lift your hands in exasperation, a gesturecon appears; when you crack your knuckles, the appropriate gesturecon is inserted; and so on.
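A toy sketch of that second approach, with poll_hand_event standing in for a real hand tracker (the event names and gesturecon codes are, again, invented for illustration):

TYPING_GESTURES = {
    "hands_raised": "(exasperation)",  # lifting your hands off the keyboard
    "knuckle_crack": "(crack)",        # cracking your knuckles
}

def poll_hand_event() -> str:
    # Assumption: a real version would read from a Kinect-like tracker;
    # here we return a canned event so the sketch is self-contained.
    return "hands_raised"

def insert_typing_gesture(draft: str) -> str:
    """Append the gesturecon matching the latest hand event, if any."""
    gesturecon = TYPING_GESTURES.get(poll_hand_event())
    return draft + " " + gesturecon if gesturecon else draft

print(insert_typing_gesture("why won't this compile"))
# -> why won't this compile (exasperation)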
Obviously, there is much more to be done in this project, but that's the fun of it, isn't it?