

Overview
Challenge

How might we help designers with limited drawing skills depict emotional expressions in an aesthetically pleasing and expressive way?

My Role

UX Designer, Programmer

Timeline

May 2018 - September 2018 (4 months)

Solution

We present EmoG, a deep learning-based design system that supports generating emotional expressions in storyboards. As the user draws a neutral face for an intended character, EmoG improves the aesthetics of the hand-drawn face and generates new sketches of the character with expressions. Users can interactively specify details such as the type, orientation, and intensity of each expression.

Team Members

Chaoran CHEN, Xin YAN

Mentors

Nan CAO, Yang SHI

Our Approach
01 Research

Literature Review

Competitive Analysis

User Interviews

02 Design

Dataset design

Algorithm design

UI design

03 Evaluate

User study

Questionnaires

My Contribution

During the research phase, I conducted a literature review on prevalent storyboard products and AI techniques for generating expressions.

 

During the design phase, I was mainly responsible for the algorithm and user interface design. I also took part in preparing the high-quality dataset of expressions.

 

Finally, I moderated several usability testing sessions and analyzed the user feedback. I then discussed the design implications of our study and potential future work with my teammates.

Outcome

Research

We started the project when we found that some designers had difficulty drawing aesthetically pleasing storyboards. After a quick round of desk research to settle on our idea, we conducted a series of interviews with four designers to inform the design and development of EmoG.

Desktop Research

To learn more about prior work on storyboards, we conducted a literature review and collected sample sketches from UX designers.

Our main findings include:

· Depicting characters’ emotional experience in storyboards in addition to their physical activities is effective for conveying envisioned user attitudes, expectations, and motivations towards the proposed design.

· Many designers use existing materials to create emotional expressions of characters in storyboards instead of drawing them from scratch.

· With the recent advances in deep learning, AI can assist humans in generating aesthetic and creative graphic designs. 

User Interviews

We conducted a series of interviews with four designers (two product designers, an interaction designer, and a service designer). In each interview, we asked the participants about:

(1) what workflows they would use for creating storyboards, 

(2) what challenges they would want to address when rendering emotions, 

(3) what potential functions they would add to the current tools to facilitate expression drawing. 

Drawing on the insights from these interviews, we identified the following design requirements for EmoG:

Design requirements for EmoG.


Design

We designed EmoG in three aspects: (1) a high-quality dataset of emotional expressions as the training set, (2) the expression-generation algorithm as the backend, and (3) the user interface and interactions.

Dataset

To align EmoG's output with the proposed design requirements, we invited a group of designers to draw a high-quality dataset as the training set. The design criteria of the dataset were derived from a sample survey of 48 designers, from whom we collected 42 storyboards. We analyzed three aspects of these storyboards:

(1) artistic styles and viewing angles that are commonly used to draw characters,

(2) emotions that are frequently used in storyboards,

(3) salient features that are used to depict specific emotions. 

(a) Three artistic styles and two of the three viewing angles of the sketches in our dataset; (b) the divine proportion of the human face.


Based on the analysis results, we constructed a dataset, FaceX, containing 5,240,088 face sketches over a period of three months. The dataset has the following properties:

Properties of the FaceX dataset.

In total, the designers drew 2,205 pairs of eyebrows, 2,016 pairs of eyes, 1,806 noses, and 2,058 mouths that satisfied our aesthetic criteria. We combined these hand-drawn features into different faces, placing them according to the divine proportion of the human head.
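To illustrate the idea, here is a minimal, hypothetical sketch of how hand-drawn features might be composited into faces. The canvas size, layout ratios, and function names are illustrative assumptions, not the actual EmoG pipeline.

```python
# A hypothetical compositing sketch; assumes each feature pool holds
# transparent RGBA drawings of a single facial feature.
import random
from PIL import Image

CANVAS = 512  # square canvas, in pixels (assumed size)

# Approximate vertical anchors loosely based on classical facial
# proportions: eyes about halfway down the head, nose about two thirds
# down, mouth near the lower sixth.
LAYOUT = {
    "eyebrows": 0.42,
    "eyes":     0.50,
    "nose":     0.67,
    "mouth":    0.82,
}

def compose_face(pools: dict) -> Image.Image:
    """Pick one drawing per feature pool and paste it at its anchor."""
    face = Image.new("RGBA", (CANVAS, CANVAS), "white")
    for part, ratio in LAYOUT.items():
        sketch = random.choice(pools[part])            # one hand-drawn sample
        x = (CANVAS - sketch.width) // 2               # horizontally centered
        y = int(CANVAS * ratio) - sketch.height // 2   # vertical anchor
        face.alpha_composite(sketch, (x, y))
    return face
```

Even a small fraction of the possible combinations of these eyebrows, eyes, noses, and mouths yields millions of distinct faces, which is consistent with the scale of FaceX.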

Algorithm

I further developed Sketch-RNN, Google's open-source algorithm, to synthesize emotional expressions as the backend of EmoG. The model consists of two components:

(1) an enhanced conditional variational autoencoder (VAE) framework that uses the semantic and sequential information of input strokes to support multi-class expression generation,

(2) a CNN-based autoencoder module that captures the positional information of input strokes to improve aesthetics in hand-drawn sketches. 

Schematic diagram of the deep generative model.

Compared with Sketch-RNN, our algorithm produces better results when generating sketches of emotional expressions. First, our training set has far more samples related to emotional expressions, which enables the algorithm to learn the salient features of different expressions. Second, our algorithm not only captures the sequential order of strokes but also considers the semantic and positional information of strokes in the input sketch. To learn more about the algorithm, see our paper, AI-Sketcher.
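For readers curious about the structure, below is a heavily simplified, illustrative sketch of the two-component model in PyTorch. All layer sizes, the conditioning scheme, and the names are assumptions made for exposition; the actual architecture, including its mixture-density output head, is described in the AI-Sketcher paper.

```python
# Illustrative sketch only; dimensions and fusion strategy are assumptions.
import torch
import torch.nn as nn

class PositionEncoder(nn.Module):
    """Encoder half of the CNN autoencoder: extracts a positional code
    from the rasterized input sketch (assumed 64x64, single channel)."""
    def __init__(self, code_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, code_dim),
        )

    def forward(self, raster):
        return self.net(raster)

class ConditionalSketchVAE(nn.Module):
    """Conditional VAE over stroke-5 sequences (dx, dy, pen states)."""
    def __init__(self, z_dim=128, hidden=256, n_classes=7, pos_dim=128):
        super().__init__()
        self.encoder_rnn = nn.LSTM(5, hidden, batch_first=True,
                                   bidirectional=True)
        self.to_mu = nn.Linear(2 * hidden, z_dim)
        self.to_logvar = nn.Linear(2 * hidden, z_dim)
        # The decoder is conditioned on z, the expression class, and the
        # positional code from the CNN module.
        self.decoder_rnn = nn.LSTM(5 + z_dim + n_classes + pos_dim,
                                   hidden, batch_first=True)
        # Simplified output head; the real model predicts the parameters
        # of a Gaussian mixture over the next stroke offset.
        self.out = nn.Linear(hidden, 5)

    def forward(self, strokes, cls_onehot, pos_code):
        _, (h, _) = self.encoder_rnn(strokes)        # h: (2, batch, hidden)
        h = torch.cat([h[0], h[1]], dim=-1)          # both directions
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        # Broadcast the conditioning vector across every time step.
        cond = torch.cat([z, cls_onehot, pos_code], dim=-1)
        cond = cond.unsqueeze(1).expand(-1, strokes.size(1), -1)
        out, _ = self.decoder_rnn(torch.cat([strokes, cond], dim=-1))
        return self.out(out), mu, logvar
```

Training such a model would presumably combine a stroke reconstruction loss with a KL-divergence term, as in standard VAE training.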

Comparison of expression sketches generated by Sketch-RNN and our algorithm.
User Interface

We developed EmoG based on the proposed design requirements. The user interface of EmoG is composed of two pages, the Character Creation Page and the Storyboard Creation Page. 

The EmoG user interface: (a) the Character Creation Page; (b) the Storyboard Creation Page.

A user first draws a neutral face of an intended character based on the character's persona in the Character Creation Page. Then, in the Storyboard Creation Page, the user draws a storyboard as a sequence of frames.


In both pages, EmoG provides the Tools Panel ((a)(1), (b)(1)) and the Drawing Canvas ((a)(2), (b)(2)).


The Tools Panel at the top contains tools for creating and editing sketches in the Drawing Canvas. In the Character Creation Page, the Recommendation Panel ((a)(3)) displays optimization results based on user input. In the Storyboard Creation Page, the Options Panel ((b)(4)) lists options for specifying the character's expressions (viewing angle, expression type). The Control Panel ((b)(5)) provides controls (intensity, rotation, scale) for the selected expression. The Script Pane ((b)(6)) allows the user to type in the title and script of each frame. The Navigation Pane ((b)(7)) at the bottom offers an overview of the storyboard and supports creating and deleting frames.


Evaluate

Participants

We recruited 21 participants (12 female) with an average age of 22.10 (SD = 1.64), including UX designers, industrial designers, service designers, and digital media designers. All of the participants reported having experience in drawing storyboards, and their drawing skills varied (very good: 14.29%, good: 28.57%, fair: 28.57%, poor: 14.29%, very poor: 14.29%).

Process

To evaluate the effectiveness of EmoG, we conducted a within-subject user study. We designed two baseline tools for comparison, Freehand and Clipart, and compared the three tools in terms of usefulness, ease of use, and quality of results.


Participants were first asked to draw a face of the character in the Character Creation Page and then sketch the character's expressions in each frame in the Storyboard Creation Page. We limited the topic, scripts, and the number of frames in each storyboard.


We also ensured equal complexity across the three tasks in terms of emotion type, expression intensity, and viewing angle. To avoid learning effects, we counterbalanced the order of the three tools as well as their assignment to the three tasks.
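As a concrete illustration of the counterbalancing, tool order and task assignment can be rotated with a 3×3 Latin square, as sketched below. The tool names are from the study, but the task labels and the exact assignment scheme we used are assumptions for this example.

```python
# Illustrative counterbalancing with a 3x3 Latin square: each tool
# appears once in every presentation position across rows.
TOOLS = ["EmoG", "Freehand", "Clipart"]
TASKS = ["Task A", "Task B", "Task C"]  # placeholder task names

LATIN_SQUARE = [
    [0, 1, 2],
    [1, 2, 0],
    [2, 0, 1],
]

def condition_for(participant_id: int):
    """Return (tool, task) pairs in presentation order for one participant."""
    order = LATIN_SQUARE[participant_id % 3]
    # Rotate the task assignment independently so that, across participants,
    # each tool is paired with each task equally often.
    shift = (participant_id // 3) % 3
    return [(TOOLS[t], TASKS[(t + shift) % 3]) for t in order]

for p in range(9):  # the first nine participants cover all nine pairings
    print(p, condition_for(p))
```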

After obtaining consent from the participants, we asked them to fill out a brief demographic survey. The participants were given a tutorial introduction to each task before they started. Each task lasted less than 20 minutes.


At the end of each task, the participants were asked to complete a questionnaire using a 7-point Likert scale. After all the tasks were completed, we conducted a semi-structured interview.
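As an aside on analysis, within-subject Likert ratings like these are often compared with a non-parametric Friedman test. The snippet below is purely illustrative, with made-up ratings; it is not our actual analysis code.

```python
# Illustrative significance test for 7-point Likert ratings collected
# from 21 participants under three conditions (made-up data).
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
ratings = rng.integers(1, 8, size=(21, 3))  # rows: participants; cols: tools

stat, p = friedmanchisquare(ratings[:, 0], ratings[:, 1], ratings[:, 2])
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")
```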

Results

Here we show some of the expression drawings from the user study. In each group from (a) to (f), we show two snapshots of a character's face with the same expression: one expresses mild emotion in mid-profile left view (left), while the other conveys intense emotion in frontal view (right).

Expression drawings from the user study, groups (a) to (f).

Comparing the results created by the three tools, we found that hand-drawn sketches can easily break aesthetic rules. For example, the eyes in group (a) are not placed halfway down the head.


Participants also found it challenging to maintain identifiability across their drawings. In group (d), P13 used different eyes for the same character in different scenarios. Note that facial identity is not entirely consistent in group (f), created by EmoG, either; the noses in the two sketches are slightly different. Group (c) shows that changing viewing angles may produce an unnatural look; the character in mid-profile left view has a mispositioned mouth compared to the one in frontal view.


We observed that EmoG can better differentiate expression intensities. For example, in group (e), the intensely surprised face (right) has more enlarged pupils than the mild one (left), and in group (f), the intensely happy face (right) has more raised lip corners than the mild one (left).
