EACL 2017 Workshop on Symbolic and Deep Learning Approaches to the Analysis of Evaluative, Affective, and Subjective Language (EASL 2017)


Machine comprehension of Evaluative, Affective and Subjective (EAS) language on par with human understanding requires identifying and understanding fine-grained emotions, beliefs, opinions, and judgements expressed in language both implicitly and explicitly. The goal of this workshop is to focus the attention of the NLP research community on combining deep learning/statistical NLP techniques with richer/deeper semantic representations driven by computational linguistics in analysing and understanding EAS text. This workshop is intended to throw down the gauntlet to the NLP community, in both the computational linguistics camp and the machine learning camp, to achieve human-level understanding of EAS language.

We invite the NLP research community to explore approaches that go beyond bag-of-words and bag-of-sentences models by combining higher-level linguistic insights, including discourse-level information, pragmatics, and other contextual information, with statistical data-driven techniques for better understanding of EAS text. Approaches that explore linguistic phenomena such as figurative language and intent detection, as well as the use of extra-linguistic information such as the socio-cultural origins of the text, its social network structure of origin, and author social network profiles, would also be of interest. We are also interested in non-trivial EAS text applications that go beyond mere identification of evaluative, emotive, and subjective cue phrases to the underlying causative events/reasons/arguments which give rise to these emotions/feelings/beliefs/subjective opinions. Non-trivial benchmarks for comparing machine comprehension of EAS aspects with human understanding are also welcome.

Topics of interest

Topics of interest include but are not limited to:

  • richer/deeper linguistic representations of aspects of EAS
  • understanding of cognitive processing of EAS language
  • extra-linguistic aspects of EAS language
  • socio-cultural aspects of EAS language
  • psycholinguistic aspects of EAS text analysis
  • interpretability of deep learning-based EAS analysis systems
  • identifying origins/dynamic contexts/causative events/reasons in EAS text analysis (including the fields of stance detection, subjective judgement/belief detection, and argumentation in both emotive and evaluative texts)

Submission Requirements

We solicit both long and short paper submissions. All papers must describe substantial, original, and unpublished work. Submissions will be judged on appropriateness, clarity, originality/innovativeness, correctness/soundness, meaningful comparison, thoroughness, significance, contributions to research resources, and replicability. Each submission will be reviewed by at least three program committee members.

Both long and short papers must follow the two-column format of EACL 2017. For the original submission, long papers may be up to eight (8) pages in length plus unlimited pages of references, whereas short papers may be up to four (4) pages in length plus unlimited pages of references. Both long and short papers will be given one additional page of content for the camera-ready version.

Papers will be presented orally or as posters, as determined by the program committee. The decisions as to which papers will be presented orally and which as posters will be based on the nature rather than the quality of the work. There will be no distinction in the proceedings between papers presented orally and as posters.

As the reviewing will be blind, papers must not include authors’ names and affiliations. Furthermore, self-references that reveal an author’s identity, e.g., “We previously showed (Smith, 1991) …”, must be avoided. Instead, use citations such as “Smith (1991) previously showed …”. Papers that do not conform to these requirements will be rejected without review.

Papers that have only appeared on preprint servers such as arXiv.org do not count as previously published and may be submitted to the workshop. Note that the version submitted for review must be suitably anonymized and should not contain references to the prior non-archival version.