Our research enables computers to understand and generate natural language, a field known as natural language processing (NLP). Its ultimate goal is to reveal how humans understand language and how knowledge is represented in communication.
Natural Language Analysis
We construct language resources necessary for natural language analysis, e.g., annotated text data, dictionaries, and grammars. We also develop tools and frameworks that support the construction of large language resources and multilingual data. Using these annotated resources, we apply machine learning and deep learning to foundational analysis tasks such as morphological analysis, dependency parsing, chunking, and predicate-argument analysis. The research also covers semantic representation and compositionality, learning word and sentence representations from large text corpora with deep learning techniques.
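As a minimal, self-contained sketch of the last point, the snippet below builds word vectors from sentence-level co-occurrence counts and composes sentence representations by averaging them, which is the simplest form of compositionality. The toy corpus, the log-damping of counts, and the averaging scheme are illustrative assumptions, not our actual models.

```python
# A minimal sketch (illustrative, not a production system): word vectors
# from co-occurrence counts, composed into sentence vectors by averaging.
import math

corpus = [
    "dogs chase cats",
    "cats chase mice",
    "dogs like mice",
]

# Vocabulary and symmetric co-occurrence counts within each sentence.
vocab = sorted({w for s in corpus for w in s.split()})
index = {w: i for i, w in enumerate(vocab)}
cooc = [[0.0] * len(vocab) for _ in vocab]
for sent in corpus:
    words = sent.split()
    for i, w in enumerate(words):
        for j, c in enumerate(words):
            if i != j:
                cooc[index[w]][index[c]] += 1.0

def word_vector(w):
    """A word's vector is its log-damped co-occurrence row."""
    return [math.log1p(x) for x in cooc[index[w]]]

def sentence_vector(sent):
    """Compose a sentence vector by averaging its word vectors."""
    vecs = [word_vector(w) for w in sent.split()]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0
```

In practice, the counting step is replaced by neural models trained on large corpora, and averaging by learned composition functions, but the interface (words to vectors, vectors to sentence meaning, similarity in vector space) is the same.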
Knowledge Acquisition from Natural Language
We analyze and acquire knowledge from specific domains, e.g., scientific papers and legal documents. This research involves information extraction, summarization, and relation extraction, achieving a deep understanding of the content through the analysis and inference of coreference relations in large collections of highly specialized text.
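One classic starting point for such knowledge acquisition is pattern-based relation extraction. The sketch below pulls is-a (hypernym) pairs out of text with a single regular expression; the pattern and the example sentences are illustrative assumptions, far simpler than the learned extractors this research develops.

```python
# A minimal sketch of pattern-based relation extraction: an "X is a/an Y"
# pattern yields is-a (hypernym) relations. Illustrative only.
import re

ISA_PATTERN = re.compile(r"(\w+(?:\s\w+)?) is an? (\w+(?:\s\w+)?)")

def extract_isa(text):
    """Return (entity, category) pairs matched by the is-a pattern."""
    return [(m.group(1), m.group(2)) for m in ISA_PATTERN.finditer(text)]

doc = ("Aspirin is a drug. The statute is a legal text. "
       "BERT is a language model.")
relations = extract_isa(doc)
```

Modern systems replace the hand-written pattern with supervised or distantly supervised models, but the output, a set of typed relations between mentions, is what gets accumulated into domain knowledge.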
Natural Language Generation
We focus on machine translation, summarization, and image captioning based on deep learning, integrating various knowledge sources beyond the training data itself, e.g., bilingual corpora.
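To make the role of bilingual data concrete, here is a minimal sketch of translation as greedy, longest-match-first lookup in a toy phrase table, the kind of mapping traditionally learned from bilingual corpora. The table entries are invented for illustration; neural MT learns such correspondences end to end rather than storing them explicitly.

```python
# A minimal sketch: translation by greedy longest-match phrase lookup.
# The phrase table below is an illustrative assumption, not learned data.
PHRASE_TABLE = {
    ("good", "morning"): "ohayou gozaimasu",
    ("thank", "you"): "arigatou",
    ("good",): "yoi",
}

def translate(sentence, table=PHRASE_TABLE, max_len=2):
    """Translate word by word, preferring the longest matching phrase."""
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        for n in range(min(max_len, len(words) - i), 0, -1):
            phrase = tuple(words[i:i + n])
            if phrase in table:
                out.append(table[phrase])
                i += n
                break
        else:
            out.append(words[i])  # pass unknown words through unchanged
            i += 1
    return " ".join(out)
```

The longest-match preference is why "good morning" yields the greeting as a unit rather than a word-by-word rendering.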
Education and Learning Support for Language Learning
We conduct research on second language acquisition, mainly for Japanese and English, by supporting writing and reading and by detecting and correcting learner errors.
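As a minimal sketch of the error-correction side, the snippet below corrects each word to its nearest neighbor in a small wordlist under Levenshtein edit distance. The wordlist and the distance threshold are illustrative assumptions; real learner-error correction also handles grammatical errors, not just spelling.

```python
# A minimal sketch of spelling correction via Levenshtein edit distance.
# Wordlist and threshold are illustrative assumptions.
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

WORDLIST = ["receive", "believe", "grammar", "sentence"]

def correct(word, wordlist=WORDLIST, max_dist=2):
    """Return the closest wordlist entry within max_dist, else the word."""
    best = min(wordlist, key=lambda w: edit_distance(word, w))
    return best if edit_distance(word, best) <= max_dist else word
```

The threshold keeps the corrector from rewriting words that are not close to anything in the list, which matters for learner text where unknown words are common.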