[Survey/NLP/RE] A Review and Outlook for Relation Extraction
The paper reviewed in this post is:
More Data, More Relations, More Context and More Openness: A Review and Outlook for Relation Extraction
#abstract
As web text and human knowledge keep expanding, RE needs to move toward "more": more data, more relations, more context, and more openness.
1 Introduction
The real world is much more complicated than this simple setting (supervised RE on single sentences with a pre-defined relation set):
(1) collecting high-quality human annotations is expensive and time-consuming,
- move toward using little or no human annotation → Utilizing More Data
(2) many long-tail relations cannot provide large amounts of training examples,
- learn long-tail relations from only a handful of training examples → Performing More Efficient Learning
(3) most facts are expressed by long context consisting of multiple sentences,
- handle relations expressed over long, multi-sentence contexts → Handling More Complicated Context
and moreover (4) using a pre-defined set to cover those relations with open-ended growth is difficult.
- move beyond a fixed, pre-defined relation set → Orienting More Open Domains
→ Further study is needed along these four directions.
2 Background and Existing Work
2.1 Pattern Extraction Models
2.2 Statistical Relation Extraction Models (SRE)
- feature-based methods (a toy sketch follows this list)
- kernel-based methods
- graphical methods
- embedding methods: encoding text into low-dimensional semantic spaces and extracting relations from textual embeddings
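To make the feature-based family concrete, here is a minimal, hypothetical sketch: a bag-of-words classifier over the text between two entity mentions, built with scikit-learn. The toy sentences, relation labels, and feature choice are illustrative assumptions, not the feature sets of the surveyed systems (which also use lexical, syntactic, and entity-type features).

```python
# Minimal feature-based SRE sketch (hypothetical data and features).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: the text between two entity mentions, plus a relation label.
between_texts = [
    "was born in",
    "is the capital of",
    "works for",
]
labels = ["city_of_birth", "capital_of", "employee_of"]

# Bag-of-words (unigram + bigram) features fed into a linear classifier.
model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(between_texts, labels)

print(model.predict(["was born in"]))  # -> ['city_of_birth']
```

Kernel-based methods replace this explicit feature vector with a similarity function over structures such as parse trees, and embedding methods learn the low-dimensional representation instead of hand-designing it.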
2.3 Neural Relation Extraction Models (NRE)
NRE models take word embeddings as input, together with position embeddings, etc.; typical encoders include (see the sketch after this list):
- RNN
- CNN
- GNN
- attention-based neural networks
- Transformers and pre-trained language models
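A minimal sketch of such an NRE encoder, assuming a CNN over concatenated word and relative-position embeddings with max-pooling (PyTorch). The class name, dimensions, and hyperparameters below are illustrative assumptions, not the exact configuration of any surveyed model.

```python
# Minimal CNN-based NRE encoder sketch: word + position embeddings -> CNN -> max-pool -> relation logits.
import torch
import torch.nn as nn

class CNNRelationEncoder(nn.Module):
    def __init__(self, vocab_size, num_relations, word_dim=50, pos_dim=5,
                 num_filters=230, kernel_size=3, max_len=100):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        # Two position embeddings: relative distance to the head and tail entity.
        self.pos1_emb = nn.Embedding(2 * max_len, pos_dim)
        self.pos2_emb = nn.Embedding(2 * max_len, pos_dim)
        in_dim = word_dim + 2 * pos_dim
        self.conv = nn.Conv1d(in_dim, num_filters, kernel_size, padding=1)
        self.classifier = nn.Linear(num_filters, num_relations)

    def forward(self, words, pos1, pos2):
        # words, pos1, pos2: (batch, seq_len) index tensors.
        x = torch.cat([self.word_emb(words),
                       self.pos1_emb(pos1),
                       self.pos2_emb(pos2)], dim=-1)   # (batch, seq, in_dim)
        h = torch.relu(self.conv(x.transpose(1, 2)))   # (batch, filters, seq)
        sent = torch.max(h, dim=2).values              # max-pool over the sequence
        return self.classifier(sent)                   # relation logits

# Usage with dummy indices:
model = CNNRelationEncoder(vocab_size=1000, num_relations=10)
words = torch.randint(0, 1000, (2, 20))
pos = torch.randint(0, 200, (2, 20))
logits = model(words, pos, pos)  # shape: (2, 10)
```

Swapping the Conv1d for an RNN, GNN, attention layer, or a pre-trained Transformer encoder gives the other architecture families listed above.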