1.
Text classification, which aims to assign tags to text units, is a fundamental research direction. Recently, graph neural networks (GNNs) have exhibited excellent properties in textual information processing, and pre-trained language models have also achieved promising results on many tasks. However, many text processing methods cannot model the structure of a single text unit, or they ignore its semantic features. To address these problems and comprehensively exploit both the structural and the semantic information of a text, we propose a Bert-Enhanced text Graph Neural Network model (BEGNN). For each text, we construct a separate text graph according to the co-occurrence relationships of its words and use a GNN to extract text features; in addition, we employ Bert to extract semantic features. The former component captures structural information, while the latter focuses on modeling semantic information. Finally, we interact and aggregate these two features of different granularity to obtain a more effective representation. Experiments on standard datasets demonstrate the effectiveness of BEGNN.
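The per-text graph construction described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the window size of 3 and the plain adjacency-map representation are assumptions, and the function name `build_cooccurrence_graph` is hypothetical.

```python
from collections import defaultdict

def build_cooccurrence_graph(tokens, window=3):
    """Build an undirected word co-occurrence graph for one text.

    Nodes are the unique words of the text; an edge links two words
    that appear together within a sliding window of `window` tokens.
    Returns an adjacency map {word: set_of_neighbours}.
    """
    graph = defaultdict(set)
    for i, w in enumerate(tokens):
        graph[w]  # ensure isolated words still become nodes
        for j in range(i + 1, min(i + window, len(tokens))):
            v = tokens[j]
            if v != w:
                graph[w].add(v)
                graph[v].add(w)
    return dict(graph)

# Example: one short text, tokenised by whitespace
tokens = "graph neural networks model text structure".split()
g = build_cooccurrence_graph(tokens, window=3)
```

In a GNN pipeline, each adjacency map like `g` would be turned into the edge index of one graph, so that message passing stays within a single text unit.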
2.
Information extraction in the medical field is an important method for structuring medical knowledge and discovering new knowledge. Traditional methods handle this task in a pipelined manner, treating entity recognition and relation extraction as two sub-tasks, which neglects the relevance between them. In recent years, research on joint extraction models has achieved encouraging results in the general domain, yet work applying joint extraction models to the medical field remains insufficient. In this paper, we construct a joint extraction model based on a tagging scheme for Chinese medical texts. First, we design a series of preprocessing procedures for Chinese medical data to obtain effective Chinese word sequences. Then, we propose the BIOH12D1D2 tagging scheme, which converts the joint extraction task into a tagging problem and addresses the overlapping-entity problem. After that, we use an encoder-decoder model to obtain the predicted tag sequence; in the decoding layer, the Bert pre-trained model is adopted to extract token features and enhance the feature representation ability of our model. Finally, the joint extraction model achieves an F1 score of 0.7 on CHIP-2020, an improvement of 0.364 over the baseline.
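The abstract does not spell out the BIOH12D1D2 tag format, so the sketch below shows only the general idea of such schemes: each token tag combines a B/I/O position prefix, a relation type, and a role marker (1 for the head entity, 2 for the tail), which reduces joint extraction to sequence tagging. The tag layout, the function name `tag_sentence`, and the example triple are all illustrative assumptions; the paper's scheme additionally handles overlapping entities, which is omitted here.

```python
def tag_sentence(tokens, triples):
    """Convert (head, relation, tail) triples into per-token tags.

    A minimal single-layer sketch: each tag is "B-" or "I-" plus the
    relation type plus "-1" (head entity) or "-2" (tail entity);
    tokens outside any entity get "O". Assumes the first token of
    each entity span occurs once in the sentence.
    """
    tags = ["O"] * len(tokens)
    for head, relation, tail in triples:
        for span, role in ((head, "1"), (tail, "2")):
            start = tokens.index(span[0])
            for k in range(len(span)):
                prefix = "B" if k == 0 else "I"
                tags[start + k] = f"{prefix}-{relation}-{role}"
    return tags

# Hypothetical example: one triple over a four-token sentence
tokens = ["Aspirin", "treats", "mild", "pain"]
triples = [(["Aspirin"], "treats", ["mild", "pain"])]
tags = tag_sentence(tokens, triples)
# → ["B-treats-1", "O", "B-treats-2", "I-treats-2"]
```

A tagger trained on such sequences predicts one tag per token, and triples are recovered afterwards by pairing each "-1" entity with the "-2" entity that shares its relation type.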