A question: when using a large model for entity extraction, if the dataset is large, is each batch of extracted entities compared against all previously extracted entities for entity-name fusion? (Entities with the same meaning but different surface forms should be merged.) If such fusion happens, what occurs when the data scale is so large that the set of entity names exceeds the context window? How does the project handle this problem?
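For context on what "fusion outside the context window" could look like: one common pattern is to merge entities in an external store keyed by a normalized name, so nothing about previously seen entities ever has to be fed back into the LLM prompt. This is a minimal hypothetical sketch, not the project's actual mechanism; the `normalize` and `merge_batch` helpers are invented for illustration, and real systems typically layer alias tables or embedding similarity on top of the cheap normalization shown here.

```python
# Hypothetical sketch: batch-wise entity fusion without putting all
# prior entities back into the LLM context. Extractions are merged
# into an external dict keyed by a normalized name, so the store can
# grow far beyond any context window.

def normalize(name: str) -> str:
    # Cheap canonicalization: lowercase and collapse whitespace.
    # Real systems may add alias dictionaries or embedding similarity.
    return " ".join(name.lower().split())

def merge_batch(store: dict, batch: list) -> dict:
    """Fuse one batch of (name, description) extractions into the store."""
    for name, desc in batch:
        key = normalize(name)
        if key in store:
            # Same canonical entity seen before: record the new surface
            # form and accumulate the description for later summarization.
            store[key]["aliases"].add(name)
            store[key]["descriptions"].append(desc)
        else:
            store[key] = {"aliases": {name}, "descriptions": [desc]}
    return store

store = {}
merge_batch(store, [("New York City", "a US city")])
merge_batch(store, [("new york  city", "largest city in the US")])
print(len(store))  # the two surface forms fused into one entity
```

Under this design the only LLM calls that touch prior context are optional per-entity summarization calls over one entity's accumulated descriptions, which stay small regardless of corpus size.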