This installment experiments with bias-calibration techniques to improve the annotation quality of open-source LLMs. Our experiments show that in the zero-shot setting the DC (domain-context calibration) method works well, but the few-shot setting still needs other techniques to close the gap.
References:
Calibrate Before Use: Improving Few-shot Performance of Language Models (PMLR)
Mitigating Label Biases for In-context Learning (ACL Anthology)
Noisy Channel Language Model Prompting for Few-Shot Text Classification (ACL Anthology)
Surface Form Competition: Why the Highest Probability Answer Isn't Always Right (ACL Anthology)
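To make the calibration idea concrete, here is a minimal sketch following the contextual-calibration recipe from "Calibrate Before Use": score the label set on a content-free input (e.g. "N/A") to estimate the model's label prior, then divide each real input's label probabilities by that prior and renormalize. The probability values below are made up for illustration; in practice they would come from an LLM's scores over the label verbalizers.

```python
import numpy as np

def calibrate(p: np.ndarray, p_cf: np.ndarray) -> np.ndarray:
    """Diagonal calibration: divide the raw label probabilities p by the
    content-free prior p_cf, then renormalize to a distribution."""
    q = p / p_cf
    return q / q.sum()

# Hypothetical scores: the model's prior (measured on a content-free
# input) is heavily skewed toward label 0.
p_cf = np.array([0.8, 0.2])

# Raw prediction for a real input also favors label 0 ...
p_raw = np.array([0.6, 0.4])

# ... but after removing the prior bias, label 1 wins.
p_cal = calibrate(p_raw, p_cf)
print(p_cal.argmax())  # flips the prediction from label 0 to label 1
```

The DC method from "Mitigating Label Biases for In-context Learning" follows the same divide-by-prior scheme, but estimates the prior by averaging scores over random in-domain words rather than a single content-free token.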
The full write-up and references for this installment are available on the WeChat public account 【漫谈NLP】.