1. Key Lab of Intelligent Computing Based Big Data of Zhejiang Province, Zhejiang University, Hangzhou 310027, China
2. State Key Laboratory of Blockchain and Data Security, Zhejiang University, Hangzhou 310027, China
3. City Cloud Technology (China) Co., Ltd., Hangzhou 310000, China
E-mail: liushengyuan@zju.edu.cn; htl@zju.edu.cn; myq@citycloud.com.cn
‡Corresponding author
Print publication date: 2023-10
Received: 2022-11-23
Accepted: 2023-04-20
LIU Shengyuan, CHEN Ke, HU Tianlei, et al. Uncertainty-aware complementary label queries for active learning[J]. Frontiers of Information Technology & Electronic Engineering, 2023, 24(10): 1497-1503. DOI: 10.1631/FITEE.2200589.
Many active learning methods assume that the learner can simply ask an annotator for the full label of each queried training instance, and they reduce annotation cost mainly by minimizing the number of annotations. However, for many real-world classification tasks, accurately labeling a single instance remains expensive. To reduce the cost of each annotation action, we address a novel active learning paradigm called active learning with complementary labels (ALCL). An ALCL learner asks only yes/no questions about whether an instance belongs to a specific class. After receiving the annotators' answers, the ALCL learner obtains a few supervised instances and many more training instances with complementary labels, each of which indicates only that the corresponding class is irrelevant to the instance. ALCL poses two challenging problems: how to select the instances to query, and how to learn from these complementary labels together with ordinary labels. For the first problem, we propose an uncertainty-based sampling strategy under the active learning paradigm. For the second problem, we improve an existing ALCL method and adapt it to our sampling strategy. Experimental results on various datasets demonstrate the effectiveness of our methods.
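To make the workflow described above concrete, the following is a minimal, hypothetical sketch of one uncertainty-based query round in the ALCL setting. It is not the authors' implementation: the names select_queries, alcl_round, and ask_yes_no, as well as the entropy criterion and the toy complementary-label loss, are illustrative assumptions.

```python
import numpy as np

def entropy(probs):
    """Predictive entropy of each row of class probabilities."""
    eps = 1e-12
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_queries(probs, budget):
    """Pick the indices of the most uncertain instances (highest entropy)."""
    return np.argsort(-entropy(probs))[:budget]

def alcl_round(probs, unlabeled_ids, budget, ask_yes_no):
    """One ALCL query round (illustrative only).

    For each selected instance the annotator is asked a single yes/no
    question about the model's most likely class.  "Yes" yields an
    ordinary (fully supervised) label; "no" yields a complementary
    label, i.e., that class is ruled out for the instance.
    """
    ordinary, complementary = [], []
    for i in select_queries(probs, budget):
        idx = unlabeled_ids[i]
        guess = int(np.argmax(probs[i]))       # class we ask about
        if ask_yes_no(idx, guess):             # annotator answers "yes"
            ordinary.append((idx, guess))
        else:                                  # annotator answers "no"
            complementary.append((idx, guess))
    return ordinary, complementary

def complementary_loss(probs_row, comp_class):
    """Toy complementary-label loss: push probability mass away from the
    ruled-out class by minimizing -log(1 - p_comp).  This is one common
    family of complementary-label losses, not necessarily the paper's
    exact choice."""
    eps = 1e-12
    return -np.log(1.0 - probs_row[comp_class] + eps)
```

In a full training loop, the ordinary labels would be combined with a standard cross-entropy loss, while the complementary labels would use a loss of the kind sketched above, in the spirit of the complementary-label learning literature.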
Active learning; Image classification; Weakly supervised learning