Table of Links
- Abstract and Introduction
- Backgrounds
- Type of remote sensing sensor data
- Benchmark remote sensing datasets for evaluating learning models
- Evaluation metrics for few-shot remote sensing
- Recent few-shot learning techniques in remote sensing
- Few-shot based object detection and segmentation in remote sensing
- Discussions
- Numerical experimentation of few-shot classification on UAV-based dataset
- Explainable AI (XAI) in Remote Sensing
- Conclusions and Future Directions
- Acknowledgements, Declarations, and References
11 Conclusions and Future Directions
In this review, we provided a comprehensive analysis of recent few-shot learning techniques for remote sensing across various data types and platforms. Compared to previous reviews [9], we expanded the scope to include UAV-based datasets. Our quantitative experiments demonstrated the potential of few-shot methods on various remote sensing datasets. We also emphasized the growing importance of XAI for increasing model transparency and trustworthiness.
While progress has been made, ample opportunities remain to advance few-shot learning for remote sensing. Future research could explore tailored few-shot approaches for UAV data that account for unique image characteristics and onboard computational constraints. Vision transformer architectures could also be investigated for few-shot classification of very high-resolution remote sensing data. A key challenge is reducing the performance discrepancy between aerial and satellite platforms. Developing flexible techniques that handle diverse data effectively is an open problem that warrants further investigation.
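To make the few-shot setting concrete, the sketch below shows prototypical-network-style inference [15], the metric-learning mechanic underlying many of the surveyed classification methods. This is an illustrative example rather than code from any cited work: it assumes a PyTorch environment, and the random embeddings stand in for features produced by whichever backbone (CNN or vision transformer) a given method uses.

```python
import torch
import torch.nn.functional as F

def prototypical_inference(support, support_labels, query, n_way):
    """Classify query embeddings by distance to class prototypes [15].

    support:        (n_way * k_shot, d) support-set embeddings
    support_labels: (n_way * k_shot,)   integer class ids in [0, n_way)
    query:          (n_query, d)        query embeddings
    """
    # Prototype = mean embedding of each class's support samples.
    prototypes = torch.stack(
        [support[support_labels == c].mean(dim=0) for c in range(n_way)]
    )
    # Negative squared Euclidean distance serves as the class score.
    dists = torch.cdist(query, prototypes) ** 2
    return F.softmax(-dists, dim=1)  # (n_query, n_way) class probabilities

# Toy 5-way 1-shot episode; random vectors stand in for backbone features.
d = 64
support = torch.randn(5, d)
labels = torch.arange(5)
query = torch.randn(10, d)
probs = prototypical_inference(support, labels, query, n_way=5)
print(probs.argmax(dim=1))  # predicted class per query sample
```

Because inference reduces to computing a handful of prototypes and distances, this style of method is attractive for the onboard computational constraints discussed above.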
On the XAI front, further work is needed to address issues unique to remote sensing, such as scarce labeled data, complex Earth systems, and the integration of domain knowledge into models. Few-shot techniques in particular would benefit from further research into explainable feature extraction and decision making. Explainability methods that provide feature-level and decision-level transparency without sacrificing too much accuracy or efficiency are needed; attribution methods such as Grad-CAM [144] are a natural starting point, as sketched below. There is also potential to apply few-shot learning and XAI to new remote sensing problems such as object detection, semantic segmentation, and anomaly monitoring.
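As an illustration of feature-level transparency, here is a minimal Grad-CAM [144] sketch. It is a generic implementation assuming PyTorch and torchvision; the resnet18 backbone, the choice of `layer4` as the target layer, and the random input are placeholders for illustration, not a prescription from the surveyed works.

```python
import torch
from torchvision.models import resnet18

def grad_cam(model, target_layer, image, class_idx):
    """Minimal Grad-CAM [144]: weight the target layer's feature maps by
    the gradient of the class score, sum over channels, and apply ReLU."""
    feats, grads = [], []
    fh = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    bh = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.append(go[0]))
    score = model(image.unsqueeze(0))[0, class_idx]  # forward pass
    model.zero_grad()
    score.backward()                                 # populate gradients
    fh.remove(); bh.remove()
    fmap, grad = feats[0][0], grads[0][0]            # each (C, H, W)
    weights = grad.mean(dim=(1, 2))                  # GAP over spatial dims
    cam = torch.relu((weights[:, None, None] * fmap).sum(dim=0))
    return cam / (cam.max() + 1e-8)                  # (H, W) heatmap in [0, 1]

model = resnet18(weights=None).eval()
heatmap = grad_cam(model, model.layer4, torch.randn(3, 224, 224), class_idx=0)
print(heatmap.shape)  # torch.Size([7, 7])
```

The resulting low-resolution heatmap is typically upsampled and overlaid on the input scene to show which regions drove the prediction, which is the kind of feature-level transparency argued for above.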
In closing, few-shot learning shows increasing promise for efficient and accurate analysis of remote sensing data at scale. Integrating XAI can further improve model transparency, trust, and adoption by providing human-understandable explanations. While progress has been made, ample challenges and opportunities remain to realize the full potential of few-shot learning and XAI across the diverse and rapidly evolving remote sensing application landscape. Advances in these interconnected fields can pave the way for remote sensing systems that learn quickly from limited data while remaining transparent, accountable, and fair.
Acknowledgements
This research/project is supported by the Civil Aviation Authority of Singapore and Nanyang Technological University, Singapore under their collaboration in the Air Traffic Management Research Institute. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the Civil Aviation Authority of Singapore.
Declarations
The authors have no conflicts of interest to declare.
References
[1] Xiang, T., Xia, G., Zhang, L.: Mini-uav-based remote sensing: Techniques, applications and prospectives. arXiv preprint arXiv:1812.07770 (2018)
[2] Deng, P., Xu, K., Huang, H.: When cnns meet vision transformer: A joint framework for remote sensing scene classification. IEEE Geoscience and Remote Sensing Letters 19, 1–5 (2021)
[3] Zhang, J., Zhao, H., Li, J.: Trs: Transformers for remote sensing scene classification. Remote Sensing 13(20), 4143 (2021)
[4] He, J., Zhao, L., Yang, H., Zhang, M., Li, W.: Hsi-bert: Hyperspectral image classification using the bidirectional encoder representation from transformers. IEEE Transactions on Geoscience and Remote Sensing 58(1), 165–178 (2019)
[5] Zhong, Z., Li, Y., Ma, L., Li, J., Zheng, W.-S.: Spectral–spatial transformer network for hyperspectral image classification: A factorized architecture search framework. IEEE Transactions on Geoscience and Remote Sensing 60, 1–15 (2021)
[6] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: European Conference on Computer Vision, pp. 213–229 (2020). Springer
[7] Xu, Z., Zhang, W., Zhang, T., Yang, Z., Li, J.: Efficient transformer for remote sensing image segmentation. Remote Sensing 13(18), 3585 (2021)
[8] Aleissaee, A.A., Kumar, A., Anwer, R.M., Khan, S., Cholakkal, H., Xia, G.-S., et al.: Transformers in remote sensing: A survey. arXiv preprint arXiv:2209.01206 (2022)
[9] Sun, X., Wang, B., Wang, Z., Li, H., Li, H., Fu, K.: Research progress on few-shot learning for remote sensing image interpretation. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 14, 2387–2402 (2021)
[10] Liu, X., Yang, T., Li, J.: Real-time ground vehicle detection in aerial infrared imagery based on convolutional neural network. Electronics 7(6), 78 (2018)
[11] Masouleh, M.K., Shah-Hosseini, R.: Development and evaluation of a deep learning model for real-time ground vehicle semantic segmentation from uav-based thermal infrared imagery. ISPRS Journal of Photogrammetry and Remote Sensing 155, 172–186 (2019)
[12] Kyrkou, C., Theocharides, T.: Emergencynet: Efficient aerial image classification for drone-based emergency monitoring using atrous convolutional feature fusion. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 13, 1687–1699 (2020)
[13] Hoffer, E., Ailon, N.: Deep metric learning using triplet network. In: International Workshop on Similarity-based Pattern Recognition, pp. 84–92 (2015). Springer
[14] Hadsell, R., Chopra, S., LeCun, Y.: Dimensionality reduction by learning an invariant mapping. In: 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), vol. 2, pp. 1735–1742 (2006). IEEE
[15] Snell, J., Swersky, K., Zemel, R.: Prototypical networks for few-shot learning. Advances in neural information processing systems 30 (2017)
[16] Kakogeorgiou, I., Karantzalos, K.: Evaluating explainable artificial intelligence methods for multi-label deep learning classification tasks in remote sensing. International Journal of Applied Earth Observation and Geoinformation 103, 102520 (2021)
[17] Su, H., Wei, S., Yan, M., Wang, C., Shi, J., Zhang, X.: Object detection and instance segmentation in remote sensing imagery based on precise mask r-cnn. In: IGARSS 2019-2019 IEEE International Geoscience and Remote Sensing Symposium, pp. 1454–1457 (2019). IEEE
[18] IEEE GRSS Data Fusion Contest: Fusion of Hyperspectral and LiDAR Data (2013)
[19] Wei, S., Zeng, X., Qu, Q., Wang, M., Su, H., Shi, J.: Hrsid: A high-resolution sar images dataset for ship detection and instance segmentation. IEEE Access 8, 120234–120254 (2020)
[20] Grupo de Inteligencia Computacional (GIC): Hyperspectral remote sensing scenes (2020)
[21] Dam, T.: Developing generative adversarial networks for classification and clustering: Overcoming class imbalance and catastrophic forgetting. PhD thesis, UNSW Sydney (2022)
[22] Dam, T., Anavatti, S.G., Abbass, H.A.: Mixture of spectral generative adversarial networks for imbalanced hyperspectral image classification. IEEE Geoscience and Remote Sensing Letters 19, 1–5 (2020)
[23] Sumbul, G., Charfuelan, M., Demir, B., Markl, V.: Bigearthnet: A large-scale benchmark archive for remote sensing image understanding. In: IGARSS 2019-2019 IEEE International Geoscience and Remote Sensing Symposium, pp. 5901–5904 (2019). IEEE
[24] Helber, P., Bischke, B., Dengel, A., Borth, D.: Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 12(7), 2217–2226 (2019)
[25] Schmitt, M., Wu, Y.-L.: Remote sensing image classification with the sen12ms dataset. arXiv preprint arXiv:2104.00704 (2021)
[26] Hu, X., Zhong, Y., Luo, C., Wang, X.: Whu-hi: Uav-borne hyperspectral with high spatial resolution (h2) benchmark datasets for hyperspectral image classification. arXiv preprint arXiv:2012.13920 (2020)
[27] Yang, Y., Newsam, S.: Bag-of-visual-words and spatial extensions for land-use classification. In: Proceedings of the 18th SIGSPATIAL International Conference on Advances in Geographic Information Systems, pp. 270–279 (2010)
[28] Gerke, M.: Use of the stair vision library within the isprs 2d semantic labeling benchmark (vaihingen) (2014)
[29] Cheng, G., Han, J., Lu, X.: Remote sensing image scene classification: Benchmark and state of the art. Proceedings of the IEEE 105(10), 1865–1883 (2017)
[30] Xia, G.-S., Yang, W., Delon, J., Gousseau, Y., Sun, H., Maître, H.: Structural high-resolution satellite image indexing. In: ISPRS TC VII Symposium-100 Years ISPRS, vol. 38, pp. 298–303 (2010)
[31] Xia, G.-S., Hu, J., Hu, F., Shi, B., Bai, X., Zhong, Y., Zhang, L., Lu, X.: Aid: A benchmark data set for performance evaluation of aerial scene classification. IEEE Transactions on Geoscience and Remote Sensing 55(7), 3965–3981 (2017)
[32] Lee, G.Y., Dam, T., Ferdaus, M.M., Poenar, D.P., Duong, V.N.: Watt-effnet: A lightweight and accurate model for classifying aerial disaster images. IEEE Geoscience and Remote Sensing Letters (2023)
[33] Bayanlou, M.R., Khoshboresh-Masouleh, M.: Multi-task learning from fixed-wing uav images for 2d/3d city modeling. arXiv preprint arXiv:2109.00918 (2021)
[34] Wang, H., Chen, S., Xu, F., Jin, Y.-Q.: Application of deep-learning algorithms to mstar data. In: 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pp. 3743–3745 (2015). IEEE
[35] Huang, L., Liu, B., Li, B., Guo, W., Yu, W., Zhang, Z., Yu, W.: Opensarship: A dataset dedicated to sentinel-1 ship interpretation. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 11(1), 195–208 (2017)
[36] Fu, K., Zhang, T., Zhang, Y., Wang, Z., Sun, X.: Few-shot sar target classification via meta-learning. IEEE Transactions on Geoscience and Remote Sensing 60, 1–14 (2021)
[37] Rostami, M., Kolouri, S., Eaton, E., Kim, K.: Sar image classification using few-shot cross-domain transfer learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 0–0 (2019)
[38] Tuia, D., Volpi, M., Copa, L., Kanevski, M., Munoz-Mari, J.: A survey of active learning algorithms for supervised remote sensing image classification. IEEE Journal of Selected Topics in Signal Processing 5(3), 606–617 (2011)
[39] Camps-Valls, G., Tuia, D., Bruzzone, L., Benediktsson, J.A.: Advances in hyperspectral image classification: Earth monitoring with statistical learning methods. IEEE Signal Processing Magazine 31(1), 45–54 (2013)
[40] Zhu, X.X., Tuia, D., Mou, L., Xia, G.-S., Zhang, L., Xu, F., Fraundorfer, F.: Deep learning in remote sensing: A comprehensive review and list of resources. IEEE Geoscience and Remote Sensing Magazine 5(4), 8–36 (2017)
[41] Liu, S., Shi, Q., Zhang, L.: Few-shot hyperspectral image classification with unknown classes using multitask deep learning. IEEE Transactions on Geoscience and Remote Sensing 59(6), 5085–5102 (2020)
[42] Geng, C., Huang, S.-j., Chen, S.: Recent advances in open set recognition: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence 43(10), 3614–3631 (2020)
[43] Bai, J., Huang, S., Xiao, Z., Li, X., Zhu, Y., Regan, A.C., Jiao, L.: Few-shot hyperspectral image classification based on adaptive subspaces and feature transformation. IEEE Transactions on Geoscience and Remote Sensing 60, 1–17 (2022)
[44] Ding, C., Li, Y., Wen, Y., Zheng, M., Zhang, L., Wei, W., Zhang, Y.: Boosting few-shot hyperspectral image classification using pseudo-label learning. Remote Sensing 13(17), 3539 (2021)
[45] Pal, D., Bundele, V., Sharma, R., Banerjee, B., Jeppu, Y.: Few-shot open-set recognition of hyperspectral images with outlier calibration network. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3801–3810 (2022)
[46] Wang, Y., Liu, M., Yang, Y., Li, Z., Du, Q., Chen, Y., Li, F., Yang, H.: Heterogeneous few-shot learning for hyperspectral image classification. IEEE Geoscience and Remote Sensing Letters 19, 1–5 (2021)
[47] Hu, Y., Huang, Y., Wei, G., Zhu, K.: Heterogeneous few-shot learning with knowledge distillation for hyperspectral image classification. In: 2022 2nd International Conference on Consumer Electronics and Computer Engineering (ICCECE), pp. 601–604 (2022). IEEE
[48] Qu, Y., Baghbaderani, R.K., Qi, H.: Few-shot hyperspectral image classification through multitask transfer learning. In: 2019 10th Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing (WHISPERS), pp. 1–5 (2019). IEEE
[49] Tong, X., Yin, J., Han, B., Qv, H.: Few-shot learning with attention-weighted graph convolutional networks for hyperspectral image classification. In: 2020 IEEE International Conference on Image Processing (ICIP), pp. 1686–1690 (2020). IEEE
[50] Yang, P., Tong, L., Qian, B., Gao, Z., Yu, J., Xiao, C.: Hyperspectral image classification with spectral and spatial graph using inductive representation learning network. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 14, 791–800 (2020)
[51] Huang, K., Deng, X., Geng, J., Jiang, W.: Self-attention and mutual-attention for few-shot hyperspectral image classification. In: 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, pp. 2230–2233 (2021). IEEE
[52] Yokoya, N., Iwasaki, A.: Airborne hyperspectral data over chikusei. Space Appl. Lab., Univ. Tokyo, Tokyo, Japan, Tech. Rep. SAL-2016-05-27, 5 (2016)
[53] Zhao, Y., Ha, L., Wang, H., Ma, X.: Few-shot class incremental learning for hyperspectral image classification based on constantly updated classifier. In: IGARSS 2022-2022 IEEE International Geoscience and Remote Sensing Symposium, pp. 1376–1379 (2022). IEEE
[54] Bai, J., Lu, J., Xiao, Z., Chen, Z., Jiao, L.: Generative adversarial networks based on transformer encoder and convolution block for hyperspectral image classification. Remote Sensing 14(14), 3426 (2022)
[55] Huang, Z., Tang, H., Li, Y., Xie, W.: Hfc-sst: Improved spatial-spectral transformer for hyperspectral few-shot classification. Journal of Applied Remote Sensing 17(2), 026509 (2023)
[56] Peng, Y., Liu, Y., Tu, B., Zhang, Y.: Convolutional transformer-based few-shot learning for cross-domain hyperspectral image classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 16, 1335–1349 (2023)
[57] Ran, Q., Zhou, Y., Hong, D., Bi, M., Ni, L., Li, X., Ahmad, M.: Deep transformer and few-shot learning for hyperspectral image classification. CAAI Transactions on Intelligence Technology (2023)
[58] Liu, L., Zuo, D., Wang, Y., Qu, H.: Feedback-enhanced few-shot transformer learning for small-sized hyperspectral image classification. IEEE Geoscience and Remote Sensing Letters 19, 1–5 (2022)
[59] Li, A., Lu, Z., Wang, L., Xiang, T., Wen, J.-R.: Zero-shot scene classification for high spatial resolution remote sensing images. IEEE Transactions on Geoscience and Remote Sensing 55(7), 4157–4167 (2017)
[60] Zou, Q., Ni, L., Zhang, T., Wang, Q.: Deep learning based feature selection for remote sensing scene classification. IEEE Geoscience and Remote Sensing Letters 12(11), 2321–2325 (2015)
[61] Church, K.W.: Word2vec. Natural Language Engineering 23(1), 155–162 (2017)
[62] Al-Haddad, L.A., Jaber, A.A.: An intelligent fault diagnosis approach for multirotor uavs based on deep neural network of multi-resolution transform features. Drones 7(2), 82 (2023)
[63] Khoshboresh-Masouleh, M., Shah-Hosseini, R.: Multimodal few-shot target detection based on uncertainty analysis in time-series images. Drones 7(2), 66 (2023)
[64] Hamzaoui, M., Chapel, L., Pham, M.-T., Lefèvre, S.: A hierarchical prototypical network for few-shot remote sensing scene classification. In: International Conference on Pattern Recognition and Artificial Intelligence, pp. 208–220 (2022). Springer
[65] Zeng, Q., Geng, J.: Task-specific contrastive learning for few-shot remote sensing image scene classification. ISPRS Journal of Photogrammetry and Remote Sensing 191, 143–154 (2022)
[66] Yuan, Z., Huang, W., Li, L., Luo, X.: Few-shot scene classification with multi-attention deepemd network in remote sensing. IEEE Access 9, 19891–19901 (2020)
[67] Wang, Q., Liu, S., Chanussot, J., Li, X.: Scene classification with recurrent attention of vhr remote sensing images. IEEE Transactions on Geoscience and Remote Sensing 57(2), 1155–1167 (2018)
[68] Kim, J., Chi, M.: Saffnet: Self-attention-based feature fusion network for remote sensing few-shot scene classification. Remote Sensing 13(13), 2532 (2021)
[69] Huang, W., Yuan, Z., Yang, A., Tang, C., Luo, X.: Tae-net: Task-adaptive embedding network for few-shot remote sensing scene classification. Remote Sensing 14(1), 111 (2021)
[70] Wang, K., Wang, X., Cheng, Y.: Few-shot aerial image classification with deep economic network and teacher knowledge. International Journal of Remote Sensing 43(13), 5075–5099 (2022)
[71] Li, L., Han, J., Yao, X., Cheng, G., Guo, L.: Dla-matchnet for few-shot remote sensing image scene classification. IEEE Transactions on Geoscience and Remote Sensing 59(9), 7844–7853 (2020)
[72] Jiang, N., Shi, H., Geng, J.: Multi-scale graph-based feature fusion for few-shot remote sensing image scene classification. Remote Sensing 14(21), 5550 (2022)
[73] Yuan, Z., Huang, W., Tang, C., Yang, A., Luo, X.: Graph-based embedding smoothing network for few-shot scene classification of remote sensing images. Remote Sensing 14(5), 1161 (2022)
[74] Tai, Y., Tan, Y., Xiong, S., Sun, Z., Tian, J.: Few-shot transfer learning for sar image classification without extra sar samples. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 15, 2240–2253 (2022)
[75] Hammell, R.: Data retrieved from Kaggle. Accessed: Feb 1 (2019)
[76] Schwegmann, C., Kleynhans, W., Salmon, B., Mdakane, L., Meyer, R.: A sar ship dataset for detection, discrimination and analysis. IEEE Dataport (2017)
[77] Gao, F., Xu, J., Lang, R., Wang, J., Hussain, A., Zhou, H.: A few-shot learning method for sar images based on weighted distance and feature fusion. Remote Sensing 14(18), 4583 (2022)
[78] Schwegmann, C.P., Kleynhans, W., Salmon, B.P., Mdakane, L.W., Meyer, R.G.: Very deep learning for ship discrimination in synthetic aperture radar imagery. In: 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pp. 104–107 (2016). IEEE
[79] Wang, K., Qiao, Q., Zhang, G., Xu, Y.: Few-shot sar target recognition based on deep kernel learning. IEEE Access 10, 89534–89544 (2022)
[80] Yang, R., Xu, X., Li, X., Wang, L., Pu, F.: Learning relation by graph neural network for sar image few-shot learning. In: IGARSS 2020-2020 IEEE International Geoscience and Remote Sensing Symposium, pp. 1743–1746 (2020). IEEE
[81] Sung, F., Yang, Y., Zhang, L., Xiang, T., Torr, P.H., Hospedales, T.M.: Learning to compare: Relation network for few-shot learning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1199–1208 (2018)
[82] Zhao, X., Lv, X., Cai, J., Guo, J., Zhang, Y., Qiu, X., Wu, Y.: Few-shot sar-atr based on instance-aware transformer. Remote Sensing 14(8), 1884 (2022)
[83] Wang, C., Huang, Y., Liu, X., Pei, J., Zhang, Y., Yang, J.: Global in local: A convolutional transformer for sar atr fsl. IEEE Geoscience and Remote Sensing Letters 19, 1–5 (2022)
[84] Yang, M., Bai, X., Wang, L., Zhou, F.: Mixed loss graph attention network for few-shot sar target classification. IEEE Transactions on Geoscience and Remote Sensing 60, 1–13 (2021)
[85] Li, K., Wan, G., Cheng, G., Meng, L., Han, J.: Object detection in optical remote sensing images: A survey and a new benchmark. ISPRS Journal of Photogrammetry and Remote Sensing 159, 296–307 (2020)
[86] Han, J., Zhang, D., Cheng, G., Liu, N., Xu, D.: Advanced deep-learning techniques for salient and category-specific object detection: a survey. IEEE Signal Processing Magazine 35(1), 84–100 (2018)
[87] Zhu, H., Chen, X., Dai, W., Fu, K., Ye, Q., Jiao, J.: Orientation robust object detection in aerial images using deep convolutional neural network. In: 2015 IEEE International Conference on Image Processing (ICIP), pp. 3735–3739 (2015). IEEE
[88] Yao, H., Qin, R., Chen, X.: Unmanned aerial vehicle for remote sensing applications—a review. Remote Sensing 11(12), 1443 (2019)
[89] Mundhenk, T.N., Konjevod, G., Sakla, W.A., Boakye, K.: A large contextual dataset for classification, detection and counting of cars with deep learning. In: Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part III 14, pp. 785–800 (2016). Springer
[90] Zhang, Y., Yuan, Y., Feng, Y., Lu, X.: Hierarchical and robust convolutional neural network for very high-resolution remote sensing object detection. IEEE Transactions on Geoscience and Remote Sensing 57(8), 5535–5548 (2019)
[91] Lu, X., Zhang, Y., Yuan, Y., Feng, Y.: Gated and axis-concentrated localization network for remote sensing object detection. IEEE Transactions on Geoscience and Remote Sensing 58(1), 179–192 (2019)
[92] Li, X., Deng, J., Fang, Y.: Few-shot object detection on remote sensing images. IEEE Transactions on Geoscience and Remote Sensing 60, 1–14 (2021)
[93] Wolf, S., Meier, J., Sommer, L., Beyerer, J.: Double head predictor based few-shot object detection for aerial imagery. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 721–731 (2021)
[94] Cheng, G., Yan, B., Shi, P., Li, K., Yao, X., Guo, L., Han, J.: Prototype-cnn for few-shot object detection in remote sensing images. IEEE Transactions on Geoscience and Remote Sensing 60, 1–10 (2021)
[95] Gao, Y., Hou, R., Gao, Q., Hou, Y.: A fast and accurate few-shot detector for objects with fewer pixels in drone image. Electronics 10(7), 783 (2021)
[96] Le Jeune, P., Mokraoui, A.: Rethinking intersection over union for small object detection in few-shot regime. arXiv preprint arXiv:2307.09562 (2023)
[97] Xiao, Z., Qi, J., Xue, W., Zhong, P.: Few-shot object detection with self-adaptive attention network for remote sensing images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 14, 4854–4865 (2021)
[98] Li, J., Tian, Y., Xu, Y., Hu, X., Zhang, Z., Wang, H., Xiao, Y.: Mm-rcnn: Toward few-shot object detection in remote sensing images with meta memory. IEEE Transactions on Geoscience and Remote Sensing 60, 1–14 (2022)
[99] Wang, R., Wang, Q., Yu, J., Tong, J.: Multi-scale self-attention-based few-shot object detection for remote sensing images. In: 2022 IEEE 24th International Workshop on Multimedia Signal Processing (MMSP), pp. 1–7 (2022). IEEE
[100] Zhang, S., Song, F., Liu, X., Hao, X., Liu, Y., Lei, T., Jiang, P.: Text semantic fusion relation graph reasoning for few-shot object detection on remote sensing images. Remote Sensing 15(5), 1187 (2023)
[101] Liu, N., Xu, X., Celik, T., Gan, Z., Li, H.-C.: Transformation-invariant network for few-shot object detection in remote sensing images. arXiv preprint arXiv:2303.06817 (2023)
[102] Petsiuk, V., Jain, R., Manjunatha, V., Morariu, V.I., Mehra, A., Ordonez, V., Saenko, K.: Black-box explanation of object detectors via saliency maps. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11443–11452 (2021)
[103] Hu, B., Tunison, P., RichardWebster, B., Hoogs, A.: Xaitk-saliency: An open source explainable ai toolkit for saliency. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, pp. 15760–15766 (2023)
[104] Nguyen, T., Miller, I.D., Cohen, A., Thakur, D., Guru, A., Prasad, S., Taylor, C.J., Chaudhari, P., Kumar, V.: Pennsyn2real: Training object recognition models without human labeling. IEEE Robotics and Automation Letters 6(3), 5032–5039 (2021)
[105] Liu, S., Zhang, L., Hao, S., Lu, H., He, Y.: Polar ray: A single-stage angle-free detector for oriented object detection in aerial images. In: Proceedings of the 29th ACM International Conference on Multimedia, pp. 3124–3132 (2021)
[106] Cheng, G., Lang, C., Wu, M., Xie, X., Yao, X., Han, J.: Feature enhancement network for object detection in optical remote sensing images. Journal of Remote Sensing 2021 (2021)
[107] Le Jeune, P., Mokraoui, A.: Improving few-shot object detection through a performance analysis on aerial and natural images. In: 2022 30th European Signal Processing Conference (EUSIPCO), pp. 513–517 (2022). IEEE
[108] Li, L., Yao, X., Cheng, G., Han, J.: Aifs-dataset for few-shot aerial image scene classification. IEEE Transactions on Geoscience and Remote Sensing 60, 1–11 (2022)
[109] Su, H., You, Y., Meng, G.: Multi-scale context-aware r-cnn for few-shot object detection in remote sensing images. In: IGARSS 2022-2022 IEEE International Geoscience and Remote Sensing Symposium, pp. 1908–1911 (2022). IEEE
[110] Su, B., Zhang, H., Wu, Z., Zhou, Z.: Fsrdd: An efficient few-shot detector for rare city road damage detection. IEEE Transactions on Intelligent Transportation Systems 23(12), 24379–24388 (2022)
[111] Lu, X., Sun, X., Diao, W., Mao, Y., Li, J., Zhang, Y., Wang, P., Fu, K.: Few-shot object detection in aerial imagery guided by text-modal knowledge. IEEE Transactions on Geoscience and Remote Sensing 61, 1–19 (2023)
[112] Wang, B., Ma, G., Sui, H., Zhang, Y., Zhang, H., Zhou, Y.: Few-shot object detection in remote sensing imagery via fuse context dependencies and global features. Remote Sensing 15(14), 3462 (2023)
[113] Yao, X., Cao, Q., Feng, X., Cheng, G., Han, J.: Scale-aware detailed matching for few-shot aerial image semantic segmentation. IEEE Transactions on Geoscience and Remote Sensing 60, 1–11 (2021)
[114] Chen, Y., Wei, C., Wang, D., Ji, C., Li, B.: Semi-supervised contrastive learning for few-shot segmentation of remote sensing images. Remote Sensing 14(17), 4254 (2022)
[115] Cao, Q., Chen, Y., Ma, C., Yang, X.: Few-shot rotation-invariant aerial image semantic segmentation. arXiv preprint arXiv:2306.11734 (2023)
[116] Zhang, P., Bai, Y., Wang, D., Bai, B., Li, Y.: Few-shot classification of aerial scene images via meta-learning. Remote Sensing 13(1), 108 (2020)
[117] Puthumanaillam, G., Verma, U.: Texture based prototypical network for few-shot semantic segmentation of forest cover: Generalizing for different geographical regions. Neurocomputing 538, 126201 (2023)
[118] Lang, C., Cheng, G., Tu, B., Han, J.: Global rectification and decoupled registration for few-shot segmentation in remote sensing imagery. IEEE Transactions on Geoscience and Remote Sensing (2023)
[119] Lang, C., Wang, J., Cheng, G., Tu, B., Han, J.: Progressive parsing and commonality distillation for few-shot remote sensing segmentation. IEEE Transactions on Geoscience and Remote Sensing (2023)
[120] Ghali, R., Akhloufi, M.A.: Deep learning approaches for wildland fires remote sensing: Classification, detection, and segmentation. Remote Sensing 15(7), 1821 (2023)
[121] Park, J., Cho, Y.K., Kim, S.: Deep learning-based uav image segmentation and inpainting for generating vehicle-free orthomosaic. International Journal of Applied Earth Observation and Geoinformation 115, 103111 (2022)
[122] Song, K., Zhang, Y., Bao, Y., Zhao, Y., Yan, Y.: Self-enhanced mixed attention network for three-modal images few-shot semantic segmentation. Sensors 23(14), 6612 (2023)
[123] Wang, B., Wang, Z., Sun, X., Wang, H., Fu, K.: Dmml-net: Deep meta-metric learning for few-shot geographic object segmentation in remote sensing imagery. IEEE Transactions on Geoscience and Remote Sensing 60, 1–18 (2021)
[124] Wang, Z., Jiang, Z., Yuan, Y.: Queue learning for multi-class few-shot semantic segmentation. In: 2022 IEEE International Conference on Image Processing (ICIP), pp. 1721–1725 (2022). IEEE
[125] Finn, C., Abbeel, P., Levine, S.: Model-agnostic meta-learning for fast adaptation of deep networks. In: International Conference on Machine Learning, pp. 1126–1135 (2017). PMLR
[126] Han, S., Mao, H., Dally, W.J.: Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149 (2015)
[127] Pham, H., Guan, M., Zoph, B., Le, Q., Dean, J.: Efficient neural architecture search via parameters sharing. In: International Conference on Machine Learning, pp. 4095–4104 (2018). PMLR
[128] Ham, T.J., Lee, Y., Seo, S.H., Kim, S., Choi, H., Jung, S.J., Lee, J.W.: Elsa: Hardware-software co-design for efficient, lightweight self-attention mechanism in neural networks. In: 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA), pp. 692–705 (2021). IEEE
[129] Vinyals, O., Blundell, C., Lillicrap, T., Wierstra, D., et al.: Matching networks for one shot learning. Advances in neural information processing systems 29 (2016)
[130] Wang, Y., Chao, W.-L., Weinberger, K.Q., Van Der Maaten, L.: Simpleshot: Revisiting nearest-neighbor classification for few-shot learning. arXiv preprint arXiv:1911.04623 (2019)
[131] Oreshkin, B., Rodríguez López, P., Lacoste, A.: Tadam: Task dependent adaptive metric for improved few-shot learning. Advances in neural information processing systems 31 (2018)
[132] Sun, Q., Liu, Y., Chua, T.-S., Schiele, B.: Meta-transfer learning for few-shot learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 403–412 (2019)
[133] Jian, Y., Torresani, L.: Label hallucination for few-shot classification. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, pp. 7005–7014 (2022)
[134] Mohan, A., Peeples, J.: Quantitative analysis of primary attribution explainable artificial intelligence methods for remote sensing image classification. arXiv preprint arXiv:2306.04037 (2023)
[135] Wang, D., Yang, Q., Abdul, A., Lim, B.Y.: Designing theory-driven usercentric explainable ai. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1–15 (2019)
[136] Arrieta, A.B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., García, S., Gil-Lopez, S., Molina, D., Benjamins, R., et al.: Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai. Information Fusion 58, 82–115 (2020)
[137] Ling, J., Templeton, J.: Evaluation of machine learning algorithms for prediction of regions of high reynolds averaged navier stokes uncertainty. Physics of Fluids 27(8) (2015)
[138] Shaikh, T.A., Rasool, T., Lone, F.R.: Towards leveraging the role of machine learning and artificial intelligence in precision agriculture and smart farming. Computers and Electronics in Agriculture 198, 107119 (2022)
[139] Ishikawa, S.-n., Todo, M., Taki, M., Uchiyama, Y., Matsunaga, K., Lin, P., Ogihara, T., Yasui, M.: Example-based explainable ai and its application for remote sensing image classification. International Journal of Applied Earth Observation and Geoinformation 118, 103215 (2023)
[140] Temenos, A., Tzortzis, I.N., Kaselimi, M., Rallis, I., Doulamis, A., Doulamis, N.: Novel insights in spatial epidemiology utilizing explainable ai (xai) and remote sensing. Remote Sensing 14(13), 3074 (2022)
[141] Sirmacek, B., Vinuesa, R.: Remote sensing and ai for building climate adaptation applications. Results in Engineering 15, 100524 (2022)
[142] Gevaert, C.M.: Explainable ai for earth observation: A review including societal and regulatory perspectives. International Journal of Applied Earth Observation and Geoinformation 112, 102869 (2022)
[143] Chattopadhay, A., Sarkar, A., Howlader, P., Balasubramanian, V.N.: Grad-cam++: Generalized gradient-based visual explanations for deep convolutional networks. In: 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 839–847 (2018). IEEE
[144] Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-cam: Visual explanations from deep networks via gradient-based localization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 618–626 (2017)
[145] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. In: International Conference on Machine Learning, pp. 3145–3153 (2017). PMLR
[146] Wang, H., Wang, Z., Du, M., Yang, F., Zhang, Z., Ding, S., Mardziel, P., Hu, X.: Score-cam: Score-weighted visual explanations for convolutional neural networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 24–25 (2020)
[147] Schulz, K., Sixt, L., Tombari, F., Landgraf, T.: Restricting the flow: Information bottlenecks for attribution. arXiv preprint arXiv:2001.00396 (2020)
[148] Liu, J.: Few-shot object detection model based on meta-learning for uav. In: Fifth International Conference on Mechatronics and Computer Technology Engineering (MCTE 2022), vol. 12500, pp. 1468–1474 (2022). SPIE
[149] Li, L., Wang, B., Verma, M., Nakashima, Y., Kawasaki, R., Nagahara, H.: Scouter: Slot attention-based classifier for explainable image recognition. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1046–1055 (2021)
[150] Wang, B., Li, L., Verma, M., Nakashima, Y., Kawasaki, R., Nagahara, H.: Match them up: visually explainable few-shot image classification. Applied Intelligence, 1–22 (2022)
[151] Jetley, S., Lord, N.A., Lee, N., Torr, P.H.: Learn to pay attention. arXiv preprint arXiv:1804.02391 (2018)
[152] Hong, J., Fang, P., Li, W., Zhang, T., Simon, C., Harandi, M., Petersson, L.: Reinforced attention for few-shot learning and beyond. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 913–923 (2021)
[153] Yuan, H., Tang, J., Hu, X., Ji, S.: Xgnn: Towards model-level explanations of graph neural networks. In: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 430–438 (2020)
[154] Moura, L.V.d., Mattjie, C., Dartora, C.M., Barros, R.C., Marques da Silva, A.M.: Explainable machine learning for covid-19 pneumonia classification with texture-based features extraction in chest radiography. Frontiers in Digital Health 3, 662343 (2022)
[155] Cheng, H., Zhou, J.T., Tay, W.P., Wen, B.: Attentive graph neural networks for few-shot learning. In: 2022 IEEE 5th International Conference on Multimedia Information Processing and Retrieval (MIPR), pp. 152–157 (2022). IEEE
[156] Selvaraju, R.R., Das, A., Vedantam, R., Cogswell, M., Parikh, D., Batra, D.: Grad-cam: Why did you say that? arXiv preprint arXiv:1611.07450 (2016)
[157] Pintelas, E., Livieris, I.E., Pintelas, P.: Explainable feature extraction and prediction framework for 3d image recognition applied to pneumonia detection. Electronics 12(12), 2663 (2023)
[158] Rudin, C.: Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence 1(5), 206–215 (2019)
[159] Letham, B., Rudin, C., McCormick, T.H., Madigan, D.: Interpretable classifiers using rules and bayesian analysis: Building a better stroke prediction model (2015)
[160] Ribeiro, M.T., Singh, S., Guestrin, C.: "Why should I trust you?" Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144 (2016)
[161] Agarwal, A., Beygelzimer, A., Dudík, M., Langford, J., Wallach, H.: A reductions approach to fair classification. In: International Conference on Machine Learning, pp. 60–69 (2018). PMLR
[162] Zemel, R., Wu, Y., Swersky, K., Pitassi, T., Dwork, C.: Learning fair representations. In: International Conference on Machine Learning, pp. 325–333 (2013). PMLR
Authors:
(1) Gao Yu Lee, School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Ave, 639798, Singapore ([email protected]);
(2) Tanmoy Dam, School of Mechanical and Aerospace Engineering, Nanyang Technological University, 65 Nanyang Drive, 637460, Singapore and Department of Computer Science, The University of New Orleans, New Orleans, 2000 Lakeshore Drive, LA 70148, USA ([email protected]);
(3) Md Meftahul Ferdaus, School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Ave, 639798, Singapore ([email protected]);
(4) Daniel Puiu Poenar, School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Ave, 639798, Singapore ([email protected]);
(5) Vu N. Duong, School of Mechanical and Aerospace Engineering, Nanyang Technological University, 65 Nanyang Drive, 637460, Singapore ([email protected]).