Skin Cancer Dermoscopy Image Segmentation Using the VGG-SegNet Method
Abstract
Skin cancer, particularly melanoma, has a high mortality rate, necessitating reliable early detection based on dermoscopic images. Accurate lesion segmentation is a crucial preprocessing step prior to deep learning–based classification. However, challenges remain in preserving lesion boundary details, handling variations in lesion size, and addressing limited data availability and hyperparameter optimization. This study proposes and evaluates various configurations of a hybrid VGG–SegNet model for skin lesion segmentation using the ISIC 2017 and ISIC 2018 datasets. The methodology includes image and ground truth validation, VGG16-based normalization, input image resolution variations (128×128, 160×160, and 256×256), and training–validation data splits of 50:50 and 70:30. The model is fine-tuned using skip connections and a multi-stage training scheme with a combination of Binary Cross-Entropy, Dice, and Focal Tversky Loss. The best-performing model achieves a Dice Coefficient of 0.899 and an Intersection over Union of 0.8177 on the validation set, demonstrating precise and efficient lesion segmentation.
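The reported evaluation metrics, Dice Coefficient and Intersection over Union (IoU), can be computed directly from binary masks. The following is a minimal NumPy sketch for illustration; the variable names and the toy masks are illustrative, not taken from the paper:

```python
import numpy as np

def dice_coefficient(pred, gt, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def iou(pred, gt, eps=1e-7):
    """IoU = |A ∩ B| / |A ∪ B| for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return (inter + eps) / (union + eps)

# Toy example: two overlapping rectangular "lesions" on a 4x4 grid
pred = np.zeros((4, 4), dtype=np.uint8); pred[0:2, 0:2] = 1  # 4 pixels
gt   = np.zeros((4, 4), dtype=np.uint8); gt[0:2, 0:3] = 1    # 6 pixels, 4 overlap
print(round(dice_coefficient(pred, gt), 3))  # 2*4/(4+6) = 0.8
print(round(iou(pred, gt), 3))               # 4/6 ≈ 0.667
```

Note that Dice is always at least as large as IoU for the same masks, which is consistent with the reported pair (Dice 0.899, IoU 0.8177).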
Keywords: Skin Cancer Segmentation; VGG-SegNet; ISIC; Fine Tuning
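The combined Binary Cross-Entropy, Dice, and Focal Tversky loss used in the multi-stage training scheme can be sketched as below. This is a generic NumPy illustration of the three terms; the weights `w` and the Tversky parameters `alpha`, `beta`, `gamma` are assumptions for the sketch, not values reported in the paper:

```python
import numpy as np

def bce(p, g, eps=1e-7):
    """Binary cross-entropy over per-pixel probabilities p and labels g."""
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(g * np.log(p) + (1 - g) * np.log(1 - p))

def dice_loss(p, g, eps=1e-7):
    """1 - soft Dice; small when prediction and ground truth overlap well."""
    inter = np.sum(p * g)
    return 1 - (2 * inter + eps) / (np.sum(p) + np.sum(g) + eps)

def focal_tversky_loss(p, g, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Tversky index weights false negatives (alpha) and false positives
    (beta) asymmetrically; the focal exponent gamma emphasizes hard cases."""
    tp = np.sum(p * g)
    fn = np.sum((1 - p) * g)
    fp = np.sum(p * (1 - g))
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1 - tversky) ** gamma

def combined_loss(p, g, w=(1.0, 1.0, 1.0)):
    """Weighted sum of the three terms; w is an illustrative choice."""
    return w[0] * bce(p, g) + w[1] * dice_loss(p, g) + w[2] * focal_tversky_loss(p, g)

g = np.array([0.0, 1.0, 1.0, 0.0])       # toy ground-truth mask
print(combined_loss(g, g) < 0.01)        # perfect prediction → near-zero loss
```

A perfect prediction drives all three terms toward zero, while a fully inverted prediction inflates every term, which is the behavior a segmentation loss of this form is chosen for.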