Text Generation for Course Profiles in Outcome-Based Education Assessment Using Text-to-Text Transfer Transformers
Abstract
The evaluation of Course Learning Outcomes (CPMK, from the Indonesian Capaian Pembelajaran Mata Kuliah) in Outcome-Based Education (OBE) is still performed manually, making it time-consuming and error-prone. In addition, the CPMK achievement profile is often overlooked. This study automates the generation of course profiles from CPMK data using text generation. The method is the Transformer-based T5 (Text-to-Text Transfer Transformer) model. Experiments compared three variants: T5 Base, T5 Base with fine-tuning, and T5 XL, evaluated with the Bilingual Evaluation Understudy (BLEU) and Recall-Oriented Understudy for Gisting Evaluation (ROUGE) metrics. T5 XL achieved the best performance, with an average BLEU score of 0.592 and a ROUGE-L score of 0.721; T5 Base with fine-tuning scored 0.417 (BLEU) and 0.468 (ROUGE-L), while T5 Base without fine-tuning scored 0.327 (BLEU) and 0.246 (ROUGE-L). More structured prompts also yielded better evaluation scores. These results show that T5 XL improves the efficiency and accuracy of CPMK evaluation in OBE.
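The BLEU and ROUGE-L metrics used to compare the T5 variants can be illustrated with a minimal plain-Python sketch (simplified unigram-precision BLEU without brevity penalty, and LCS-based ROUGE-L F1; the study's actual scoring setup may differ):

```python
from collections import Counter

def bleu_1(candidate: str, reference: str) -> float:
    """Simplified BLEU: unigram precision against a single reference,
    with clipped counts and no brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    if not cand:
        return 0.0
    overlap = sum((Counter(cand) & Counter(ref)).values())  # clipped matches
    return overlap / len(cand)

def rouge_l_f1(candidate: str, reference: str) -> float:
    """ROUGE-L F1 from the longest common subsequence (LCS) of tokens."""
    cand, ref = candidate.split(), reference.split()
    m, n = len(cand), len(ref)
    # dynamic-programming LCS table
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if cand[i] == ref[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[m][n]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / m, lcs / n
    return 2 * precision * recall / (precision + recall)

# Hypothetical generated course profile vs. a reference profile
generated = "the course covers basic machine learning concepts"
reference = "the course covers fundamental machine learning concepts"
print(round(bleu_1(generated, reference), 3))      # 6 of 7 tokens match
print(round(rouge_l_f1(generated, reference), 3))  # LCS of length 6
```

Production evaluations typically use multi-n-gram BLEU with a brevity penalty (e.g. as implemented in NLTK or SacreBLEU) rather than this unigram form, but the intuition is the same: BLEU rewards n-gram precision, while ROUGE-L rewards recall of the longest in-order token overlap.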
Keywords: Outcome-Based Education; Text Generation; Text-to-Text Transfer Transformer; Assessment