DOI: 10.7763/IJCTE.2026.V18.1392
Multi-Task Deep Learning for Automated Tobacco Leaf Grading in a Controlled Environment
2. College of Computer Studies, De La Salle University, Manila, Philippines
3. HER-La Salle-Universitat Ramon Llull, Barcelona, Spain
4. Telecommunications and Informatics Technologies Research Center, Bogazici University, Istanbul, Turkey
Email: cmarzan@dmmmsu.edu.ph (C.S.M.); conrado.ruiz@salle.url.edu (C.R.J.); aranoya@gmail.com (O.A.)
*Corresponding author
Manuscript received October 31, 2025; revised November 28, 2025; accepted January 30, 2026; published April 17, 2026
Abstract—Grading tobacco leaves is crucial for ensuring fair pricing and quality control; however, the process is still largely carried out manually, making it slow, subjective, and often inconsistent. In this work, we present a multi-task deep learning approach for automating the grading of air-cured Burley tobacco leaves in controlled settings. The model consists of shared convolutional layers and separate task-specific branches, allowing it to predict stalk group, quality, and color simultaneously, in line with the hierarchical grading system. To improve consistency, images were preprocessed with coin-based size normalization, rotation alignment, and segmentation. In our experiments, the multi-task model with an EfficientNetB0 backbone achieved an accuracy of 94.82% and significantly outperformed the multi-class and single-task baselines while reducing both training time and inference latency. These findings suggest that multi-task learning is a valuable and robust method for automated tobacco grading, offering gains in accuracy, speed, and scalability over competing approaches.
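The shared-trunk, multi-head design described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the class counts for the three tasks are hypothetical, and a random projection stands in for the EfficientNetB0 trunk (whose pooled feature size is 1280); only the structure — one shared feature extractor feeding three task-specific classifiers — reflects the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical class counts for the three grading tasks (illustrative only).
N_STALK, N_QUALITY, N_COLOR = 4, 5, 3
FEAT_DIM = 1280  # pooled feature size of EfficientNetB0

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class TaskHead:
    """One task-specific branch: a linear classifier over shared features."""
    def __init__(self, feat_dim, n_classes):
        self.W = rng.standard_normal((feat_dim, n_classes)) * 0.01
        self.b = np.zeros(n_classes)

    def __call__(self, feats):
        return softmax(feats @ self.W + self.b)

def shared_backbone(images):
    """Stand-in for the shared convolutional trunk: a fixed random projection."""
    flat = images.reshape(images.shape[0], -1)
    proj = rng.standard_normal((flat.shape[1], FEAT_DIM)) * 0.001
    return flat @ proj

# One head per task, all consuming the same shared representation.
heads = {
    "stalk_group": TaskHead(FEAT_DIM, N_STALK),
    "quality": TaskHead(FEAT_DIM, N_QUALITY),
    "color": TaskHead(FEAT_DIM, N_COLOR),
}

batch = rng.random((2, 32, 32, 3))   # two tiny dummy "leaf images"
feats = shared_backbone(batch)       # single forward pass through the trunk
preds = {task: head(feats) for task, head in heads.items()}
```

Because the trunk runs once per image regardless of the number of heads, this layout is what yields the reduced training time and inference latency reported relative to training three single-task networks.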
Keywords—air-cured Burley tobacco, image preprocessing, multi-task deep learning, tobacco leaf grading
Cite: Charlie S. Marzan, Conrado Ruiz Jr., and Oya Aran, "Multi-Task Deep Learning for Automated Tobacco Leaf Grading in a Controlled Environment," International Journal of Computer Theory and Engineering, vol. 18, no. 2, pp. 99-109, 2026.
Copyright © 2026 by the authors. This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited (CC BY 4.0).