Optimizing Training Time in Deep Learning Models Using Distributed Computing Techniques

Authors

  • Nasiba M. Abdulkarim, Information Technology Department, Duhok Polytechnic University, Duhok-KRG, 42001, Iraq.
  • Lozan M. Abdulrahman, Information Technology Department, Duhok Polytechnic University, Duhok-KRG, 42001, Iraq.
  • Taha A. Ababakar, Computer Science Department, University of Zakho, Duhok-KRG, 42001, Iraq.
  • Omar M. Ahmed, Computer Information System Department, Duhok Polytechnic University, Duhok-KRG, 42001, Iraq.

DOI:

https://doi.org/10.55145/ajest.2026.05.01.009

Keywords:

Distributed Deep Learning, Convolutional Neural Networks, Data Parallelism, Model Scalability, Computational Efficiency

Abstract

Training deep learning models is often time-consuming, especially when hardware resources are limited. This bottleneck slows experimentation and increases development costs. The primary goal of this study was to investigate whether distributed computing can reduce training time while preserving model accuracy. To test this, we trained a Convolutional Neural Network (CNN) on the CIFAR-10 dataset under two configurations: a standard single-device setup and a distributed synchronous setup using TensorFlow’s MirroredStrategy. To ensure a fair comparison, both trials used the exact same architecture, hyperparameters, and preprocessing. The results were clear: distributed training reduced total training time by about 19.5%, while validation accuracy remained nearly identical to the single-device run. These findings suggest that distributed training is a practical way to accelerate deep learning workflows without sacrificing performance, serving as an efficient solution even in environments with limited hardware.
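For readers unfamiliar with the setup compared in the abstract, the sketch below illustrates the general pattern of synchronous data-parallel training with tf.distribute.MirroredStrategy: the model is built and compiled inside the strategy scope, replicas are placed on all visible devices, and gradients are averaged across replicas at every step. This is a minimal illustrative sketch, not the authors' code; the CNN architecture, epoch count, and batch size shown here are assumptions, not the paper's configuration.

```python
# Minimal sketch of synchronous data-parallel training on CIFAR-10 with
# TensorFlow's MirroredStrategy. Architecture and hyperparameters are
# illustrative assumptions, not the paper's exact settings.
import time
import tensorflow as tf

def build_cnn():
    # Hypothetical small CNN for 32x32x3 CIFAR-10 images.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])

# Load and normalize CIFAR-10 once; the same data is used in both setups.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Distributed synchronous setup: one replica per visible device,
# gradients are all-reduced (averaged) every training step.
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = build_cnn()
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

start = time.time()
model.fit(x_train, y_train, epochs=10, batch_size=128,
          validation_data=(x_test, y_test))
print(f"Distributed training time: {time.time() - start:.1f}s")
```

The single-device baseline is the same script with the strategy scope removed (building and compiling the model at top level), so that timing the two runs isolates the effect of the distribution strategy itself.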

Published

2026-02-06

How to Cite

Abdulkarim, N. M., Abdulrahman, L. M., Ababakar, T. A., & Ahmed, O. M. (2026). Optimizing Training Time in Deep Learning Models Using Distributed Computing Techniques. Al-Salam Journal for Engineering and Technology, 5(1), 100–109. https://doi.org/10.55145/ajest.2026.05.01.009

Issue

Section

Articles