Chapter 19 - Metaheuristics for optimizing weights in neural networks

Mohammed A. Awadallah, Iyad Abu-Doush, Mohammed Azmi Al-Betar, Malik Shehadeh Braik

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

16 Scopus citations

Abstract

The multilayer perceptron (MLP) is the most popular feedforward neural network, widely used to tackle classification and prediction problems. The success of an MLP depends on the proper configuration of its parameters (i.e., weights and biases), which are adjusted during learning by a gradient-based mechanism. Gradient-based mechanisms suffer from two chronic problems: slow convergence and entrapment in local optima. To avoid these problems, the gradient-based mechanism is replaced by a recent metaheuristic swarm-based method, the horse herd optimization algorithm (HOA). In this chapter, HOA serves as the training algorithm of the MLP, searching for optimal configurations of its parameters and thereby improving classification performance. The proposed HOA-MLP is evaluated on 15 popular classification datasets with 2–10 label classes and compared against five other methods: the bat algorithm, harmony search, the flower pollination algorithm, the sine cosine algorithm, and the JAYA algorithm. Interestingly, HOA-MLP outperforms the other methods on 5 of the 15 datasets, and it achieves high-quality results relative to the comparative methods overall.
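The training scheme described above can be sketched in a few lines: flatten the MLP's weights and biases into a candidate vector, use the classification error as the fitness function, and let a population-based metaheuristic search the weight space instead of gradient descent. The "herd" update below (moving each candidate toward the global best with random perturbation, accepted greedily) is a generic swarm move used as an illustrative stand-in for HOA's age-based horse behaviors, not the chapter's actual algorithm; all function names and hyperparameters here are assumptions for the sketch.

```python
import numpy as np

def decode(theta, n_in, n_hid, n_out):
    """Unpack a flat parameter vector into one-hidden-layer MLP weights/biases."""
    i = 0
    W1 = theta[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = theta[i:i + n_hid]; i += n_hid
    W2 = theta[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = theta[i:i + n_out]
    return W1, b1, W2, b2

def error_rate(theta, X, y, n_hid):
    """Fitness = classification error of the decoded MLP on (X, y)."""
    n_in, n_out = X.shape[1], int(y.max()) + 1
    W1, b1, W2, b2 = decode(theta, n_in, n_hid, n_out)
    h = np.tanh(X @ W1 + b1)            # hidden layer
    logits = h @ W2 + b2                # output layer
    return float(np.mean(logits.argmax(axis=1) != y))

def train_mlp_metaheuristic(X, y, n_hid=4, pop=30, iters=200, seed=0):
    """Swarm-based weight search: a hedged stand-in for the HOA trainer."""
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], int(y.max()) + 1
    dim = n_in * n_hid + n_hid + n_hid * n_out + n_out
    swarm = rng.normal(size=(pop, dim))               # candidate weight vectors
    fit = np.array([error_rate(t, X, y, n_hid) for t in swarm])
    best, best_f = swarm[fit.argmin()].copy(), fit.min()
    for _ in range(iters):
        for i in range(pop):
            # move toward the global best plus a small random perturbation
            step = rng.uniform(0.0, 1.0) * (best - swarm[i])
            cand = swarm[i] + step + 0.1 * rng.normal(size=dim)
            f = error_rate(cand, X, y, n_hid)
            if f < fit[i]:                            # greedy replacement
                swarm[i], fit[i] = cand, f
                if f < best_f:
                    best, best_f = cand.copy(), f
    return best, best_f

# Tiny demo: learn XOR, a classic problem no linear model can solve.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0, 1, 1, 0])
theta, err = train_mlp_metaheuristic(X, y)
print("training error:", err)
```

Because candidates are only replaced when their fitness improves, the best-so-far error is non-increasing over iterations; this greedy acceptance is the property that lets a derivative-free search avoid the gradient computation entirely, at the cost of many more fitness evaluations.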

Original language: American English
Title of host publication: Comprehensive Metaheuristics
Subtitle of host publication: Algorithms and Applications
Publisher: Elsevier
Pages: 359-377
Number of pages: 19
ISBN (Electronic): 9780323917810
ISBN (Print): 9780323972673
DOIs
State: Published - 3 Mar 2023

Keywords

  • Feedforward neural networks
  • Horse herd optimization algorithm
  • Multilayer perceptron
  • Optimization
  • Swarm intelligence
