Abstract
The multilayer perceptron (MLP) is the most widely used feedforward neural network for tackling classification and prediction problems. Its performance depends on proper configuration of its trainable parameters (i.e., weights and biases), which are adjusted during learning by a gradient-based mechanism. Gradient-based training suffers from two chronic problems: slow convergence and entrapment in local optima. To avoid these problems, the gradient-based mechanism is replaced by a recent swarm-based metaheuristic, the horse herd optimization algorithm (HOA). In this chapter, HOA serves as the training algorithm for the MLP, searching for optimal configurations of its parameters and thereby improving classification performance. The proposed HOA-MLP is evaluated on 15 popular classification datasets with 2–10 label classes and compared against five other methods: the bat algorithm, harmony search, the flower pollination algorithm, the sine cosine algorithm, and the JAYA algorithm. Interestingly, HOA-MLP outperforms the others on 5 of the 15 datasets. Furthermore, HOA-MLP achieves high-quality results relative to the comparative methods.
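The core idea described in the abstract — encoding all MLP weights and biases as a flat vector and letting a population-based metaheuristic minimize classification error instead of using gradients — can be sketched as follows. This is a minimal illustration, not the chapter's method: a generic differential-evolution-style move stands in for HOA (whose horse-behaviour operators are not reproduced here), and all function names, population sizes, and coefficients are illustrative assumptions.

```python
import numpy as np

def unpack(vec, n_in, n_hid, n_out):
    """Decode a flat parameter vector into one-hidden-layer MLP weights/biases."""
    i = 0
    W1 = vec[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = vec[i:i + n_hid]; i += n_hid
    W2 = vec[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = vec[i:i + n_out]
    return W1, b1, W2, b2

def predict(vec, X, n_in, n_hid, n_out):
    W1, b1, W2, b2 = unpack(vec, n_in, n_hid, n_out)
    h = np.tanh(X @ W1 + b1)           # hidden layer activations
    return (h @ W2 + b2).argmax(1)     # predicted class = argmax of outputs

def error_rate(vec, X, y, dims):
    # Fitness: misclassification rate (no gradients needed)
    return np.mean(predict(vec, X, *dims) != y)

def train(X, y, n_hid=8, pop=30, iters=200, seed=0):
    """Train by population search: each individual is one full weight vector."""
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], len(np.unique(y))
    dims = (n_in, n_hid, n_out)
    d = n_in * n_hid + n_hid + n_hid * n_out + n_out  # total parameter count
    P = rng.uniform(-1, 1, (pop, d))                  # initial population
    fit = np.array([error_rate(p, X, y, dims) for p in P])
    for _ in range(iters):
        for i in range(pop):
            # Move toward the current best plus a random difference vector
            # (DE-style placeholder for HOA's herd-behaviour operators)
            a, b = P[rng.integers(pop)], P[rng.integers(pop)]
            trial = P[i] + 0.5 * (P[fit.argmin()] - P[i]) + 0.3 * (a - b)
            f = error_rate(trial, X, y, dims)
            if f <= fit[i]:                           # greedy replacement
                P[i], fit[i] = trial, f
    best = P[fit.argmin()]
    return best, fit.min(), dims
```

A quick usage example on two synthetic Gaussian blobs: `best, err, dims = train(X, y)` returns the best weight vector, its training error, and the layer dimensions for use with `predict`. The greedy replacement step guarantees the best fitness is monotonically non-increasing, which is the usual convergence safeguard in such swarm trainers.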
| Field | Value |
|---|---|
| Original language | American English |
| Title of host publication | Comprehensive Metaheuristics |
| Subtitle of host publication | Algorithms and Applications |
| Publisher | Elsevier |
| Pages | 359-377 |
| Number of pages | 19 |
| ISBN (Electronic) | 9780323917810 |
| ISBN (Print) | 9780323972673 |
| DOIs | |
| State | Published - 3 Mar 2023 |
Keywords
- Feedforward neural networks
- Horse herd optimization algorithm
- Multilayer perceptron
- Optimization
- Swarm intelligence