Hypernetworks learn the binary two-spiral classification task


José Luis Segovia-Juárez

Abstract

This paper explores the application of the hypernetwork learning algorithm to machine learning using the binary two-spiral classification task. Experiments demonstrate 100% classification accuracy on the two-spiral dataset with 121 points and up to 98.22% on the two-spiral dataset with 225 points. The findings provide insight into the benefits, usefulness, and potential of hypernetworks for solving complex nonlinear classification tasks. The hypernetwork model could be implemented on field-programmable gate arrays (FPGAs): the hierarchical and distributed characteristics of hypernetwork models, coupled with the inherent parallelism of FPGAs, make a potent combination for high-speed response tasks. A distinctive aspect of this approach is its embodiment of biomimetic hierarchical organization principles akin to biological information processing, which proved highly effective in addressing complex computational tasks.
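For readers unfamiliar with the benchmark, the following is a minimal sketch of how the classic two-spiral dataset is typically generated, in the style popularized by Lang and Witbrock (1988). The parameterization shown (97 points per spiral, radius 6.5, step of pi/16) is the conventional one and is an assumption here; the 121-point and 225-point variants used in this paper may be sampled differently.

```python
import numpy as np

def two_spirals(n_per_spiral=97, radius=6.5):
    """Generate the classic two-spiral benchmark dataset.

    The first spiral winds outward; the second spiral is the
    point-wise negation of the first. Labels are 0 and 1.
    Default parameters follow the conventional Lang & Witbrock
    setup and are illustrative, not taken from this paper.
    """
    i = np.arange(n_per_spiral)
    angle = i * np.pi / 16.0                 # conventional angular step
    r = radius * (104.0 - i) / 104.0         # radius shrinks toward center
    spiral1 = np.stack([r * np.sin(angle), r * np.cos(angle)], axis=1)
    spiral2 = -spiral1                       # mirrored spiral
    X = np.vstack([spiral1, spiral2])
    y = np.concatenate([np.zeros(n_per_spiral), np.ones(n_per_spiral)])
    return X, y
```

Because the two classes interleave tightly, the task is not linearly separable and is a standard stress test for nonlinear classifiers.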


Article Details

How to Cite
Segovia-Juárez, J. L. (2023). Hypernetworks learn the binary two-spiral classification task. Revista De Investigación Hatun Yachay Wasi, 2(2), 9–19. https://doi.org/10.57107/hyw.v2i2.43
Section
Artículos
