Hardware implementation of an asynchronous analog neural network with training based on unified CMOS IP blocks
- Authors: Petrov M.O., Ryndin E.A., Andreeva N.V.
- Institutions: Saint Petersburg Electrotechnical University ETU “LETI”
- Issue: Vol. 54, No. 4 (2025)
- Pages: 323–332
- Section: NEUROMORPHIC SYSTEMS
- URL: https://vestnik-pp.samgtu.ru/0544-1269/article/view/690997
- DOI: https://doi.org/10.31857/S0544126925040066
- EDN: https://elibrary.ru/qhftpm
- ID: 690997
Abstract
An approach to designing neuromorphic electronic devices based on convolutional neural networks with backpropagation training is presented. The approach is aimed at improving the energy efficiency and performance of autonomous systems. It relies on a neural network topology compiler built on five basic CMOS blocks intended for the analog implementation of all computational operations in both training and inference modes. The developed crossbar arrays of functional analog CMOS blocks with digitally controlled conductance levels perform the matrix-vector multiplication in the convolutional and fully connected layers without DACs; ADCs are required in the synaptic weight control circuits only in the training mode. The effectiveness of the approach is demonstrated on the digit classification problem, solved with an accuracy of 97.87% on test data using the developed model of a hardware implementation of an asynchronous analog neural network with training.
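A minimal behavioral sketch in Python may help illustrate the matrix-vector multiplication scheme described in the abstract. It models a differential crossbar in which signed weights are split across pairs of columns and each conductance is restricted to a discrete grid of digitally selectable levels. The number of levels (`LEVELS = 16`), the differential weight mapping, and all function names are illustrative assumptions, not details taken from the paper or its CMOS blocks.

```python
# Behavioral sketch only (not the authors' circuit model): a crossbar of
# analog blocks whose conductances are set digitally performs the
# matrix-vector multiplication of a layer in the analog domain.
import numpy as np

LEVELS = 16  # assumed number of digitally selectable conductance levels


def quantize_weights(w: np.ndarray, levels: int = LEVELS) -> np.ndarray:
    """Map continuous weights onto a discrete grid of conductance levels."""
    g_max = np.max(np.abs(w))
    if g_max == 0.0:
        return np.zeros_like(w)
    step = g_max / (levels - 1)
    return np.round(w / step) * step


def crossbar_mvm(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Differential crossbar MVM: signed weights are split into two
    non-negative conductance matrices; the column currents of the
    'negative' array are subtracted from those of the 'positive' one."""
    g = quantize_weights(w)
    g_pos = np.clip(g, 0.0, None)   # conductances of the positive columns
    g_neg = np.clip(-g, 0.0, None)  # conductances of the negative columns
    # Input voltages drive the rows; column currents sum per Kirchhoff's law.
    return x @ g_pos - x @ g_neg


# Usage: a 4-input, 3-output fully connected layer.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 3))
x = rng.standard_normal(4)
print(crossbar_mvm(x, w))  # quantized analog-style result
print(x @ w)               # ideal floating-point reference
```

Comparing the quantized output against the floating-point reference gives a quick feel for how the granularity of the digital conductance control bounds the precision of the analog computation.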
About the authors
M. Petrov
Saint Petersburg Electrotechnical University ETU “LETI”
Email: nvandr@gmail.com
St. Petersburg, Russia
E. Ryndin
Saint Petersburg Electrotechnical University ETU “LETI”
Email: nvandr@gmail.com
St. Petersburg, Russia
N. Andreeva
Saint Petersburg Electrotechnical University ETU “LETI”
Corresponding author.
Email: nvandr@gmail.com
St. Petersburg, Russia
References
- He K., Zhang X., Ren S., Sun J. Deep residual learning for image recognition // IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016. P. 770–778.
- Zagoruyko S., Komodakis N. Wide residual networks // arXiv:1605.07146. 2016. P. 1–15.
- Voulodimos A., Doulamis N., Doulamis A., Protopapadakis E. Deep learning for computer vision: A brief review // Computational Intelligence and Neuroscience. 2018. V. 2018. N. 1. 7068349.
- Goyal P., Pandey S., Jain K. Deep learning for natural language processing. Apress, Berkeley, CA. 2018. 277 p.
- Petrov M.O., Ryndin E.A., Andreeva N.V. Compiler for Hardware Design of Convolutional Neural Networks with Supervised Learning Based on Neuromorphic Electronic Blocks // 2024 Sixth International Conference Neurotechnologies and Neurointerfaces (CNN). 2024. P. 1–4.
- Petrov M.O., Ryndin E.A., Andreeva N.V. Automated design of deep neural networks with in-situ training architecture based on analog functional blocks // The European Physical Journal Special Topics. 2024. P. 1–14.
- Gupta I., Serb A., Khiat A., Zeitler R., Vassanelli S., Prodromakis T. Sub 100 nW volatile nano-metal-oxide memristor as synaptic-like encoder of neuronal spikes // IEEE Transactions on Biomedical Circuits and Systems. 2018. V. 12. N. 2. P. 351–359.
- Valueva M.V., Valuev G.V., Babenko M.G., Chernykh A., Cortes-Mendoza J.M. A method of hardware implementation of a convolutional neural network based on the residue number system // Proceedings of ISP RAS. 2022. V. 34. N. 3. P. 61–74.
- LeCun Y., Bengio Y., Hinton G. Deep learning // Nature. 2015. V. 521. N. 7553. P. 436–444.
- Goodfellow I., Bengio Y., Courville A. Regularization for deep learning // Deep learning. 2016. P. 216–261.
- Schmidhuber J. Deep learning in neural networks: An overview // Neural Networks. 2015. V. 61. P. 85–117.
- TensorFlow. MNIST dataset in Keras. 2024. URL: https://www.tensorflow.org/api_docs/python/tf/keras/datasets/mnist