Lei Wang
Title
Cited by
Year
Laius: An 8-bit fixed-point CNN hardware inference engine
Z Li, L Wang, S Guo, Y Deng, Q Dou, H Zhou, W Lu
2017 IEEE International Symposium on Parallel and Distributed Processing …, 2017
Cited by 37 · 2017
SNEAP: a fast and efficient toolchain for mapping large-scale spiking neural network onto NoC-based neuromorphic platform
S Li, S Guo, L Zhang, Z Kang, S Wang, W Shi, L Wang, W Xu
Proceedings of the 2020 on Great Lakes Symposium on VLSI, 9-14, 2020
Cited by 22 · 2020
A memristor-based spiking neural network with high scalability and learning efficiency
Z Zhao, L Qu, L Wang, Q Deng, N Li, Z Kang, S Guo, W Xu
IEEE Transactions on Circuits and Systems II: Express Briefs 67 (5), 931-935, 2020
Cited by 21 · 2020
A systolic SNN inference accelerator and its co-optimized software framework
S Guo, L Wang, S Wang, Y Deng, Z Yang, S Li, Z Xie, Q Dou
Proceedings of the 2019 on Great Lakes Symposium on VLSI, 63-68, 2019
Cited by 19 · 2019
In-time estimation for influence maximization in large-scale social networks
X Liu, S Li, X Liao, L Wang, Q Wu
Proceedings of the fifth workshop on social network systems, 1-6, 2012
Cited by 16 · 2012
A neural architecture search based framework for liquid state machine design
S Tian, L Qu, L Wang, K Hu, N Li, W Xu
Neurocomputing 443, 174-182, 2021
Cited by 13 · 2021
SIES: A novel implementation of spiking convolutional neural network inference engine on field-programmable gate array
SQ Wang, L Wang, Y Deng, ZJ Yang, SS Guo, ZY Kang, YF Guo, WX Xu
Journal of Computer Science and Technology 35, 475-489, 2020
Cited by 13 · 2020
Efficient and hardware-friendly methods to implement competitive learning for spiking neural networks
L Qu, Z Zhao, L Wang, Y Wang
Neural Computing and Applications 32 (17), 13479-13490, 2020
Cited by 12 · 2020
Bactran: a hardware batch normalization implementation for CNN training engine
Y Zhijie, W Lei, L Li, L Shiming, G Shasha, W Shuquan
IEEE Embedded Systems Letters 13 (1), 29-32, 2020
Cited by 12 · 2020
The design of asynchronous microprocessor based on optimized NCL_X design-flow
G Jin, L Wang, Z Wang
2009 IEEE International Conference on Networking, Architecture, and Storage …, 2009
Cited by 10 · 2009
Hybrid deblur net: Deep non-uniform deblurring with event camera
L Zhang, H Zhang, J Chen, L Wang
IEEE Access 8, 148075-148083, 2020
Cited by 9 · 2020
Shielding STT-RAM based register files on GPUs against read disturbance
H Zhang, X Chen, N Xiao, L Wang, F Liu, W Chen, Z Chen
ACM Journal on Emerging Technologies in Computing Systems (JETC) 13 (2), 1-17, 2016
Cited by 9 · 2016
Know by a handful the whole sack: efficient sampling for top-k influential user identification in large graphs
X Liu, S Li, X Liao, S Peng, L Wang, Z Kong
World Wide Web 17, 627-647, 2014
Cited by 9 · 2014
A low-power and high-PSNR unified DCT/IDCT architecture based on EARC and enhanced scale factor approximation
J Zhang, W Shi, L Zhou, R Gong, L Wang, H Zhou
IEEE Access 7, 165684-165691, 2019
Cited by 8 · 2019
LSMCore: A 69k-Synapse/mm² Single-Core Digital Neuromorphic Processor for Liquid State Machine
L Wang, Z Yang, S Guo, L Qu, X Zhang, Z Kang, W Xu
IEEE Transactions on Circuits and Systems I: Regular Papers 69 (5), 1976-1989, 2022
Cited by 7 · 2022
Recurrent Neural Architecture Search based on Randomness-Enhanced Tabu Algorithm
K Hu, S Tian, S Guo, N Li, L Luo, L Wang
2020 International Joint Conference on Neural Networks (IJCNN), 1-8, 2020
Cited by 7 · 2020
A noise filter for dynamic vision sensors using self-adjusting threshold
S Guo, Z Kang, L Wang, L Zhang, X Chen, S Li, W Xu
arXiv preprint arXiv:2004.04079, 2020
Cited by 7 · 2020
An overhead-free max-pooling method for SNN
S Guo, L Wang, B Chen, Q Dou
IEEE Embedded Systems Letters 12 (1), 21-24, 2019
Cited by 7 · 2019
Systolic array based accelerator and algorithm mapping for deep learning algorithms
Z Yang, L Wang, D Ding, X Zhang, Y Deng, S Li, Q Dou
Network and Parallel Computing: 15th IFIP WG 10.3 International Conference …, 2018
Cited by 7 · 2018
FixCaffe: Training CNN with low precision arithmetic operations by fixed point Caffe
S Guo, L Wang, B Chen, Q Dou, Y Tang, Z Li
International Workshop on Advanced Parallel Processing Technologies, 38-50, 2017
Cited by 7 · 2017
Articles 1–20