Hanmin Park
Postdoctoral Researcher, Neural Processing Research Center, Seoul National University
Verified email at dal.snu.ac.kr
Title
Cited by
Year
GradPIM: A practical processing-in-DRAM architecture for gradient descent
H Kim, H Park, T Kim, K Cho, E Lee, S Ryu, HJ Lee, K Choi, J Lee
2021 IEEE International Symposium on High-Performance Computer Architecture …, 2021
29 · 2021
Position-based weighted round-robin arbitration for equality of service in many-core network-on-chips
H Park, K Choi
Proceedings of the Fifth International Workshop on Network on Chip …, 2012
16 · 2012
Adaptively weighted round-robin arbitration for equality of service in a many-core network-on-chip
H Park, K Choi
IET Computers & Digital Techniques 10 (1), 37-44, 2016
15 · 2016
Acceleration of DNN backward propagation by selective computation of gradients
G Lee, H Park, N Kim, J Yu, S Jo, K Choi
Proceedings of the 56th Annual Design Automation Conference 2019, 1-6, 2019
14 · 2019
Aging compensation with dynamic computation approximation
H Kim, J Kim, H Amrouch, J Henkel, A Gerstlauer, K Choi, H Park
IEEE Transactions on Circuits and Systems I: Regular Papers 67 (4), 1319-1332, 2020
12 · 2020
Training neural networks with low precision dynamic fixed-point
S Jo, H Park, G Lee, K Choi
2018 IEEE 36th International Conference on Computer Design (ICCD), 405-408, 2018
7 · 2018
Cell division: weight bit-width reduction technique for convolutional neural network hardware accelerators
H Park, K Choi
Proceedings of the 24th Asia and South Pacific Design Automation Conference …, 2019
5 · 2019
ComPreEND: Computation pruning through predictive early negative detection for ReLU in a deep neural network accelerator
N Kim, H Park, D Lee, S Kang, J Lee, K Choi
IEEE Transactions on Computers 71 (7), 1537-1550, 2021
3 · 2021
Leakage power reduction of functional units in processors having zero-overhead loop counter
H Park, JK Paek, J Lee, K Choi
2009 International SoC Design Conference (ISOCC), 492-495, 2009
3 · 2009
Method of accelerating training process of neural network and neural network device thereof
S Lee, H Park, G Lee, N Kim, J Yu, K Choi
US Patent App. 16/550,498, 2020
2 · 2020
Acceleration of DNN training regularization: Dropout accelerator
G Lee, H Park, S Ryu, HJ Lee
2020 International Conference on Electronics, Information, and Communication …, 2020
2 · 2020
Symbol partitioning optimization method for a bit-parallel variable-length decoder using a quantum-evolutionary algorithm
H Park, K Choi
Proceedings of the IEIE Conference, 31-34, 2011
2 · 2011
Acceleration of a quantum-inspired evolutionary algorithm using a GPU
J Ryu, H Park, K Choi
Journal of the Institute of Electronics Engineers of Korea - SD 49 (8), 1-9, 2012
1 · 2012
Low-power processor design using a zero-overhead loop counter
H Park, K Choi
Proceedings of the IEIE Conference, 455-456, 2008
1 · 2008
Method and apparatus with data processing
S Lee, N Kim, H Park, K Choi
US Patent App. 17/689,454, 2022
2022
Method and apparatus with data processing
S Lee, N Kim, H Park, K Choi
US Patent 11,301,209, 2022
2022
Reconfigurable Communication Architecture in Chip Multiprocessors for Equality-of-Service and High Performance
H Park
Seoul National University Graduate School, 2016
2016
Low-power processor design using a zero-overhead loop counter and a drowsy instruction cache
H Park, Y Kim, K Choi
Proceedings of the IEIE Conference, 67-68, 2010
2010
Articles 1–18