Toshihiko Yamasaki
Department of Information and Communication Engineering, The University of Tokyo
Verified email at cvm.t.u-tokyo.ac.jp
Title · Cited by · Year
Sketch-based manga retrieval using manga109 dataset
Y Matsui, K Ito, Y Aramaki, A Fujimoto, T Ogawa, T Yamasaki, K Aizawa
Multimedia tools and applications 76, 21811-21838, 2017
Cited by 1144 · 2017
Joint optimization framework for learning with noisy labels
D Tanaka, D Ikami, T Yamasaki, K Aizawa
Proceedings of the IEEE conference on computer vision and pattern …, 2018
Cited by 785 · 2018
Cross-domain weakly-supervised object detection through progressive domain adaptation
N Inoue, R Furuta, T Yamasaki, K Aizawa
Proceedings of the IEEE conference on computer vision and pattern …, 2018
Cited by 585 · 2018
Detecting deepfakes with self-blended images
K Shiohara, T Yamasaki
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
Cited by 205 · 2022
Efficient retrieval of life log based on context and content
K Aizawa, D Tancharoen, S Kawasaki, T Yamasaki
Proceedings of the the 1st ACM workshop on Continuous archival and retrieval …, 2004
Cited by 185 · 2004
Manga109 dataset and creation of metadata
A Fujimoto, T Ogawa, K Yamamoto, Y Matsui, T Yamasaki, K Aizawa
Proceedings of the 1st international workshop on comics analysis, processing …, 2016
Cited by 158 · 2016
Foodlog: Capture, analysis and retrieval of personal food images via web
K Kitamura, T Yamasaki, K Aizawa
Proceedings of the ACM multimedia 2009 workshop on Multimedia for cooking …, 2009
Cited by 122 · 2009
Self-supervised video representation learning using inter-intra contrastive framework
L Tao, X Wang, T Yamasaki
Proceedings of the 28th ACM international conference on multimedia, 2193-2201, 2020
Cited by 98 · 2020
Food log by analyzing food images
K Kitamura, T Yamasaki, K Aizawa
Proceedings of the 16th ACM international conference on Multimedia, 999-1000, 2008
Cited by 98 · 2008
PixelRL: Fully convolutional network with reinforcement learning for image processing
R Furuta, N Inoue, T Yamasaki
IEEE Transactions on Multimedia 22 (7), 1704-1719, 2019
Cited by 94 · 2019
Affective audio-visual words and latent topic driving model for realizing movie affective scene classification
G Irie, T Satou, A Kojima, T Yamasaki, K Aizawa
IEEE Transactions on Multimedia 12 (6), 523-535, 2010
Cited by 94 · 2010
Practical experience recording and indexing of life log video
D Tancharoen, T Yamasaki, K Aizawa
Proceedings of the 2nd ACM workshop on Continuous archival and retrieval of …, 2005
Cited by 93 · 2005
Mask-SLAM: Robust feature-based monocular SLAM by masking using semantic segmentation
M Kaneko, K Iwami, T Ogawa, T Yamasaki, K Aizawa
Proceedings of the IEEE conference on computer vision and pattern …, 2018
Cited by 90 · 2018
Image-based indoor positioning system: fast image matching using omnidirectional panoramic images
H Kawaji, K Hatada, T Yamasaki, K Aizawa
Proceedings of the 1st ACM international workshop on Multimodal pervasive …, 2010
Cited by 88 · 2010
Efficient optimization of convolutional neural networks using particle swarm optimization
T Yamasaki, T Honma, K Aizawa
2017 IEEE third international conference on multimedia big data (BigMM), 70-73, 2017
Cited by 80 · 2017
Object detection for comics using manga109 annotations
T Ogawa, A Otsubo, R Narita, Y Matsui, T Yamasaki, K Aizawa
arXiv preprint arXiv:1803.08670, 2018
Cited by 79 · 2018
Analog soft-pattern-matching classifier using floating-gate MOS technology
T Yamasaki, T Shibata
IEEE Transactions on Neural Networks 14 (5), 1257-1265, 2003
Cited by 77 · 2003
Multi-label fashion image classification with minimal human supervision
N Inoue, E Simo-Serra, T Yamasaki, H Ishikawa
Proceedings of the IEEE international conference on computer vision …, 2017
Cited by 71 · 2017
Unpaired image enhancement featuring reinforcement-learning-controlled image editing software
S Kosugi, T Yamasaki
Proceedings of the AAAI conference on artificial intelligence 34 (07), 11296 …, 2020
Cited by 68 · 2020
Learning from synthetic shadows for shadow detection and removal
N Inoue, T Yamasaki
IEEE Transactions on Circuits and Systems for Video Technology 31 (11), 4187 …, 2020
Cited by 67 · 2020
Articles 1–20