Shingo Kuroiwa
Title
Cited by
Year
Reverberant speech recognition based on denoising autoencoder.
T Ishii, H Komiyama, T Shinozaki, Y Horiuchi, S Kuroiwa
Interspeech, 3512-3516, 2013
136 · 2013
AURORA-2J: An evaluation framework for Japanese noisy speech recognition
S Nakamura, K Takeda, K Yamamoto, T Yamada, S Kuroiwa, N Kitaoka, ...
IEICE transactions on information and systems 88 (3), 535-544, 2005
105 · 2005
Dimensionality reduction using non-negative matrix factorization for information retrieval
S Tsuge, M Shishibori, S Kuroiwa, K Kita
2001 IEEE International Conference on Systems, Man and Cybernetics. e …, 2001
101 · 2001
Design of class-E amplifier with MOSFET linear gate-to-drain and nonlinear drain-to-source capacitances
X Wei, H Sekiya, S Kuroiwa, T Suetsugu, MK Kazimierczuk
IEEE Transactions on Circuits and Systems I: Regular Papers 58 (10), 2556-2565, 2011
87 · 2011
Category classification and topic discovery of Japanese and English news articles
DB Bracewell, J Yan, F Ren, S Kuroiwa
Electronic Notes in Theoretical Computer Science 225, 51-65, 2009
68 · 2009
The creation of a Chinese emotion ontology based on HowNet.
J Yan, DB Bracewell, F Ren, S Kuroiwa
Engineering Letters 16 (1), 2008
60 · 2008
CENSREC-1-AV: An audio-visual corpus for noisy bimodal speech recognition
S Tamura, C Miyajima, N Kitaoka, T Yamada, S Tsuge, T Takiguchi, ...
Training 720, 480, 2010
53 · 2010
Sign language recognition based on position and movement using multi-stream HMM
M Maebatake, I Suzuki, M Nishida, Y Horiuchi, S Kuroiwa
2008 Second International Symposium on Universal Communication, 478-481, 2008
53 · 2008
Retracted: Recognition of emotion with SVMs
Z Teng, F Ren, S Kuroiwa
Computational Intelligence: International Conference on Intelligent …, 2006
42 · 2006
CENSREC-1-C: An evaluation framework for voice activity detection under noisy environments
N Kitaoka, T Yamada, S Tsuge, C Miyajima, K Yamamoto, T Nishiura, ...
Acoustical Science and Technology 30 (5), 363-371, 2009
41 · 2009
CENSREC-4: development of evaluation framework for distant-talking speech recognition under reverberant environments.
M Nakayama, T Nishiura, Y Denda, N Kitaoka, K Yamamoto, T Yamada, ...
INTERSPEECH, 968-971, 2008
41 · 2008
Development of VAD evaluation framework CENSREC-1-C and investigation of relationship between VAD and speech recognition performance
N Kitaoka, K Yamamoto, T Kusamizu, S Nakagawa, T Yamada, S Tsuge, ...
2007 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU …, 2007
41 · 2007
Nonverbal voice emotion analysis system
S Mitsuyoshi
International journal of innovative computing, Information and control 12 (4 …, 2006
41 · 2006
Data collection and evaluation of AURORA-2 Japanese corpus [speech recognition applications]
S Nakamura, K Yamamoto, K Takeda, S Kuroiwa, N Kitaoka, T Yamada, ...
2003 IEEE Workshop on Automatic Speech Recognition and Understanding (IEEE …, 2003
39 · 2003
Speech endpoint detection method and apparatus and continuous speech recognition method and apparatus
M Naito, S Kuroiwa, K Takeda, S Yamamoto
US Patent 5,740,318, 1998
35 · 1998
Estimating human emotions using wording and sentence patterns
K Matsumoto, J Minato, F Ren, S Kuroiwa
2005 IEEE International Conference on Information Acquisition, 6 pp., 2005
32 · 2005
Wind noise reduction method for speech recording using multiple noise templates and observed spectrum fine structure
S Kuroiwa, Y Mori, S Tsuge, M Takashina, F Ren
2006 International Conference on Communication Technology, 1-5, 2006
31 · 2006
Sentence alignment using P-NNT and GMM
MA Fattah, DB Bracewell, F Ren, S Kuroiwa
Computer Speech & Language 21 (4), 594-608, 2007
29 · 2007
Semi-automatic emotion recognition from textual input based on the constructed emotion thesaurus
Y Zhang, Z Li, F Ren, S Kuroiwa
2005 International Conference on Natural Language Processing and Knowledge …, 2005
27 · 2005
Missing feature theory applied to robust speech recognition over IP network
T Endo, S Kuroiwa, S Nakamura
IEICE transactions on information and systems 87 (5), 1119-1126, 2004
27 · 2004
Articles 1–20