YAMNet is a pretrained deep net that predicts 521 audio event classes based on the AudioSet-YouTube corpus, employing the Mobilenet_v1 depthwise-separable convolution architecture.
This directory contains the Keras code to construct the model, and example code for applying the model to input sound files.
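For reference, here is a minimal sketch of applying the pretrained model to a sound file, using the TensorFlow Hub release of YAMNet rather than the repo's Keras code (it assumes tensorflow, tensorflow_hub and soundfile are installed; the file name example.wav is only a placeholder):

import csv
import numpy as np
import soundfile as sf
import tensorflow as tf
import tensorflow_hub as hub

# Load the pretrained YAMNet from TensorFlow Hub.
model = hub.load('https://tfhub.dev/google/yamnet/1')

# YAMNet expects mono 16 kHz float32 samples in [-1.0, 1.0].
waveform, sr = sf.read('example.wav', dtype='float32')
if waveform.ndim > 1:
    waveform = waveform.mean(axis=1)   # down-mix to mono
assert sr == 16000, 'resample to 16 kHz before calling the model'

# The model returns per-frame class scores, 1024-d embeddings and the
# log-mel spectrogram it computed internally.
scores, embeddings, spectrogram = model(waveform)

# The 521 class names ship with the model as a CSV class map.
with tf.io.gfile.GFile(model.class_map_path().numpy()) as f:
    class_names = [row['display_name'] for row in csv.DictReader(f)]

# Print the top 5 classes averaged over all frames of the clip.
mean_scores = scores.numpy().mean(axis=0)
for i in np.argsort(mean_scores)[::-1][:5]:
    print(class_names[i], mean_scores[i])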
YAMNet seems more likely to recognize audio events than VGGish + YouTube-8M, but how can we train YAMNet in the first place?
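For what it's worth, the usual approach is transfer learning rather than retraining from scratch: keep the pretrained net frozen and fit a small classifier on its 1024-d embeddings. A rough sketch under that assumption (num_classes and the train_waveforms / train_labels names are placeholders for your own data):

import tensorflow as tf
import tensorflow_hub as hub

yamnet = hub.load('https://tfhub.dev/google/yamnet/1')

def clip_embedding(waveform):
    # Average YAMNet's per-frame embeddings into one clip-level vector.
    _, embeddings, _ = yamnet(waveform)
    return tf.reduce_mean(embeddings, axis=0)

num_classes = 3  # your own labels, not the 521 AudioSet classes
classifier = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1024,)),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(num_classes, activation='softmax'),
])
classifier.compile(optimizer='adam',
                   loss='sparse_categorical_crossentropy',
                   metrics=['accuracy'])

# train_waveforms / train_labels are placeholders for your own data:
# mono 16 kHz float32 waveforms and integer class labels.
# features = tf.stack([clip_embedding(w) for w in train_waveforms])
# classifier.fit(features, train_labels, epochs=20)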