Automatic Music Generation
Balachandar B - 810020104011
Barath S - 810020104013
Guided by
Mr. M. PrasannaKumar
CONTENTS
Name of the Project
Abstract
Introduction
Stakeholders
Use case
References
Schedule
INTRODUCTION
• With the advancements in technology, artificial intelligence and
machine learning have made it possible to create music with the
help of computers.
• LSTM (Long Short-Term Memory) is a type of recurrent neural network that has been widely
used in various fields, including natural language processing,
speech recognition, and music generation.
• The automatic music generation system using LSTM has the
objective of producing new and original music compositions that
mimic human creativity.
• We will discuss the objectives of the automatic music generation
system using LSTM. We will also explore how this system works,
its advantages, and its potential applications in the field of music
composition and production.
OBJECTIVES
The main objectives of an automatic music generation system using Long Short-Term Memory
(LSTM) are:
• Generate new and unique musical compositions by learning from existing music data and
mimicking human creativity.
• Produce music in real time and let users control and customize the generated music by
specifying parameters such as tempo, key, and genre.
• Serve as a valuable tool for music composition and production, helping musicians and
producers generate new musical ideas and experiment with different styles and genres.
• Advance the field of music composition and production by providing an innovative and
efficient way to generate original and creative music.
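For illustration, below is a minimal sketch of such an LSTM generator, assuming Keras and a training corpus already encoded as integer note sequences; the vocabulary size, layer widths, and temperature-sampling loop are illustrative assumptions, not the project's final configuration.

# Minimal LSTM next-note model (illustrative sketch; assumes Keras and
# notes encoded as integers in [0, VOCAB_SIZE)).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 128  # assumed number of distinct notes/chords
SEQ_LEN = 50      # notes of context fed to the network

model = keras.Sequential([
    layers.Embedding(VOCAB_SIZE, 64),
    layers.LSTM(256),
    layers.Dense(VOCAB_SIZE, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
# Train on (context window, next note) pairs sliced from the corpus:
# model.fit(X, y, epochs=..., batch_size=...)

def generate(seed, n_notes=100, temperature=1.0):
    """Sample n_notes one at a time, feeding each prediction back in."""
    notes = list(seed)
    for _ in range(n_notes):
        context = np.array([notes[-SEQ_LEN:]])
        probs = model.predict(context, verbose=0)[0]
        # Temperature rescales the distribution: one simple knob that
        # lets users control how conservative or adventurous the output is.
        probs = np.exp(np.log(probs + 1e-9) / temperature)
        probs /= probs.sum()
        notes.append(int(np.random.choice(VOCAB_SIZE, p=probs)))
    return notes

User-facing parameters such as tempo, key, and genre can be exposed in a similar way, for example by conditioning the training data on genre labels or by constraining sampled notes to a chosen key.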
STAKEHOLDERS
Music educators
Music enthusiasts
EXISTING SYSTEM
WaveNet, developed by Google DeepMind, is the existing system against which our model is
positioned.
The building blocks of WaveNet are causal dilated 1D convolution layers, arranged as follows
(see the code sketch below):
• Input is fed into a causal 1D convolution.
• The output is then fed to two different dilated 1D convolution layers, one with a sigmoid
activation and one with a tanh activation.
• The element-wise multiplication of the two activation outputs produces the skip connection,
and the element-wise addition of the skip connection and the causal 1D output produces the
residual.
• A major drawback is that WaveNet requires large amounts of high-quality audio data to train
effectively, which can be a challenge in applications where such data is scarce or difficult to obtain.
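For reference, below is a minimal sketch of the gated residual block described above, assuming Keras; the filter count, kernel size, and dilation rate are illustrative assumptions, not WaveNet's published configuration.

# Illustrative WaveNet-style gated residual block (sketch, assuming Keras;
# x is expected to already have `filters` channels, e.g. from the
# initial causal convolution).
from tensorflow import keras
from tensorflow.keras import layers

def residual_block(x, dilation_rate, filters=32, kernel_size=2):
    # Two dilated causal convolutions over the same input,
    # one gated by tanh and one by sigmoid.
    tanh_out = layers.Conv1D(filters, kernel_size, padding="causal",
                             dilation_rate=dilation_rate,
                             activation="tanh")(x)
    sigm_out = layers.Conv1D(filters, kernel_size, padding="causal",
                             dilation_rate=dilation_rate,
                             activation="sigmoid")(x)
    # Element-wise multiplication of the two activations -> skip connection.
    skip = layers.Multiply()([tanh_out, sigm_out])
    skip = layers.Conv1D(filters, 1)(skip)  # 1x1 conv to match channel counts
    # Element-wise addition of the skip connection and the block input -> residual.
    residual = layers.Add()([x, skip])
    return residual, skip

Stacking such blocks with exponentially increasing dilation rates (1, 2, 4, 8, ...) gives the network a very large receptive field, which is how WaveNet captures long-range structure in raw audio.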
USE CASE
REFERENCES
1. Briot, J., Hadjeres, G., & Pachet, F. (2019). Deep learning techniques for music generation: A
survey. arXiv preprint arXiv:1906.02516.
2. Liang, J., Li, M., & Yang, Z. (2019). Automatic music generation with deep learning: A review.
IEEE Access, 7, 106762-106781.
3. Xie, H., Yao, Y., & Gong, Y. (2018). A real-time music generation system with LSTM-RNN.
In Proceedings of the 2018 IEEE 3rd International Conference on Image, Vision and Computing
(ICIVC) (pp. 643-648).
4. Oore, S., Simon, I., & Dieleman, S. (2018). Real-time polyphonic music generation with
recurrent neural networks. Journal of Creative Music Systems, 2(1), 1-18.
GANTT CHART
GRAPHICAL ABSTRACT
HARDWARE REQUIREMENTS:
● A computer with at least 8 GB of RAM
● A modern CPU (e.g., Intel Core i3 or equivalent)
● Sufficient storage space for the data and any
necessary software
However, for optimal performance and to ensure that the algorithm runs smoothly, a computer
with higher specifications is recommended, such as more RAM, a faster multi-core CPU, and a
dedicated GPU for model training.
Balachandar B
810020104011
Role: Data analysis, Model training, Module analysis, Model fitting
DATA FLOW DIAGRAM
THANK YOU