
Commit: b13628e
Author: faradox
Commit message: rewrite to use nendo
Parent: ed3c969
9 files changed: 619 additions & 804 deletions


.gitignore

Lines changed: 8 additions & 1 deletion

@@ -1,4 +1,11 @@
 input/
+nendo_library/
 library/
 processed/
-separated/
+separated/
+polymath_library
+polymath_input
+polymath_output
+.python-version
+polymath.egg-info
+__pycache__

Dockerfile

Lines changed: 1 addition & 4 deletions

@@ -1,14 +1,11 @@
 FROM python:3.10-bullseye
 
 RUN apt update
-RUN apt install -y rubberband-cli make automake gcc g++ python3-dev gfortran build-essential wget libsndfile1 ffmpeg
+RUN apt install -y rubberband-cli python3-dev libsndfile1 ffmpeg libportaudio2 libmpg123-dev
 
 RUN pip install --upgrade pip
 
 COPY . /polymath
 WORKDIR /polymath
 
 RUN pip install -r requirements.txt
-
-RUN mkdir -p input processed separated library
-

README.md

Lines changed: 89 additions & 65 deletions

@@ -1,14 +1,20 @@
-
 # Polymath
 
-Polymath uses machine learning to convert any music library (*e.g from Hard-Drive or YouTube*) into a music production sample-library. The tool automatically separates songs into stems (*beats, bass, etc.*), quantizes them to the same tempo and beat-grid (*e.g. 120bpm*), analyzes musical structure (*e.g. verse, chorus, etc.*), key (*e.g C4, E3, etc.*) and other infos (*timbre, loudness, etc.*), and converts audio to midi. The result is a searchable sample library that streamlines the workflow for music producers, DJs, and ML audio developers.
+Polymath uses machine learning to convert any music library (*e.g. from hard drive or YouTube*) into a music production sample library. The tool automatically separates tracks into stems (*drums, bass, etc.*), quantizes them to the same tempo and beat-grid (*e.g. 120 bpm*), analyzes tempo, key (*e.g. C4, E3, etc.*) and other info (*timbre, loudness, etc.*), and cuts loops out of them. The result is a searchable sample library that streamlines the workflow for music producers, DJs, and ML audio developers.
+
+Try it in Colab:
+<a target="_blank" href="https://colab.research.google.com/drive/1TjRVFdh1BPdQ_5_PL5EsfS278-EUYt90?usp=sharing">
+  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
+</a>
 
-<p align="center"><img alt="Polymath" src="https://samim.io/static/upload/illustration3.688a510b-bocuz8wh.png" /></p>
+![Polymath](docs/images/polymath.png)
 
 ## Use-cases
-Polymath makes it effortless to combine elements from different songs to create unique new compositions: Simply grab a beat from a Funkadelic track, a bassline from a Tito Puente piece, and fitting horns from a Fela Kuti song, and seamlessly integrate them into your DAW in record time. Using Polymath's search capability to discover related tracks, it is a breeze to create a polished, hour-long mash-up DJ set. For ML developers, Polymath simplifies the process of creating a large music dataset, for training generative models, etc.
+
+Polymath makes it effortless to combine elements from different tracks to create unique new compositions: simply grab a beat from a Funkadelic track, a bassline from a Tito Puente piece, and fitting horns from a Fela Kuti song, and seamlessly integrate them into your DAW in record time. Using Polymath's search capability to discover related tracks, it is a breeze to create a polished, hour-long mash-up DJ set. For ML developers, Polymath simplifies the process of creating a large music dataset for training generative models.
 
 ## How does it work?
+
 - Music Source Separation is performed with the [Demucs](https://github.com/facebookresearch/demucs) neural network
 - Music Structure Segmentation/Labeling is performed with the [sf_segmenter](https://github.com/wayne391/sf_segmenter) neural network
 - Music Pitch Tracking and Key Detection are performed with the [Crepe](https://github.com/marl/crepe) neural network
@@ -22,24 +28,42 @@ Join the Polymath Community on [Discord](https://discord.gg/gaZMZKzScj)
 
 ## Requirements
 
-You need to have the following software installed on your system:
+**Polymath requires Python version 3.8, 3.9 or 3.10.**
+
+> It is recommended to use a [virtual environment](https://docs.python.org/3/library/venv.html) in order to avoid dependency conflicts. You can use your favorite virtual environment management system, such as [conda](https://docs.conda.io/en/latest/), [poetry](https://python-poetry.org/), or [pyenv](https://github.com/pyenv/pyenv).
+
+Furthermore, the following software packages need to be installed on your system:
 
-- ``ffmpeg``
+- **Ubuntu**: `sudo apt-get install ffmpeg libsndfile1 libportaudio2 rubberband-cli libmpg123-dev`
+- **Mac OS**: `brew install ffmpeg libsndfile portaudio rubberband mpg123`
+- **Windows**
+
+> Windows support is currently under development. For the time being, we highly recommend using the [Windows Subsystem for Linux](https://learn.microsoft.com/en-us/windows/wsl/install) and then following the Linux instructions.
 
 ## Installation
 
-You need python version `>=3.7` and `<=3.10`. From your terminal run:
+You need Python version `>=3.8` and `<=3.10`. From your terminal run:
+
 ```bash
 git clone https://github.com/samim23/polymath
 cd polymath
 pip install -r requirements.txt
 ```
 
+## Troubleshooting
+
 If you run into an issue with basic-pitch while trying to run Polymath, run this command after your installation:
+
 ```bash
 pip install git+https://github.com/spotify/basic-pitch.git
 ```
 
+If you run into an issue with the `madmom` module missing (most likely because you've installed polymath via a `requirements.txt` file pointing to this GitHub repo as part of another Python app), install it manually:
+
+```bash
+pip install git+https://github.com/CPJKU/madmom.git@0551aa8
+```
+
 ## GPU support
 
 Most of the libraries polymath uses come with native GPU support through CUDA. Please follow the steps at https://www.tensorflow.org/install/pip to set up tensorflow for use with CUDA. If you have followed these steps, tensorflow and torch will both automatically pick up the GPU and use it. This only applies to native setups; for dockerized deployments (see next section), GPU support is forthcoming.
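Before launching a long analysis run, it can be worth confirming that the GPU is actually visible to both frameworks. A minimal standalone sketch (`gpu_report` is a hypothetical helper, not part of polymath; it tolerates either framework being absent):

```python
def gpu_report():
    """Report GPU visibility for torch and tensorflow, tolerating missing installs."""
    report = {}
    try:
        import torch
        # True when a CUDA device is available to torch
        report["torch_cuda"] = torch.cuda.is_available()
    except ImportError:
        report["torch_cuda"] = None  # torch not installed
    try:
        import tensorflow as tf
        # names of GPU devices visible to tensorflow
        report["tf_gpus"] = [d.name for d in tf.config.list_physical_devices("GPU")]
    except ImportError:
        report["tf_gpus"] = None  # tensorflow not installed
    return report

print(gpu_report())
```

If either entry comes back empty or `None` on a machine that has a GPU, revisit the CUDA setup steps linked above before running polymath.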
@@ -54,114 +78,114 @@ docker build -t polymath ./
 
 In order to exchange input and output files between your host system and the polymath docker container, you need to create the following three directories:
 
-- `./input`
-- `./library`
-- `./processed`
-- `./separated`
+- `./polymath_input`
+- `./polymath_library`
+- `./polymath_output`
 
 Now put any files you want to process with polymath into the `polymath_input` folder.
 Then you can run polymath through docker by using the `docker run` command and passing any arguments that you would originally pass to the python command, e.g. on a Linux OS:
 
 ```bash
 docker run \
-  -v "$(pwd)"/processed:/polymath/processed \
-  -v "$(pwd)"/separated:/polymath/separated \
-  -v "$(pwd)"/library:/polymath/library \
-  -v "$(pwd)"/input:/polymath/input \
-  polymath python /polymath/polymath.py -a ./input/song1.wav
+  -v "$(pwd)"/polymath_input:/polymath/polymath_input \
+  -v "$(pwd)"/polymath_library:/polymath/polymath_library \
+  -v "$(pwd)"/polymath_output:/polymath/polymath_output \
+  polymath python /polymath/polymath.py -i ./polymath_input/song1.wav -p -e
 ```
 
 ## Run Polymath
 
-### 1. Add songs to the Polymath Library
+To print the help for the python command line arguments:
+
+```bash
+python polymath.py -h
+```
+
+### 1. Add tracks to the Polymath Library
 
 ##### Add YouTube video to library (auto-download)
+
 ```bash
-python polymath.py -a n6DAqMFe97E
+python polymath.py -i n6DAqMFe97E
 ```
+
 ##### Add audio file (wav or mp3)
+
 ```bash
-python polymath.py -a /path/to/audiolib/song.wav
+python polymath.py -i /path/to/audiolib/song.wav
 ```
+
 ##### Add multiple files at once
-```bash
-python polymath.py -a n6DAqMFe97E,eaPzCHEQExs,RijB8wnJCN0
-python polymath.py -a /path/to/audiolib/song1.wav,/path/to/audiolib/song2.wav
-python polymath.py -a /path/to/audiolib/
-```
-Songs are automatically analyzed once which takes some time. Once in the database, they can be access rapidly. The database is stored in the folder "/library/database.p". To reset everything, simply delete it.
 
-### 2. Quantize songs in the Polymath Library
-##### Quantize a specific songs in the library to tempo 120 BPM (-q = database audio file ID, -t = tempo in BPM)
 ```bash
-python polymath.py -q n6DAqMFe97E -t 120
+python polymath.py -i n6DAqMFe97E,eaPzCHEQExs,RijB8wnJCN0
+python polymath.py -i /path/to/audiolib/song1.wav,/path/to/audiolib/song2.wav
+python polymath.py -i /path/to/audiolib/
+# you can even mix imports:
+python polymath.py -i /path/to/audiolib/,n6DAqMFe97E,/path/to/song2.wav
 ```
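Because `-i` takes one comma-separated argument mixing local paths and YouTube IDs, batch imports are easy to script. A minimal sketch (`build_import_command` is a hypothetical helper; it only assembles the command line documented above):

```python
def build_import_command(sources, script="polymath.py"):
    """Build the polymath import command for a mix of local paths and YouTube IDs."""
    # polymath accepts a single comma-separated -i argument
    return ["python", script, "-i", ",".join(str(s) for s in sources)]

cmd = build_import_command(["/path/to/audiolib/", "n6DAqMFe97E"])
# hand cmd to subprocess.run(cmd) to execute the import
print(cmd)
```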
-##### Quantize all songs in the library to tempo 120 BPM
-```bash
-python polymath.py -q all -t 120
-```
-##### Quantize a specific songs in the library to the tempo of the song (-k)
-```bash
-python polymath.py -q n6DAqMFe97E -k
-```
-Songs are automatically quantized to the same tempo and beat-grid and saved to the folder "/processed".
 
-### 3. Search for similar songs in the Polymath Library
-##### Search for 10 similar songs based on a specific songs in the library (-s = database audio file ID, -sa = results amount)
+Once in the database, tracks can be searched through, processed and exported. The database is stored by default in the folder "./polymath_library". To change the library folder, use the `--library_path` console argument. To reset everything, simply delete that directory.
+
+### 2. Quantize tracks in the Polymath Library
+
+##### Find a specific song in the library and quantize it to tempo 120 BPM (-f = find ID in library, -q = quantize to tempo in BPM)
+
 ```bash
-python polymath.py -s n6DAqMFe97E -sa 10
+python polymath.py -f n6DAqMFe97E -q 120
 ```
-##### Search for similar songs based on a specific songs in the library and quantize all of them to tempo 120 BPM
+
+##### Quantize all tracks in the library to tempo 120 BPM
+
 ```bash
-python polymath.py -s n6DAqMFe97E -sa 10 -q all -t 120
+python polymath.py -q 120
 ```
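Under the hood, quantizing to a fixed tempo amounts to time-stretching each track by the ratio of target to detected BPM. A back-of-the-envelope sketch of that arithmetic (not polymath's actual implementation):

```python
def stretch_ratio(detected_bpm: float, target_bpm: float) -> float:
    """Time-stretch factor that maps a track's detected tempo onto the target tempo."""
    if detected_bpm <= 0:
        raise ValueError("detected_bpm must be positive")
    # ratio > 1 speeds the track up, ratio < 1 slows it down
    return target_bpm / detected_bpm

# a 96 BPM track quantized to 120 BPM is sped up by a factor of 1.25
print(stretch_ratio(96, 120))
```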
-##### Include BPM as search criteria (-st)
+
+### 3. Search for specific tracks in the Polymath Library
+
+##### Find tracks with specific search keys in the library and export them
+
 ```bash
-python polymath.py -s n6DAqMFe97E -sa 10 -q all -t 120 -st -k
+python polymath.py -f n6DAqMFe97E,my_song.mp3 -e
 ```
-Similar songs are automatically found and optionally quantized and saved to the folder "/processed". This makes it easy to create for example an hour long mix of songs that perfectly match one after the other.
 
-### 4. Convert Audio to MIDI
-##### Convert all processed audio files and stems to MIDI (-m)
+The default export directory is `./polymath_output`. To specify a different directory, use the `-o /path/to/my/output/dir` flag.
+
+##### Find tracks in a specific BPM range (-bmin and -bmax) and also export loops (-fl)
+
 ```bash
-python polymath.py -a n6DAqMFe97E -q all -t 120 -m
+python polymath.py -bmin 80 -bmax 100 -fl -e
 ```
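Conceptually, the BPM-range search is a filter over each track's analyzed tempo. An illustrative sketch with made-up data (`filter_by_bpm` and the track names are hypothetical, not polymath's internals):

```python
def filter_by_bpm(tracks, bmin, bmax):
    """Keep tracks whose analyzed tempo falls within [bmin, bmax]."""
    return [name for name, bpm in tracks if bmin <= bpm <= bmax]

# (name, analyzed tempo in BPM) pairs, as the analysis step might produce
library = [("funk_break.wav", 92.0), ("salsa_horns.wav", 180.0), ("afrobeat.wav", 88.5)]
print(filter_by_bpm(library, 80, 100))  # → ['funk_break.wav', 'afrobeat.wav']
```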
-Generated Midi Files are currently always 120BPM and need to be time adjusted in your DAW. This will be resolved [soon](https://github.com/spotify/basic-pitch/issues/40). The current Audio2Midi model gives mixed results with drums/percussion. This will be resolved with additional audio2midi model options in the future.
-
 
 ## Audio Features
 
 ### Extracted Stems
-The Demucs Neural Net has settings that can be adjusted in the python file
+
+Stems are extracted with the [nendo stemify plugin](https://github.com/okio-ai/nendo_plugin_stemify_demucs/). Extracted stem types are:
+
 ```bash
 - bass
 - drum
-- guitare
-- other
-- piano
 - vocals
+- other
 ```
+
 ### Extracted Features
-The audio feature extractors have settings that can be adjusted in the python file
+
+Music Information Retrieval features are computed using the [nendo classify plugin](https://github.com/okio-ai/nendo_plugin_classify_core/). Extracted features are:
+
 ```bash
 - tempo
 - duration
-- timbre
-- timbre_frames
-- pitch
-- pitch_frames
 - intensity
-- intensity_frames
-- volume
 - avg_volume
 - loudness
-- beats
-- segments_boundaries
-- segments_labels
-- frequency_frames
 - frequency
 - key
 ```
 
 ## License
+
 Polymath is released under the MIT license as found in the [LICENSE](https://github.com/samim23/polymath/blob/main/LICENSE) file.
+
+As for [nendo core](https://github.com/okio-ai/nendo) and the [plugins used in polymath](#how-does-it-work), see their respective repositories for information about their licenses.

__init__.py

Whitespace-only changes.

docs/images/polymath.png

881 KB
