- MATE stands for Manycore computing for Acceleration of Tensor Execution.
- 🐍 Anaconda is recommended for using and developing MATE.
- 🐧 MATE is tested on, and recommended for, Linux distributions.
First, clone the latest version of this repository.

```bash
git clone https://github.com/cxinsys/mate
```
Now, install MATE as an editable module.

```bash
cd mate
pip install -e .
```
- The default backend framework of the `MATE` class is PyTorch.
- [recommended] To use the PyTorch Lightning framework, use a separate class called `MATELightning` (see the MATELightning class below).
MATE supports several optional backend frameworks, such as CuPy, JAX, and TensorFlow. To use an optional framework, install it manually.
Install CuPy from conda-forge with the CUDA toolkit version supported by your driver (check your CUDA version first):

```bash
conda install -c conda-forge cupy cuda-version=xx.x
```
Install JAX with CUDA 12.x:

```bash
pip install -U "jax[cuda12]"
```
Install TensorFlow with GPU support:

```bash
python3 -m pip install "tensorflow[and-cuda]"
```
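To see which optional backends are actually importable in your environment, a quick standard-library probe can help. This helper is our own convenience sketch, not part of the MATE API:

```python
# Probe which optional MATE backend packages are importable.
# This is a convenience sketch, not part of MATE itself.
from importlib.util import find_spec

OPTIONAL_BACKENDS = ["torch", "cupy", "jax", "tensorflow"]

def available_backends(names=OPTIONAL_BACKENDS):
    """Return the subset of packages that can be imported on this machine."""
    return [name for name in names if find_spec(name) is not None]

print(available_backends())
```

Running this prints the installed subset, so you know which `backend` values are worth passing to MATE.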
```python
import mate

worker = mate.MATE()
```
- arr: NumPy array for transfer entropy calculation, required
- pairs: NumPy array of calculation pairs, optional, default: all possible pairs among the nodes in arr
- backend: optional, default: 'cpu'
- device_ids: optional, default: [0] (cpu), [list of all GPU devices] (gpu)
- procs_per_device: number of processes to create per device when using non-'cpu' devices, optional, default: 1
- batch_size: required
- kp: kernel percentile, optional, default: 0.5
- dt: history length, optional, default: 1
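The documented default for `pairs` (every possible pair among the nodes in `arr`) can be sketched with the standard library. The helper name is ours, and the exclusion of self-pairs is an assumption, not confirmed MATE behavior:

```python
from itertools import permutations

def all_node_pairs(num_nodes):
    """All ordered (source, target) pairs among num_nodes nodes,
    excluding self-pairs -- a sketch of the documented default."""
    return list(permutations(range(num_nodes), 2))

print(all_node_pairs(3))  # [(0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1)]
```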
```python
result_matrix = worker.run(arr=arr,
                           pairs=pairs,
                           backend=backend,
                           device_ids=device_ids,
                           procs_per_device=procs_per_device,
                           batch_size=batch_size,
                           kp=kp,
                           dt=dt,
                           )
```
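`batch_size` controls how many pairs are processed at once on each device. A minimal sketch of that chunking (our illustration, not MATE internals):

```python
def batched(pairs, batch_size):
    """Split a pair list into consecutive chunks of at most batch_size."""
    return [pairs[i:i + batch_size] for i in range(0, len(pairs), batch_size)]

chunks = batched([(0, 1), (0, 2), (1, 0), (1, 2)], batch_size=3)
print(chunks)  # [[(0, 1), (0, 2), (1, 0)], [(1, 2)]]
```

Larger batches generally improve GPU utilization at the cost of memory, which is why the parameter is required rather than defaulted.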
- arr: NumPy array for transfer entropy calculation, required
- pairs: NumPy array of calculation pairs, optional, default: all possible pairs among the nodes in arr
- kp: kernel percentile, optional, default: 0.5
- len_time: total length of the expression array, optional, default: column length of the array
- dt: history length of the expression array, optional, default: 1
```python
import mate

worker = mate.MATELightning(arr=arr,
                            pairs=pairs,
                            kp=kp,
                            len_time=len_time,
                            dt=dt)
```
The parameters of MATELightning's run function take the same values as PyTorch's DataLoader and PyTorch Lightning's Trainer. For additional parameter options, see those libraries' documentation.
- backend: required, e.g. 'gpu', 'cpu', or 'tpu'
- devices: required, int or [list of device IDs]
- batch_size: required
- num_workers: optional, default: 0
```python
result_matrix = worker.run(backend=backend,
                           devices=devices,
                           batch_size=batch_size,
                           num_workers=num_workers)
```
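The returned `result_matrix` holds node-by-node transfer entropy scores, which are typically thresholded into a directed network. A sketch under the assumption (ours, not stated by MATE) that entry `[i][j]` scores the edge i → j:

```python
def threshold_edges(result_matrix, cutoff):
    """Keep directed edges (i, j, score) whose score exceeds cutoff.
    Assumes result_matrix[i][j] scores the edge i -> j (our convention)."""
    return [(i, j, s)
            for i, row in enumerate(result_matrix)
            for j, s in enumerate(row)
            if i != j and s > cutoff]

tiny = [[0.0, 0.8], [0.1, 0.0]]
print(threshold_edges(tiny, 0.5))  # [(0, 1, 0.8)]
```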
- Add 'jax' backend module
- Implement 'pytorch lightning' backend module
- Add 'tensorflow' backend module