To develop LightAutoML on GPUs using RAPIDS, some prerequisites need to be met: Anaconda or Miniconda is required to install RAPIDS and manage environments. Create and activate an environment, then clone the repository and install the library together with its GPU dependencies:
conda create -n lama_venv python=3.8
conda activate lama_venv
git clone https://github.com/Rishat-skoltech/LightAutoML_GPU.git
cd LightAutoML_GPU
pip install .
pip install catboost==1.0.4
conda install -c rapidsai -c nvidia -c conda-forge rapids=22.02 cudatoolkit=11.0
pip install dask-ml
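As a quick sanity check (optional, not part of the original setup), you can verify that the freshly installed packages are importable from the active environment:
python -c "import cudf; print(cudf.__version__)"
python -c "import dask_ml; print(dask_ml.__version__)"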
After you change the library code, you need to re-install the library: uninstall it with pip uninstall lightautoml, then go back to the LightAutoML_GPU directory and install it again with pip install .
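For example, a typical re-installation cycle looks like this (assuming the repository was cloned into LightAutoML_GPU as shown above; the -y flag only skips the confirmation prompt):
pip uninstall -y lightautoml
cd LightAutoML_GPU
pip install .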
Please note that if you use an NVIDIA GPU with the Ampere architecture (e.g. Tesla A100 or the RTX 3000 series), you may need to uninstall PyTorch and install it manually due to compatibility issues. To do so, run the following commands:
pip uninstall torch torchvision
pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 -f https://download.pytorch.org/whl/torch_stable.html
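As an optional check (not part of the original instructions), you can confirm that the CUDA-enabled build was picked up; the command below should print the 1.7.1+cu110 version string and True if the GPU is visible:
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"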
Once RAPIDS is installed, the environment is fully ready. You can activate it with the conda activate (or source activate) command and test and implement your own code.
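A minimal smoke test, assuming all of the steps above completed without errors, is to activate the environment and import the installed packages:
conda activate lama_venv
python -c "import lightautoml, cudf, torch; print('environment is ready')"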