Customized Jupyter environments on Google Cloud

Kaggle Docker images come with a large set of pre-installed machine-learning packages, including support for GPU computing. They run within a container as a Jupyter application that users access through its web interface. Running a custom image boils down to these steps:

  • đź’ˇ pulling the right version from the container registry
  • âť— publishing with appropriate parameters (--runtime flag important for GPU support)

Below we can see how this looks:

(base) maciej.skorski@shared-notebooks:~$ docker pull gcr.io/kaggle-gpu-images/python:v128
v128: Pulling from kaggle-gpu-images/python
d5fd17ec1767: Pulling fs layer 
...
(base) maciej.skorski@shared-notebooks:~$ sudo docker run \
>    --name "/payload-container" \
>    --runtime "nvidia" \
>    --volume "/home/jupyter:/home/jupyter" \
>    --mount type=bind,source=/opt/deeplearning/jupyter/jupyter_notebook_config.py,destination=/opt/jupyter/.jupyter/jupyter_notebook_config.py,readonly \
>    --log-driver "json-file" \
>    --restart "always" \
>    --publish "127.0.0.1:8080:8080/tcp" \
>    --network "bridge" \
>    --expose "8080/tcp" \
>    --label "kaggle-lang"="python" \
>    --detach \
>    --tty \
>    --entrypoint "/entrypoint.sh" \
>    "gcr.io/kaggle-gpu-images/python:v128" \
>    "/run_jupyter.sh" 
cf1b6f63d729d357ef3a320dfab076001a3513c54344be7ae3a5af9789395e63
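Before opening the browser, it is worth checking that the Jupyter server actually answers on the published port. A minimal sketch using only the Python standard library (the helper `port_open` is hypothetical, not part of the setup above):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP server accepts connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Once the container above is up, the published Jupyter port should accept connections:
# port_open("127.0.0.1", 8080)  -> True
```

If this returns False, `docker logs` on the container is the natural next place to look.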

The following test in the Python shell shows that we can indeed use the GPU 🙂

root@cf1b6f63d729:/# ipython
Python 3.7.12 | packaged by conda-forge | (default, Oct 26 2021, 06:08:53) 
Type 'copyright', 'credits' or 'license' for more information
IPython 7.33.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import torch

In [2]: torch.cuda.is_available()
Out[2]: True

In [3]: torch.Tensor([1,2,3]).to(0)
Out[3]: tensor([1., 2., 3.], device='cuda:0')
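For notebook code that should also run on machines without the `--runtime nvidia` setup, a common pattern is to select the device at runtime rather than hard-coding `cuda:0`. A minimal sketch:

```python
import torch

# Portable device selection: use the GPU when available, fall back to CPU otherwise
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.tensor([1.0, 2.0, 3.0], device=device)
y = x * 2  # the computation runs on whichever device was selected
```

This way the same notebook works both inside the Kaggle GPU container and on a plain CPU host.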

Published by mskorski

Scientist, Consultant, Learning Enthusiast
