
gpu-jupyter

Leverage the power of Jupyter on your NVIDIA GPU and use TensorFlow and PyTorch in collaborative notebooks.

Contents

  1. Requirements
  2. Quickstart
  3. Deployment
  4. Configuration
  5. Troubleshooting

Requirements

  1. Install Docker version 1.10.0+

  2. Install Docker Compose version 1.6.0+

  3. Get access to your GPU via the CUDA drivers, see this blog post

  4. Clone this repository

    git clone https://github.com/iot-salzburg/gpu-jupyter.git
    cd gpu-jupyter
    
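Before starting any containers, step 3 can be sanity-checked on the host. A minimal sketch (the `check_gpu` helper is hypothetical and not part of this repository):

```shell
# Hypothetical helper, not part of this repository: confirms the host
# NVIDIA driver is installed and can see the GPU.
check_gpu() {
  if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi   # should print a table listing your GPU(s)
  else
    echo "nvidia-smi not found - install the NVIDIA driver first"
  fi
}
check_gpu
```

If `nvidia-smi` lists your GPU, the driver side is ready; container access additionally requires the NVIDIA container runtime covered in the linked blog post.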

Quickstart

As soon as you have local access to your GPU (it can be tested with a short TensorFlow or PyTorch script), you can run these commands to start the Jupyter notebook via docker-compose:

./start-local.sh

This runs Jupyter on the default port, localhost:8888. The general usage is:

./start-local.sh -p [port]  # port must be an integer with 4 or more digits.
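The "4 or more digits" rule can be sketched as a small check; the `validate_port` helper below is illustrative only, not taken from `start-local.sh`:

```shell
# Illustrative helper, not from start-local.sh: accepts only integers
# with at least 4 digits that still fit in the valid TCP port range.
validate_port() {
  if [[ "$1" =~ ^[0-9]{4,}$ ]] && (( $1 <= 65535 )); then
    echo valid
  else
    echo invalid
  fi
}
validate_port 8888   # prints: valid
validate_port 80     # prints: invalid
```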

In order to stop the local deployment, run:

./stop-local.sh