# gpu-jupyter
#### Leverage the power of Jupyter and use your NVIDIA GPU with TensorFlow and PyTorch in collaborative notebooks.
![JupyterLab Overview](/extra/jupyterlab-overview.png)
## Contents
1. [Requirements](#requirements)
2. [Quickstart](#quickstart)
3. [Deployment](#deployment-in-the-docker-swarm)
4. [Configuration](#configuration)
5. [Trouble-Shooting](#trouble-shooting)
## Requirements
1. Install [Docker](https://www.docker.com/community-edition#/download) version **1.10.0+**
2. Install [Docker Compose](https://docs.docker.com/compose/install/) version **1.6.0+**
3. Set up access to your GPU via the CUDA drivers, see this [blog post](https://medium.com/@christoph.schranz) (a quick check is sketched after this list)
4. Clone this repository
```bash
git clone https://github.com/iot-salzburg/gpu-jupyter.git
cd gpu-jupyter
```
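Before continuing, it is worth verifying that both the host and Docker can see the GPU. A minimal sketch of such a check; the `nvidia/cuda:10.0-base` image tag and the GPU flags are assumptions that depend on your driver, Docker, and NVIDIA container runtime versions:
```bash
# On the host: the NVIDIA driver should list your GPU
nvidia-smi

# Inside a container: depending on your Docker / NVIDIA runtime setup,
# one of the following should print the same GPU table
docker run --gpus all nvidia/cuda:10.0-base nvidia-smi        # Docker 19.03+ with nvidia-container-toolkit
docker run --runtime=nvidia nvidia/cuda:10.0-base nvidia-smi  # older setups with nvidia-docker2
```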
## Quickstart
As soon as you have local access to your GPU (a quick TensorFlow or PyTorch check is sketched below), you can run this command to start the Jupyter notebook via docker-compose:
```bash
./start-local.sh
```
This will run Jupyter on the default port, reachable at [localhost:8888](http://localhost:8888). The general usage is:
```bash
./start-local.sh -p [port] # port must be an integer with 4 or more digits.
```
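Once JupyterLab is running, you can check from a terminal or notebook cell that TensorFlow and PyTorch actually see the GPU. A minimal sketch, assuming both frameworks are installed in the image and using the TensorFlow 1.x API (newer versions expose `tf.config.list_physical_devices('GPU')` instead):
```bash
# Both commands should report that a CUDA device is available
python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"  # TensorFlow 1.x / early 2.x
python -c "import torch; print(torch.cuda.is_available())"              # PyTorch
```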
To stop the local deployment, run:
```bash
./stop-local.sh
```
## Deployment in the Docker Swarm