fixed references

Christoph Schranz 2020-06-22 10:12:46 +02:00
parent 6a9ff52bea
commit da9c98689e
5 changed files with 10 additions and 46 deletions

View File

@@ -425,6 +425,7 @@ RUN pip install --no-cache-dir jupyter-tabnine==1.0.2 && \
RUN fix-permissions $CONDA_DIR
RUN conda install -c conda-forge jupyter_contrib_nbextensions && \
conda install -c conda-forge jupyter_nbextensions_configurator && \
conda install -c conda-forge rise && \
jupyter nbextension enable codefolding/main
RUN jupyter labextension install @ijmbarr/jupyterlab_spellchecker

.gitignore vendored
View File

@@ -113,3 +113,4 @@ venv.bak/
# Added config to hide hash of changed password
src/jupyter_notebook_config.json
.idea
/Deployment-notes.md

View File

@@ -1,38 +0,0 @@
# Deployment Notes
## Push image with tag to Dockerhub
Based on [this](https://ropenscilabs.github.io/r-docker-tutorial/04-Dockerhub.html) tutorial, the image is tagged and pushed as `v1.0_cuda-10.1_ubuntu-18.04`:
```bash
# on il048:
cd ~/Documents/projects/GPU-Jupyter/gpu-jupyter
git pull
bash generate_Dockerfile.sh
bash start-local -p 1234
docker image ls
docker tag [IMAGE ID] cschranz/gpu-jupyter:v1.0_cuda-10.1_ubuntu-18.04
docker push cschranz/gpu-jupyter:v1.0_cuda-10.1_ubuntu-18.04
docker save cschranz/gpu-jupyter > ../gpu-jupyter_tag-v1.0_cuda-10.1_ubuntu-18.04.tar
```
Then, the new tag is available on [Dockerhub](https://hub.docker.com/repository/docker/cschranz/gpu-jupyter/tags).
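A quick sanity check, assuming Docker 19.03+ and the NVIDIA container toolkit on the target machine, is to pull the tag back and start it there:
```bash
# pull the freshly pushed tag from Dockerhub
docker pull cschranz/gpu-jupyter:v1.0_cuda-10.1_ubuntu-18.04
# start it with GPU access and expose JupyterLab on port 8888
docker run --gpus all -d -p 8888:8888 cschranz/gpu-jupyter:v1.0_cuda-10.1_ubuntu-18.04
```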
## Deployment in the swarm
The GPU-Jupyter instance for deployment, which contains the swarm files and the changed password, is located in `/home/iotdev/Documents/projects/dtz/src/gpu-jupyter`:
```bash
# on il048:
cd /home/iotdev/Documents/projects/dtz/src/gpu-jupyter
git pull
bash generate_Dockerfile.sh
bash add-to-swarm-with-defaults.sh
```
The service is then available on [192.168.48.48:8848](http://192.168.48.48:8848) with our password, with its data stored in `data`.
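A minimal way to confirm the deployment, assuming the defaults from `add-to-swarm-with-defaults.sh` were used, is to list the swarm services and probe the published port:
```bash
# the gpu-jupyter service should be listed with the expected replica count
docker service ls
# the JupyterLab login page should respond on the published port
curl -I http://192.168.48.48:8848
```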

View File

@@ -44,7 +44,7 @@ directly on the host node), you can run these commands to start the jupyter note
docker-compose (internally):
```bash
./generate_Dockerfile.sh
./generate-Dockerfile.sh
docker build -t gpu-jupyter .build/
docker run -d -p [port]:8888 gpu-jupyter
```
@@ -53,7 +53,7 @@ Alternatively, you can configure the environment in `docker-compose.yml` and run
this to deploy the `GPU-Jupyter` via docker-compose (under-the-hood):
```bash
./generate_Dockerfile.sh
./generate-Dockerfile.sh
./start-local.sh -p 8888 # where -p stands for the port of the service
```
@@ -133,7 +133,7 @@ networks:
Finally, *GPU-Jupyter* can be deployed in the Docker Swarm with the shared network, using:
```bash
./generate_Dockerfile.sh
./generate-Dockerfile.sh
./add-to-swarm.sh -p [port] -n [docker-network] -r [registry-port]
# e.g. ./add-to-swarm.sh -p 8848 -n elk_datastack -r 5001
```
@@ -190,7 +190,7 @@ and in the `Dockerfile.pytorch` the line:
Then re-generate and re-run the image, as closer described above:
```bash
./generate_Dockerfile.sh
./generate-Dockerfile.sh
./start-local.sh -p [port]:8888
```
@@ -201,13 +201,13 @@ submodule within `.build/docker-stacks`. Per default, the head of the commit is
To update the generated Dockerfile to a specific commit, run:
```bash
./generate_Dockerfile.sh --commit c1c32938438151c7e2a22b5aa338caba2ec01da2
./generate-Dockerfile.sh --commit c1c32938438151c7e2a22b5aa338caba2ec01da2
```
To update the generated Dockerfile to the latest commit, run:
```bash
./generate_Dockerfile.sh --commit latest
./generate-Dockerfile.sh --commit latest
```
A new build can last some time and may consume a lot of data traffic. Note, that the latest version may result in

View File

@@ -103,5 +103,5 @@ echo "COPY jupyter_notebook_config.json /etc/jupyter/" >> $DOCKERFILE
#cp $(find $(dirname $DOCKERFILE) -type f | grep -v $STACKS_DIR | grep -v .gitkeep) .
echo "GPU Dockerfile was generated sucessfully in file ${DOCKERFILE}."
echo "Run 'bash run_Dockerfile.sh -p [PORT]' to start the GPU-based Juyterlab instance."
echo "GPU Dockerfile was generated successfully in file ${DOCKERFILE}."
echo "Run 'bash start-local.sh -p [PORT]' to start the GPU-based Juyterlab instance."