I work with the nsq_to_file utility as part of some automation code, and I want to run that utility as a docker-compose service. I can't find any documentation about using this utility with Docker. I use it as follows:
./nsq_to_file --lookupd-http-address=<http_address> --topic=ta-gcp-test --output-dir=/path/to/local/dir --filename-format=local_file_name
Does anyone have any input on that?
You can build a Docker image with the nsq_to_file executable; the Dockerfile would look like this:
#
# build container
#
FROM golang:1.17-alpine as builder
RUN apk update && apk add git
RUN git clone https://github.com/nsqio/nsq
RUN cd nsq/apps/nsq_to_file/ && CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags="-w -s" -o /nsq_to_file .
#
# scratch release container
#
FROM scratch as scratch
COPY --from=builder /nsq_to_file /nsq_to_file
COPY --from=builder /etc/ssl/certs /etc/ssl/certs
# Run as non-root user for secure environments
USER 59000:59000
ENTRYPOINT [ "/nsq_to_file" ]
You can then build it and run it:
docker build -t oliver006/nsq_to_file -f Dockerfile .
docker run --rm oliver006/nsq_to_file --lookupd-http-address=<http_address> --topic=ta-gcp-test ...
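To run it as a docker-compose service, a sketch along the following lines should work (the service name, mounted paths, and output directory are assumptions; adjust the flags to match your setup):
version: "3"
services:
  nsq_to_file:
    build: .
    image: oliver006/nsq_to_file
    command:
      - --lookupd-http-address=<http_address>
      - --topic=ta-gcp-test
      - --output-dir=/data
      - --filename-format=local_file_name
    volumes:
      - ./output:/data
Because the image runs as a non-root user (59000:59000), make sure the mounted output directory is writable by that UID.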
Related
I downloaded a python script and a docker image containing commands to install all the dependencies. How can I run the python script using the docker image?
Copy the Python file into the Docker image, then execute it:
docker run image-name python PATH-OF-SCRIPT-IN-IMAGE/script.py
Or you can run the script at image build time by adding RUN python PATH-OF-SCRIPT-IN-IMAGE/script.py to the Dockerfile.
How to copy a file from the container to the host:
docker cp <containerId>:/file/path/within/container /host/path/target
How to copy a file from the host to the container:
docker cp /host/local/path/file <containerId>:/file/path/in/container/file
Run in interactive mode:
docker run -it image_name python filename.py
or, if you want to mount the script from the host and publish a port:
docker run -it -v "$(pwd)/filename.py:/filename.py" -p 8888:8888 image_name python /filename.py
Answer
First, copy your Python script and other required files into your Docker container:
docker cp /path_to_file <containerId>:/path_where_you_want_to_save
Second, open the container's CLI using Docker Desktop and run your Python script.
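If you prefer to stay on the command line, a docker exec call does the same thing (the container ID and paths are the same placeholders as above):
docker exec -it <containerId> python /path_where_you_want_to_save/your_script.py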
The best way, I think, is to make your own image that contains the dependencies and the script.
When you say you've been given an image, I'm guessing that you've been given a Dockerfile, since you talk about it containing commands.
Place the Dockerfile and the script in the same directory. Add the following lines to the bottom of the Dockerfile.
# Existing part of Dockerfile goes here
COPY my-script.py .
CMD ["python", "my-script.py"]
Replace my-script.py with the name of the script.
Then build and run it with these commands:
docker build -t my-image .
docker run my-image
I have created a Docker image with a Dockerfile where the ENTRYPOINT is as follows:
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "myproject", "python", "./myprojectmain.py", "--config", "./config.py"]
When I run it I use the command:
docker run myproject
and all seems fine.
However, I have a secondary .py file in the root of the project called setup.py. The purpose of this file is to update some of the config and JSON files after getting some input from the user.
Is there a way to run this secondary file (setup.py), or do I need to create a whole new image (which seems ridiculous)?
Thanks
Well... if you've got an image, you don't have to use the entrypoint... just run your scripts like this:
docker run image python /some/path/myscript.py
or
docker run image /bin/bash -c "cd /some/path && python myscript.py"
or, with an entrypoint, keep the interpreter in the ENTRYPOINT and pass each script as the command:
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "myproject", "python"]
docker run image ./myprojectmain.py --config ./config.py
docker run image ./myproject2main.py --config ./config.py
You can straightforwardly provide an alternate command after the image name in the docker run command. It's harder to override the entrypoint, though. If you have both a command and an entrypoint then they are combined together into a single command.
This workflow is easiest if your Dockerfile has a CMD, and that's a complete runnable shell command. If you have an ENTRYPOINT at all, it is some kind of wrapper that does some initial setup and then runs the command it's given as additional arguments. In this particular setup, conda run with its arguments seems to meet that need and have the correct form, so you could say
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "myproject", "--"]
CMD ["python", "./myprojectmain.py", "--config", "./config.py"]
(Note that conda run seems to have some issues; you could probably simulate it using a custom entrypoint wrapper script or use a pip-based non-virtual-environment workflow instead.)
If you split the ENTRYPOINT and CMD like this, then you can run
docker run myproject \
python setup.py
The alternate python setup.py command will be appended to the conda run entrypoint command.
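In other words, the container effectively executes:
conda run --no-capture-output -n myproject -- python setup.py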
... update some of the config and json files ...
It's often a good idea to inject these into your container using a bind mount. Depending on how exactly the files get set up, you may be able to initialize them from the host environment, without Docker:
./setup.py
docker run -d -v $PWD/config:/app/config myproject
but if they are sensitive to the Docker environment in some way, you could do it in Docker too; make sure to mount the same configuration storage into both containers.
docker network create mynet
docker volume create config
docker run --rm --net mynet -v config:/app/config myproject ./setup.py
docker run -d -p 8000:8000 --net mynet -v config:/app/config myproject
I have a question regarding the whole data volume process in Docker. Basically here are two Dockerfiles and their respective run commands:
Dockerfile 1 -
# Transmission over Debian
#
# Version 2.92
FROM debian:testing
RUN apt-get update \
&& apt-get -y install nano \
&& apt-get -y install transmission-daemon transmission-common transmission-cli \
&& mkdir -p /transmission/config /transmission/watch /transmission/download
ENTRYPOINT ["transmission-daemon", "--foreground"]
CMD ["--config-dir", "/transmission/config", "--watch-dir", "/transmission/watch", "--download-dir", "/transmission/download", "--allowed", "*", "--no-blocklist", "--no-auth", "--no-dht", "--no-lpd", "--encryption-preferred"]
Command 1 -
docker run --name transmission -d -p 9091:9091 -v C:\path\to\config:/transmission/config -v C:\path\to\watch:/transmission/watch -v C:\path\to\download:/transmission/download transmission
Dockerfile 2 -
# Nginx over Debian
#
# Version 1.10.3
FROM debian:testing
RUN apt-get update \
&& apt-get -y install nano \
&& apt-get -y install nginx
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
Command 2 -
docker run --name nginx -d -p 80:80 -v C:\path\to\config:/etc/nginx -v C:\path\to\html:/var/www/html nginx
So, the weird thing is that the first Dockerfile and command work as intended: the Docker daemon mounts a directory from the container to the host, so I am able to edit the configuration files as I please and they are persisted in the container on a restart.
However, the second Dockerfile and command don't seem to be working. I know the Docker volume documentation says that volume mounts are only intended to go one way, from host to container, but how come the Transmission container works as intended while the Nginx container doesn't?
P.S. - I'm running Microsoft Windows 10 Pro Build 14393 as my host and Version 17.03.0-ce-win1 (10300) Channel: beta as my Docker version.
Edit - Just to clarify. I'm trying to get the files from inside the Nginx container to the host. The first container (Transmission) works in that regard, by using a data volume. However, for the second container (Nginx), it doesn't want to copy the files in the mounted directory from inside the container to the host. Everything else is working though, it does successfully start.
The host volume will not copy data like a named volume will. However, you can create a named volume that performs a bind mount, which will then have the data initialization properties of any other named volume. The only prerequisite of a bind mount over a host volume is that the directory must exist in advance, docker will not create it for you like it does with a host volume. Here are three different examples of how to create a bind mount volume:
# create the volume in advance
$ docker volume create --driver local \
--opt type=none \
--opt device=/home/user/test \
--opt o=bind \
test_vol
# create on the fly with --mount
$ docker run -it --rm \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=none,volume-opt=o=bind,volume-opt=device=/home/user/test \
foo
# inside a docker-compose file
...
volumes:
  bind-test:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /home/user/test
...
So in your example with a docker run command, you can use the mount syntax:
docker run --name nginx -d -p 80:80 \
--mount type=volume,dst=/etc/nginx,volume-driver=local,volume-opt=type=none,volume-opt=o=bind,volume-opt=device=/c/path/to/config \
--mount type=volume,dst=/var/www/html,volume-driver=local,volume-opt=type=none,volume-opt=o=bind,volume-opt=device=/c/path/to/html \
nginx
The only part that may need adjusting is the Windows path names as seen inside the Linux VM that Docker runs in Hyper-V.
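The same bind-type named volumes can also be declared in a docker-compose file for the nginx case; a sketch, assuming the Windows directories are visible to the VM as /c/path/to/config and /c/path/to/html:
version: "3"
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - nginx-config:/etc/nginx
      - nginx-html:/var/www/html
volumes:
  nginx-config:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /c/path/to/config
  nginx-html:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /c/path/to/html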
Host volumes don't copy data from the container to the host. Host volumes mount over the top of what's in the container/image, so they effectively replace what's in the container with what's on the host.
A standard or "named" volume will copy the existing data from the container image into a new volume. These volumes are created by launching a container with the VOLUME instruction in its Dockerfile, or with the docker command
docker run -v myvolume:/var/whatever myimage
By default this is data stored in a "local" volume, "local" being on the Docker host. In your case that is on the VM running Docker rather than your Windows host so might not be easily accessible to you.
You could be mistaking Transmission auto-generating files in a blank directory for a copy.
If you really need to keep the host-to-container mappings, then you might have to copy the data manually:
docker create --name nginxcopy nginx
docker cp nginxcopy:/etc/nginx C:\path\to\config
docker cp nginxcopy:/var/www/html C:\path\to\html
docker rm nginxcopy
And then you can map the populated host directories into the container and they will have the default data the image came with.
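After that, the original run command from the question will start nginx with the now-populated host directories mounted:
docker run --name nginx -d -p 80:80 -v C:\path\to\config:/etc/nginx -v C:\path\to\html:/var/www/html nginx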
I'm looking into moving some of our web servers to docker containers. The jwilder/nginx-proxy image looks interesting, and seems to do what we want, but how would one properly deploy a flask application in a container, and have it work with the jwilder/nginx-proxy server? To be clear, the flask application would also be running in a docker container.
In a separate, but related question, how would one do this for a django app?
It looks like there's a popular tiangolo/uwsgi-nginx-flask image, and a similar dockerfiles/django-uwsgi-nginx image. In this setup, from what I understand, the nginx-proxy container would direct traffic to the uwsgi-nginx-flask or django-uwsgi-nginx container. Is this a common way to do this?
The main thought I had was that in such a setup, we're running extra instances of nginx - one for every python/django app. Is this common? Or is it possible/beneficial/common to somehow have the nginx-proxy talk directly to uwsgi within the python app container?
I see that the nginx-proxy image has a VIRTUAL_PROTO=uwsgi option that other containers can be started with. Is this something that can be used to make things more efficient? Or is it more effort than it's worth?
Edit: Or is the nginx instance that accompanies the flask/django project beneficial, since it can be used to serve static content, without which you would need to configure the nginx-proxy image with the location of every project's static files?
Personally, I prefer to have Django in one container, NGINX in a separate container, other applications in other containers, etc. For that I prefer to use docker-compose. You can check out my implementation of Django + NGINX + PostgreSQL here. (I have not used jwilder/nginx-proxy; instead I have used the official NGINX Docker image.)
But putting NGINX and the Python server in the same container does not sound that bad. I have used lightweight Alpine-based images for deploying Python, for example:
FROM nginx:mainline-alpine
# --- Python Installation ---
RUN apk add --no-cache python3 && \
python3 -m ensurepip && \
rm -r /usr/lib/python*/ensurepip && \
pip3 install --upgrade pip setuptools && \
if [ ! -e /usr/bin/pip ]; then ln -s pip3 /usr/bin/pip ; fi && \
if [[ ! -e /usr/bin/python ]]; then ln -sf /usr/bin/python3 /usr/bin/python; fi && \
rm -r /root/.cache
# --- Work Directory ---
WORKDIR /usr/src/app
# --- Python Setup ---
ADD . .
RUN pip install -r app/requirements.pip
# --- Nginx Setup ---
COPY config/nginx/default.conf /etc/nginx/conf.d/
RUN chmod g+rwx /var/cache/nginx /var/run /var/log/nginx
RUN chgrp -R root /var/cache/nginx
RUN sed -i.bak 's/^user/#user/' /etc/nginx/nginx.conf
RUN addgroup nginx root
# --- Expose and CMD ---
EXPOSE 5000
CMD gunicorn --bind 0.0.0.0:5000 wsgi --chdir /usr/src/app/app & nginx -g "daemon off;"
Although it looks a bit messy, it works fine. Please check out my full implementation here.
Depending on how you want to deploy your Docker images, you can use either approach, but using docker-compose would be the best solution IMHO. In both setups, you can use NGINX to serve your static content (no need to configure it for each static file).
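As a rough sketch of the separate-containers layout (the service names, gunicorn module, and file paths below are illustrative assumptions, not taken from the linked implementation), a docker-compose file could look like this:
version: "3"
services:
  web:
    build: .
    command: gunicorn myproject.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static-files:/app/static
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./config/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
      - static-files:/app/static:ro
    depends_on:
      - web
volumes:
  static-files:
Here NGINX proxies application requests to web:8000 and serves the shared static volume directly, so each app does not need its own NGINX instance.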
I have a Dockerfile for deploying Django code to a container:
FROM ubuntu:latest
MAINTAINER { myname }
#RUN echo "deb http://archive.ubuntu.com/ubuntu/ $(lsb_release -sc) main universe" >> /etc/apt/sou$
RUN apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y tar git curl dialog wget net-tools nano buil$
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y python python-dev python-distribute python-p$
RUN mkdir /opt/app
WORKDIR /opt/app
#Pull Code
RUN git clone git@bitbucket.org:{user}/{repo}.git
RUN pip install -r website/requirements.txt
#EXPOSE = ["8000"]
CMD python website/manage.py runserver 0.0.0.0:8000
And then I build my code as docker build -t dockerhubaccount/demo:v1 ., and this pulls my code from Bitbucket to the container. I run it as docker run -p 8000:8080 -td felixcheruiyot/demo:v1 and things appear to work fine.
Now I want to update the code. Since I used git clone ..., I have this confusion:
How can I update my code when I have new commits, so that when the Docker image is built it ships with the new code? (Note: when I run the build it does not fetch the new code because of the cache.)
What is the best workflow for this kind of approach?
There are a couple of approaches you can use.
1. You can use docker build --no-cache to avoid using the cache of the Git clone.
2. The startup command calls git pull. So instead of running python manage.py, you'd have something like CMD cd /repo && git pull && python manage.py, or use a start script if things are more complex.
I tend to prefer 2. You can also run a cron job to update the code in your container, but that's a little more work and goes somewhat against the Docker philosophy.
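A minimal start script for option 2 might look like this (the repo location and manage.py path are assumptions based on the Dockerfile above):
#!/bin/sh
# start.sh - pull the latest code, then start the Django server
cd /opt/app/website && git pull
exec python manage.py runserver 0.0.0.0:8000
You would then point the Dockerfile's CMD at this script instead of calling manage.py directly.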
I would recommend you check out the code on your host and COPY it into the image. That way it will be updated whenever you make a change. Also, during development you can bind mount the source directory over the code directory in the container, meaning any changes are reflected immediately in the container.
A docker command for git repositories that checks for the last update would be very useful though!
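A rough sketch of that approach, assuming the code sits next to the Dockerfile and lives in /opt/app inside the image (paths are illustrative):
# In the Dockerfile: copy the checked-out code instead of cloning it
COPY . /opt/app
# During development: bind mount the source over the code directory
docker run -p 8000:8000 -v $(pwd):/opt/app myimage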
Another solution.
The docker build command uses the cache as long as an instruction string is exactly the same as the one in the cached image. So, if you write
RUN echo '2014122400' >/dev/null && git pull ...
then on the next update, you change it as follows:
RUN echo '2014122501' >/dev/null && git pull ...
This prevents Docker from using the cache.
I would like to offer another possible solution. I need to warn, however, that it's definitely not the "docker way" of doing things and relies on the existence of volumes (which could be a potential blocker in tools like Docker Swarm and Kubernetes).
The basic principle that we will be taking advantage of is the fact that the contents of container directories that are used as Docker Volumes, are actually stored in the file system of the host. Check out this part of the documentation.
In your case you would make /opt/app a Docker volume. You don't need to map the volume explicitly to a location on the host's file system since, as I will describe below, the mapping can be obtained dynamically.
So for starters leave your Dockerfile exactly as it is and switch your container creation command to something like:
docker run -p 8000:8080 -v /opt/app --name some-name -td felixcheruiyot/demo:v1
The command docker inspect -f '{{ index .Volumes "/opt/app" }}' some-name will print the full file system path on the host where your code is stored (this is where I picked up the inspect trick).
Armed with that knowledge all you have to do is replace that code and your all set.
So a very simple deploy script would be something like:
code_path=$(docker inspect -f '{{ index .Volumes "/opt/app" }}' some-name)
rm -rfv $code_path/*
cd $code_path
git clone git@bitbucket.org:{user}/{repo}.git .
The benefits you get with an approach like this are:
There are no potentially costly cacheless image rebuilds
There is no need to move application-specific runtime information into the run command. The Dockerfile is the only source needed for instrumenting the application.
UPDATE
You can achieve the same results I have mentioned above using docker cp (starting Docker 1.8). This way the container need not have volumes, and you can replace code in the container as you would on the host file-system.
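For example, something along these lines would push a fresh checkout into the running container (the container name and paths are placeholders):
docker cp ./website some-name:/opt/app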
Of course as I mentioned in the beginning of the answer, this is not the "docker way" of doing things, which advocates containers being immutable and reproducible.
If you use GitHub, you can use the GitHub API to avoid caching specific RUN commands.
You need to have jq installed to parse JSON: apt-get install -y jq
Example:
docker build --build-arg SHA=$(curl -s 'https://api.github.com/repos/Tencent/mars/commits' | jq -r '.[0].sha') -t imageName .
In the Dockerfile (the ARG instruction should come right before the RUN):
ARG SHA=LATEST
RUN SHA=${SHA} \
git clone https://github.com/Tencent/mars.git
Or if you don't want to install jq:
SHA=$(curl -s 'https://api.github.com/repos/Tencent/mars/commits' | grep sha | head -1)
If the repository has new commits, the git clone will be executed again instead of being served from the cache.