I've tried using sys.stdout.flush() to flush the print statements, and importing the logging library and using logger.info(). Neither worked.
I'm dealing with legacy code that adds a logger.info() statement to print to the host console, but when I try to add more, they don't print.
I am running the project using a Dockerfile to build the image; the Dockerfile copies the three .py files needed to run the application.
I use this run command:
docker run -it -p 13801:13800 --net=kv_subnet --ip=10.10.0.4 --name="node1" -e ADDRESS="10.10.0.4:13800" -e VIEW="10.10.0.4:13800" kvs
I'm new to working on Linux, so I apologize if this is a dumb question. Despite searching for more than a week, I was not able to find a clear answer.
I'm running a very long Python program on Nvidia GPUs. The output is several csv files. It takes a long time to compute the output, so I use nohup so that I can disconnect while the process keeps running.
Let's say the main.py file is this:
import numpy as np
import pandas as pd

if __name__ == '__main__':
    a = np.arange(1, 1000)
    data = a * 2
    filename = 'results.csv'
    output = pd.DataFrame(data, columns=["Output"])
    output.to_csv(filename)
The calculation of data is more complicated, of course. I build a Docker container and run this program inside it. When I use python main.py for a smaller-sized example, there is no problem; it writes the csv files.
My question is this:
When I do nohup python main.py & and then check what's going on with tail -f nohup.out inside the Docker container, I can see what it is doing at that moment, but I cannot leave that view and let the execution run its course; it just stays there. How can I safely exit the screen that tail -f nohup.out puts me in?
I also tried not checking on the code at all and letting it run for two days before returning. The output of tail -f nohup.out indicated that the execution had finished, but the csv files were nowhere to be seen. Is the output somehow bundled up inside nohup.out, or does this indicate something else is wrong?
If you're going to run this setup in a Docker container:
A Docker container runs only one process, as a foreground process; when that process exits, the container is done. That process is almost always the script or server you're trying to run, not an interactive shell. However:
It's possible to use Docker constructs to run the container itself in the background, and collect its logs while it's running or after it completes.
A typical Dockerfile for a Python program like this might look like:
FROM python:3.10
# Create and use some directory; it can be anything, but do
# create _some_ directory.
WORKDIR /app
# Install Python dependencies as a separate step. Doing this first
# saves time if you repeat `docker build` without changing the
# requirements list.
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy in the rest of the application.
COPY . .
# Set the main container command to be the script.
CMD ["./main.py"]
The script should be executable (chmod +x main.py on your host) and begin with a "shebang" line (#!/usr/bin/env python3) so the system knows where to find the interpreter.
You will hear recommendations to use both CMD and ENTRYPOINT for the final line. It doesn't matter much to your immediate question. I prefer CMD for two reasons: it's easier to launch an alternate command to debug your container (docker run --rm your-image ls -l vs. docker run --rm --entrypoint ls your-image -l), and there's a very useful pattern of using ENTRYPOINT to do some initial setup (creating environment variables dynamically, running database migrations, ...) and then launching CMD.
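That setup-then-exec wrapper, if you ever want it, could look something like the sketch below (entrypoint.sh is a hypothetical name, not something from your project):

#!/bin/sh
# entrypoint.sh: do one-time setup, then hand control over to the CMD
export STARTED_AT="$(date)"   # e.g., derive an environment variable dynamically
exec "$@"                     # replace the shell with the CMD so it runs as the main process

with the last two Dockerfile lines becoming

ENTRYPOINT ["./entrypoint.sh"]
CMD ["./main.py"]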
Having built the image, you can use the docker run -d option to launch it in the background, and then run docker logs to see what comes out of it.
# Build the image.
docker build -t long-python-program .
# Run it, in the background.
docker run -d --name run1 long-python-program
# Review its logs.
docker logs run1
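If you want to watch the output while the job is still running (the moral equivalent of tail -f nohup.out), docker logs can follow the log stream:

# Stream new output as it appears; Ctrl+C stops the viewer, not the container.
docker logs -f run1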
If you're running this to produce files that need to be read back from the host, you need to mount a host directory into your container at the time you start it. You need to make a couple of changes to do this successfully.
In your code, you need to write the results somewhere different from where your application code lives. You can't mount a host directory over the /app directory, since it will hide the code you're actually trying to run.
import os

data_dir = os.getenv('DATA_DIR', 'data')
filename = os.path.join(data_dir, 'results.csv')
Optionally, in your Dockerfile, create this directory and set a pointer to it. Since my sample code gets its location from an environment variable you can again use any path you want.
# Create the data directory.
RUN mkdir /data
ENV DATA_DIR=/data
When you launch the container, the docker run -v option mounts filesystems into the container. For this sort of output file you're looking for a bind mount that directly attaches a host directory to the container.
docker run -d --name run2 \
-v "$PWD/results:/data" \
long-python-program
In the example so far we haven't set the USER the program runs as, so it will run as root. You can change the Dockerfile to set up an alternate USER (which is good practice); the only thing you need to chown to that user is the data directory (leaving your code owned by root and not world-writeable is also good practice). If you do this, then when you launch the container (on native Linux) you need to provide the host numeric user ID that can write to the host directory; you do not need to make any other changes in the Dockerfile.
docker run -d --name run2 \
-u $(id -u) \
-v "$PWD/results:/data" \
long-python-program
1. A container runs as a foreground process; use CMD or ENTRYPOINT in the Dockerfile.
2. Map a Docker volume to a Linux directory.
I have a Docker image that is actually a server for a device. It is started from a Python script, and I made an .sh script to run it. However, whenever I run it, it reports that it has executed and then ends (server exited with code 0). The only way I have made it work is via docker-compose: I run it as a detached container, enter the container via /bin/bash, execute the run script (the aforementioned .sh) manually, and then exit the container.
After that everything works as intended, but the issue arises when the server is rebooted. I have to do it manually all over again.
Has anyone else experienced anything similar? If so, how can I fix this?
File that starts server (start.sh):
#!/bin/sh
python source/server/main.pyc &
python source/server/main_socket.pyc &
python source/server/main_monitor_server.pyc &
python source/server/main_status_server.pyc &
python source/server/main_events_server.pyc &
Dockerfile:
FROM ubuntu:trusty
RUN mkdir -p /home/server
COPY server /home/server/
EXPOSE 8854
CMD [ /home/server/start.sh ]
Docker Compose:
version: "3.9"
services:
server:
tty: yes
image: deviceserver:latest
container_name: server
restart: always
ports:
- "8854:8854"
deploy:
resources:
limits:
memory: 3072M
It's not a problem with docker-compose. Your Docker container should not return (i.e., it should block) even when launched with a simple docker run.
For that, your CMD should run in the foreground.
I think the issue is that your start.sh returns instead of blocking. Have you tried removing the last '&' from your script (I'm not familiar with Python or with what these different processes are)?
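For illustration, a sketch of the same start.sh with the final process left in the foreground, so the script (and therefore the container) stays alive:

#!/bin/sh
python source/server/main.pyc &
python source/server/main_socket.pyc &
python source/server/main_monitor_server.pyc &
python source/server/main_status_server.pyc &
# No trailing '&': this process stays in the foreground and keeps the container running
python source/server/main_events_server.pyc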
In general you should run only one process per container. If you have five separate processes you need to run, you would typically run five separate containers.
The corollaries to this are that the main container command should be a foreground process; but also that you can run multiple containers off of the same image with different commands. In Compose you can override the command: separately for each container. So, for example, you can specify:
version: '3.8'
services:
main:
image: deviceserver:latest
command: ./main.py
socket:
image: deviceserver:latest
command: ./main_socket.py
et: cetera
If you're trying to copy-and-paste this exact docker-compose.yml file, make sure to set a WORKDIR in the Dockerfile so that the scripts are in the current directory, make sure the scripts are executable (chmod +x in your source repository), and make sure they start with a "shebang" line #!/usr/bin/env python3. You shouldn't need to explicitly say python anywhere.
# Use a Python base image, not a bare Ubuntu image.
FROM python:3.9
# WORKDIR also creates the directory.
WORKDIR /home/server
# No need to duplicate the directory name in the destination.
COPY server ./
RUN pip install -r requirements.txt
# EXPOSE is optional and does almost nothing.
EXPOSE 8854
# Valid JSON-array syntax; can be overridden per container.
CMD ["./main.py"]
There are two major issues in the setup you show. The CMD is not a syntactically valid JSON array (the command itself is not "quoted") and so Docker will run it as a shell command; [ is an alias for test(1) and will exit immediately. If you do successfully run the script, the script launches a bunch of background processes and then exits, but since the script is the main container command, that will cause the container to exit as well. Running a set of single-process containers is generally easier to manage and scale than trying to squeeze multiple processes into a single container.
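For reference, the CMD in valid exec (JSON-array) form would be:

# The command string must be quoted for Docker to parse this as a JSON array
CMD ["/home/server/start.sh"]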
You can add a sleep loop at the end of your start.sh.
#!/bin/sh
python source/server/main.pyc &
python source/server/main_socket.pyc &
python source/server/main_monitor_server.pyc &
python source/server/main_status_server.pyc &
python source/server/main_events_server.pyc &
while true
do
sleep 1;
done
(I'm fairly new to Docker here.) I am trying to Dockerize a Scrapy application. As a first step, I'm trying to start a project in the container - which creates and populates a directory structure - and attach a volume to the project directory for editing purposes.
First I need to call scrapy startproject myScraper; then I'd like to call custom commands like scrapy shell or scrapy crawl myCrawler in the container to run web crawls.
Since all Scrapy commands begin by calling scrapy, I wrote this Dockerfile:
FROM python:3
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
ENTRYPOINT scrapy #or so I thought was right ...
where requirements.txt contains just Scrapy. Now I have a couple of problems. The first is that the ENTRYPOINT does not seem to work; specifically, when I run
docker build -t scraper .
docker run -it -v $PWD:/scraper --name Scraper scraper [SOME-COMMAND]
I just get back the scrapy usage help menu. (For example, if SOME-COMMAND is shell or startproject scraper.) I've tried a few variations with no success. Second, if the container stops, I'm not sure how to start it again (e.g., I can't pass a command to docker start -ai Scraper).
Part of the reason I'm trying to do these commands here, rather than as RUN and VOLUME in the Dockerfile, is that if the volume is created during the build process, it obscures the project directory rather than copying its contents from container to host volume. (That is, I get a copy of my empty host directory in the container instead of the populated directory set up by scrapy startproject volumeDirectory.)
I've looked my issue up and know I may be off-track with proper Docker, but it really feels like what I'm asking should be possible here.
My recommendation would be to delete the ENTRYPOINT line; you can make it the default CMD if you'd like. Then you can run
docker run -it -v $PWD:/scraper --name Scraper scraper scrapy ...
Your actual problem here is that if you use ENTRYPOINT (or CMD or RUN) with a bare string like you show, it gets automatically wrapped in sh -c. Then the command you pass on the command line is combined with the entrypoint and what you ultimately get as the main container command is
/bin/sh -c 'scrapy' '[some-command]'
and so the shell runs scrapy with no arguments, but if that string happened to contain $1 or similar positional parameters, they could get filled in from command parameters.
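You can reproduce that wrapping by hand to see how the extra words behave (this is plain shell behavior, nothing Docker-specific):

# The first extra word becomes $0, the next one $1, and so on.
sh -c 'echo "zeroth: $0, first: $1"' shell startproject
# prints: zeroth: shell, first: startproject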
If you use the explicit JSON-array syntax, then Docker won't add the sh -c wrapper and your proposed syntax will work
ENTRYPOINT ["scrapy"]
but a number of other common tasks won't work. For example, you can't easily get a debugging shell
# Runs "scrapy /bin/bash"
docker run --rm -it scraper /bin/bash
and using --entrypoint to override it results in awkward command lines
docker run --rm --entrypoint /bin/ls scraper -lrt /scraper
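By contrast, with no ENTRYPOINT set at all, the debugging shell is just the command you type:

docker run --rm -it scraper /bin/bash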
I'm a total newbie to Docker, and am having trouble figuring out how to approach this problem. Consider this super-simplified CLI tool that produces a log when run with docker run.
import click
import logging
logging.basicConfig(filename='log.log')
logger = logging.getLogger(__name__)
@click.group()
@click.version_option('1.0')
def cli():
    '''docker_cli with docker test'''

@cli.command('run')
@click.argument('name', default='my name')
def run(name):
    logger.info("running 'run' within docker")
    print('Hello {}'.format(name))
And my Dockerfile is as follows:
FROM python:3.5-slim
LABEL maintainer="Boudewijn Aasman"
LABEL version="1.0"
ENV config production
RUN mkdir /docker_cli
COPY docker_cli ./docker_cli
COPY setup.py .
RUN python setup.py install
CMD ["cli", "run"]
If I execute the container using:
docker run cli_test cli run world
how do I retrieve the log file that gets created during the process? The container exits immediately after the command prints 'Hello world'. My assumption is that I should use a volume, but I'm not sure how to make it work.
You can either share a local directory:
docker run -v /full/path/to/your/local/dir:/logs cli_test cli run world
Or create a Docker volume:
docker volume create cli_test_volume
docker run -v cli_test_volume:/logs cli_test cli run world
docker volume inspect cli_test_volume # will show where the volume is located
For both of these approaches you will need to write the logs to a different path from the application code; otherwise, the application code will be hidden by the shared volume.
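On the application side that can be as small as pointing the log file at the mounted path (a sketch, assuming the /logs mount target used in the commands above):

import logging

# /logs is assumed to be the directory the volume or host directory is mounted on
logging.basicConfig(filename='/logs/log.log')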
There is another alternative, which is to copy files from the container using create and cp:
docker create --name cli_test_instance cli_test cli run world
docker start cli_test_instance
docker cp cli_test_instance:log.log .
Have you tried this?
docker logs cli_test
EDIT: Sorry, I missed this the first time, but in order for this to work, you'll have to log to STDERR, not to a log file. (Thanks @Gonzalo Matheu for pointing this out.) To get it working, it should be as simple as making this small additional change:
logging.basicConfig() # note no file name
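Note that basicConfig defaults to the WARNING level, so for the logger.info() calls in the question to actually appear you will likely also want to raise the level, something like:

import logging

# No filename: records go to stderr, which docker logs can see; INFO so logger.info() is emitted
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)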
My goal is to run a Flask web server from a Docker container. Since I'm working on a Windows machine, this requires Vagrant to create a VM. Running vagrant up --provider=docker leads to the following complaint:
INFO interface: error: The container started either never left the "stopped" state or
very quickly reverted to the "stopped" state. This is usually
because the container didn't execute a command that kept it running,
and usually indicates a misconfiguration.
If you meant for this container to not remain running, please
set the Docker provider configuration "remains_running" to "false":
config.vm.provider "docker" do |d|
d.remains_running = false
end
This is my Dockerfile
FROM mrmrcoleman/python_webapp
EXPOSE 5000
# Install Python
RUN apt-get install -y python python-dev python-distribute python-pip
# Add and install Python modules
RUN pip install Flask
#copy the working directory to the container
ADD . /
CMD python run.py
And this is the Vagrantfile
Vagrant.configure("2") do |config|
config.vm.provider "docker" do |d|
d.build_dir = "." #searches for a local dockerfile
end
config.vm.synced_folder ".", "/vagrant", type: "rsync"
rsync__chown = false
end
Because the Vagrantfile and run.py work without trouble independently, I suspect I made a mistake in the Dockerfile. My question is twofold:
Is there something clearly wrong with the Dockerfile or the Vagrantfile?
Is there a way to have vagrant/docker produce more specific error messages?
I think the answer I was looking for is using the command
vagrant docker-logs
I broke the Dockerfile because I did not recognize good behaviour as such: nothing really happens when the app runs as it should. vagrant docker-logs confirms that the Flask app is listening for requests.
Is there something clearly wrong with the Dockerfile or the Vagrantfile?
Your Dockerfile and Vagrantfile look good, but I think you need to modify the permissions of run.py to make it executable:
...
#copy the working directory to the container
ADD . /
RUN chmod +x run.py
CMD python run.py
Does that work?
Is there a way to have vagrant/docker produce more specific error messages?
Try taking a look at the vagrant debugging page. Another approach I use is to log into the container and try running the script manually.
# log onto the vm running docker
vagrant ssh
# start your container in bash, assuming it's already built.
docker run -it my/container /bin/bash
# now from inside your container try to start your app
python run.py
Also, if you want to view your app locally, you'll want to add port forwarding to your Vagrantfile.
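With the Docker provider that might look something like the sketch below (the ports option maps container ports out of the container; 5000 is an assumption based on Flask's default port):

Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.build_dir = "."
    # Publish the Flask port from the container
    d.ports = ["5000:5000"]
  end
end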