How to run a command in Docker using custom arguments?
I'm trying to run a Django management command inside a container; the command depends on an environment variable that I want to set as part of the docker exec invocation.
I need to run the command in this format for it to work:
# VAR=enviroment_name python manage.py migrate --database=01_sistema
The docker command I ran:
docker exec 24e2b5c60a79 VAR=enviroment_name python manage.py migrate --database=01_sistema
Error
OCI runtime exec failed: exec failed: container_linux.go:344: starting
container process caused "exec: \"VAR=enviroment_name\": executable
file not found in $PATH": unknown
In bash you set an environment variable for a command by prefixing the command with key=value. However, this is not the case for docker. You can pass environment variables to docker exec by adding the argument -e key=value (which can be specified multiple times). In your case, that is:
docker exec -e VAR=enviroment_name 24e2b5c60a79 python manage.py migrate --database=01_sistema
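Inside the container, the variable set with -e then shows up as an ordinary environment variable to the Python process. A minimal sketch of reading it from Python (here the variable is set in-process only to simulate what docker exec -e does):

```python
import os

# Simulate `docker exec -e VAR=enviroment_name ...`: in the real container,
# docker places the variable in the process environment before Python starts.
os.environ["VAR"] = "enviroment_name"

# Django settings or management code can then read it like any env var:
env_name = os.environ["VAR"]
print(env_name)  # enviroment_name
```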
Related
I'm dealing with legacy code that uses a logger.info() statement to print to the host console, but when I try to add more such statements, they don't print.
I've tried calling sys.stdout.flush() to flush the print statements, and also importing the logging library and using logger.info(). Neither worked.
I'm running the project from a Dockerfile that builds the image; the Dockerfile copies the three .py files needed to run the application.
I use this run command:
docker run -it -p 13801:13800 --net=kv_subnet --ip=10.10.0.4 --name="node1" -e ADDRESS="10.10.0.4:13800" -e VIEW="10.10.0.4:13800" kvs
I built an AWS Batch compute environment and I want to run a Python script in its jobs.
Here is the Dockerfile I'm using:
FROM python:slim
RUN apt-get update
RUN pip install boto3 matplotlib awscli
COPY runscript.py /
ENTRYPOINT ["/bin/bash"]
The command in my task definition is:
python /runscript.py
When I submit a job in AWS console I get this error in CloudWatch:
/usr/local/bin/python: /usr/local/bin/python: cannot execute binary file
And the job gets the status FAILED.
What is going wrong? When I run the container locally, I can launch the script without any errors.
Delete your ENTRYPOINT line, and replace it with a CMD that says what the container should actually run.
There are two parts to the main command that a Docker container runs, ENTRYPOINT and CMD; these are combined together into one command when the container starts. The command your container is running is probably something like
/bin/bash python /runscript.py
So bash finds a python in its $PATH (successfully), and tries to run it as a shell script (leading to that error).
You don't strictly need an ENTRYPOINT, and here it's causing trouble. Conversely there's a single thing you usually want the container to do, so you should just specify it in the Dockerfile.
# No ENTRYPOINT
CMD ["python", "/runscript.py"]
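As a rough illustration of how the two parts combine (a sketch of the behaviour, not Docker's actual code):

```python
# Docker concatenates ENTRYPOINT and CMD into one argument list when the
# container starts. With the original Dockerfile and task definition:
entrypoint = ["/bin/bash"]
cmd = ["python", "/runscript.py"]
full_command = entrypoint + cmd
print(full_command)  # ['/bin/bash', 'python', '/runscript.py']
```

That combined list is exactly the `/bin/bash python /runscript.py` command that produces the "cannot execute binary file" error.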
You can try the following Dockerfile and task definition.
Docker File
FROM python:slim
RUN apt-get update
RUN pip install boto3 matplotlib awscli
COPY runscript.py /
ENTRYPOINT ["python"]
Task Definition
['/runscript.py']
Passing the script name in the task definition gives you the flexibility to run any script when submitting a job. For example, to submit a job and override the task definition's command:
import boto3

session = boto3.Session()
batch_client = session.client('batch')

# jobName, AWS_BATCH_JOB_QUEUE and AWS_BATCH_JOB_DEFINITION are
# placeholders; fill in your own job name, queue and job definition.
response = batch_client.submit_job(
    jobName=job_name,
    jobQueue=AWS_BATCH_JOB_QUEUE,
    jobDefinition=AWS_BATCH_JOB_DEFINITION,
    containerOverrides={
        'command': [
            '/main.py'
        ]
    }
)
I'm trying to pass a file as an argument to a Python 3 CLI app (which uses argparse for parsing) that is hosted in Docker, but I'm getting OSError: Error opening b'input_file.txt' when I run:
docker run -t docker_image_name input_file.txt
My Dockerfile has this entrypoint:
ENTRYPOINT [ "python", "/src/cli_app.py" ]
You're telling your Python application to look for input_file.txt, but that file doesn't exist in the container. You're not passing the file itself, just its name as an argument. Try the following to mount your local file (I'm assuming it's in your working directory) into the container:
docker run -it -v $(pwd)/input_file.txt:/tmp/input_file.txt docker_image_name /tmp/input_file.txt
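To see why the path has to exist inside the container's filesystem, here's a minimal sketch of what happens at parse time (assuming the app declares the argument with argparse.FileType, which matches the "Error opening" message):

```python
import argparse
import tempfile

# argparse.FileType opens the file while parsing, so a missing path fails
# immediately with "Error opening ..." before your own code even runs.
parser = argparse.ArgumentParser()
parser.add_argument("infile", type=argparse.FileType("r"))

# Create a file that actually exists, then parse its path:
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("hello")
    path = f.name

args = parser.parse_args([path])  # succeeds because the file exists
content = args.infile.read()
args.infile.close()
print(content)  # hello
```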
docker run -e IP_FILE_NAME="input_file.txt" docker_image_name
In your Python code, read the filename from the IP_FILE_NAME environment variable. This assumes input_file.txt is present in the container; use the COPY instruction to copy the file from your build machine into the image:
COPY input_file.txt /src/input_file.txt
I'm writing unit tests for a Django app. I'm using Sublime Text. My app is set up to run in a Docker container. To run the tests currently, I have to go into the docker container (sudo docker exec -it {containerID} /bin/bash) and then run python manage.py test polls.
Is there some way to do this from sublime text's build system?
I know I could set up the whole app to run outside the container and then just Command-B to build and run locally, but I want to run it in the container.
I created a build system with the following:
{
"shell_cmd": "docker-compose exec -T web sh -c 'python manage.py test polls'"
}
That runs the tests. There's probably a way to get the app name and substitute it for polls, but for now this seems to work.
How do you export environment variables in the command executed by Supervisor? I first tried:
command="export SITE=domain1; python manage.py command"
but Supervisor reports "can't find command".
So then I tried:
command=/bin/bash -c "export SITE=domain1; python manage.py command"
and the command runs, but this seems to interfere with the daemonization since when I stop the Supervisor daemon, all the other daemons it's running aren't stopped.
To add a single environment variable, you can do something like this:
[program:django]
environment=SITE=domain1
command = python manage.py command
But if you want to export multiple environment variables, you need to separate them with commas:
[program:django]
environment =
SITE=domain1,
DJANGO_SETTINGS_MODULE=foo.settings.local,
DB_USER=foo,
DB_PASS=bar
command = python manage.py command
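For intuition, the comma-separated environment= value breaks down into key/value pairs roughly like this (an illustrative sketch, not supervisord's actual parser):

```python
# The environment= value from the config above, written as one string:
raw = "SITE=domain1,DJANGO_SETTINGS_MODULE=foo.settings.local,DB_USER=foo,DB_PASS=bar"

# Split on commas, then split each pair on its first '=':
env = dict(item.strip().split("=", 1) for item in raw.split(","))
print(env["SITE"])  # domain1
```

Each resulting pair ends up in the managed process's environment, so the Django command can read them with os.environ as usual.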
Just do it separately:
environment=SITE=domain1
command=python manage.py command
Refer to http://supervisord.org/subprocess.html#subprocess-environment for more info.