I have an ECS task set up which, when given a command override of ls, produces the expected results in my CloudWatch log stream: test.py. My script test.py takes one parameter. I am wondering how I can execute this script with python3 (which exists in my container) using the command override. Essentially, I want to execute the command:
python3 test.py hello
How can I do this?
Here's how I did something similar:
In your Dockerfile, make the command you want to run the last instruction. In your case:
CMD python3 test.py hello
To make it more extensible, use environment variables. For instance, do something like:
CMD ["python3", "test.py"]
and have the parameter come from an environment variable that you pass into the container definition in your task.
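If you would rather keep overriding the command at launch time (which is what the question asks), you can pass it in the task's container overrides. Below is a minimal sketch using boto3; the cluster, task definition, and container names are placeholders you would replace with your own:
import boto3

ecs = boto3.client('ecs')
response = ecs.run_task(
    cluster='my-cluster',             # placeholder: your ECS cluster
    taskDefinition='my-task-def',     # placeholder: your task definition
    overrides={
        'containerOverrides': [{
            'name': 'my-container',   # must match the container name in the task definition
            'command': ['python3', 'test.py', 'hello'],
        }]
    },
)
In the ECS console, the same override goes in the Command override field, entered as a comma-separated list: python3,test.py,hello.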
Related
I recently started using Snakemake and would like to run a shell script from my Snakefile. However, I'm having trouble accessing input, output and params. I would appreciate any advice!
Here are the relevant code snippets:
From my Snakefile:
rule ..:
    input:
        munged = 'results/munged.sumstats.gz'
    output:
        ldsc = 'results/ldsc.txt'
    params:
        mkdir = 'results/ldsc_results/',
        ldsc_sumstats = '/resources/ldsc_sumstats/',
    shell:
        'scripts/run_gc.sh'
and the script:
chmod 770 {input.munged}
mkdir -p {params.mkdir}
ldsc=$(ls {params.ldsc_sumstats})
for i in $ldsc; do
...
I get the following error message:
...
chmod: cannot access '{input.munged}': No such file or directory
ls: cannot access '{params.ldsc_sumstats}': No such file or directory
...
The {} placeholder syntax applies only to shell commands defined within the Snakefile, while in the example you provide the script is an external file.
If you want to keep the script external, you will need to pass the relevant values as arguments (and parse them inside the shell script), as sketched below. Otherwise, you can copy-paste the script content into the shell directive and let Snakemake substitute the {} variables.
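For example, a minimal sketch of the argument-passing approach (the positional variable names in the script are my own):
In the Snakefile:
    shell:
        'scripts/run_gc.sh {input.munged} {params.mkdir} {params.ldsc_sumstats}'
And in scripts/run_gc.sh:
#!/bin/bash
munged=$1        # the munged sumstats file
outdir=$2        # the directory to create
sumstats_dir=$3  # the directory to list
chmod 770 "$munged"
mkdir -p "$outdir"
ldsc=$(ls "$sumstats_dir")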
From v7.14.0, Snakemake also supports executing external Bash scripts via the script directive, with access to the snakemake objects inside. See the docs for example usage; a rough sketch follows.
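This sketch assumes the convention described in the docs, where the rule's properties are exposed to Bash scripts as associative arrays; verify the exact variable names against the documentation for your Snakemake version.
In the Snakefile, replace the shell directive with:
    script:
        'scripts/run_gc.sh'
And in scripts/run_gc.sh:
#!/bin/bash
chmod 770 "${snakemake_input[munged]}"
mkdir -p "${snakemake_params[mkdir]}"
ldsc=$(ls "${snakemake_params[ldsc_sumstats]}")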
Consider this issue.py file:
import subprocess
print('Calling subprocess...')
subprocess.run(['python', '--version'])
print('Subprocess is done!')
Executing python issue.py manually yields what I expect:
Calling subprocess...
Python 3.9.0
Subprocess is done!
However, if I execute this inside a Docker container, something weird happens:
$ docker run --rm -v $(pwd):/issue python:3.9.0 python /issue/issue.py
Python 3.9.0
Calling subprocess...
Subprocess is done!
How can I fix this, to make Docker respect the correct output order?
Notes:
This problem also happens with stderr, although the above MCVE does not show it.
The MCVE uses the python image directly but in my real use case I have a custom image from a custom Dockerfile which uses FROM python.
Using capture_output=True in the subprocess.run call and then printing the captured output is not an option for me, because my real use case invokes a subprocess that prints information to stdout over time (unlike python --version), and I cannot wait for it to complete just to print the entire output afterwards.
As @DavidMaze pointed out in a comment, I just needed to set the PYTHONUNBUFFERED environment variable to 1. This can be done, for example, with:
docker run --rm -e PYTHONUNBUFFERED=1 -v $(pwd):/issue python:3.9.0 python /issue/issue.py
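The underlying cause is that Python block-buffers stdout when it is not attached to a terminal, so the parent's print output waits in a buffer while the subprocess writes straight through. Two equivalent alternatives, if you control the image or the command line:
# In the Dockerfile, so every run of the image is unbuffered
ENV PYTHONUNBUFFERED=1
$ docker run --rm -v $(pwd):/issue python:3.9.0 python -u /issue/issue.py
The -u flag forces unbuffered stdout and stderr, same as PYTHONUNBUFFERED=1.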
I am using pipenv for managing my packages. I want to write a Python script that calls another Python script that uses a different virtual environment (VE).
How can I run Python script 1, which uses VE1, and have it call Python script 2, which uses VE2?
I found this code for the case where there is no need to change the virtual environment:
import os
os.system("python myOtherScript.py arg1 arg2 arg3")
The only idea I had was simply navigating to the target project and activating its shell:
os.system("cd /home/mmoradi2/pgrastertime/")
os.system("pipenv shell")
os.system("python test.py")
but it says:
Shell for /home/..........-GdKCBK2j already activated.
No action taken to avoid nested environments.
What should I do now?
In fact, my own code needs VE1 and the subprocess (the second script) needs VE2. How can I call the second script inside my code?
In addition, the second script is used as a command-line tool that accepts its inputs as flags:
python3 pgrastertime.py -s ./sql/postprocess.sql -t brasdor_c_07_0150 -p xml -f -r ../data/brasdor_c_07_0150.object.xml
How can I call it using the solution of @tzaman?
Each virtualenv has its own python executable which you can use directly to execute the script.
Using subprocess (more versatile than os.system):
import subprocess
venv_python = '/path/to/other/venv/bin/python'
args = [venv_python, 'my_script.py', 'arg1', 'arg2', 'arg3']
subprocess.run(args)
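Applied to the pgrastertime tool from the question, a sketch (the virtualenv path is hypothetical; print the real one from the project directory with pipenv --venv, or get the interpreter directly with pipenv --py):
import subprocess

# Hypothetical path; find the actual one with `pipenv --venv`
venv_python = '/home/mmoradi2/.virtualenvs/pgrastertime-GdKCBK2j/bin/python'

args = [
    venv_python, 'pgrastertime.py',
    '-s', './sql/postprocess.sql',
    '-t', 'brasdor_c_07_0150',
    '-p', 'xml',
    '-f',
    '-r', '../data/brasdor_c_07_0150.object.xml',
]
# Run from the project directory so the relative paths resolve
subprocess.run(args, cwd='/home/mmoradi2/pgrastertime/')
Alternatively, subprocess.run(['pipenv', 'run', 'python', 'pgrastertime.py', ...], cwd=...) lets pipenv locate the right environment for you.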
I built an AWS Batch compute environment. I want to run a python script in jobs.
Here is the Dockerfile I'm using:
FROM python:slim
RUN apt-get update
RUN pip install boto3 matplotlib awscli
COPY runscript.py /
ENTRYPOINT ["/bin/bash"]
The command in my task definition is:
python /runscript.py
When I submit a job in AWS console I get this error in CloudWatch:
/usr/local/bin/python: /usr/local/bin/python: cannot execute binary file
And the job gets the status FAILED.
What is going wrong? When I run the container locally, I can launch the script without any errors.
Delete your ENTRYPOINT line, and replace it with a CMD that says what the container should actually do.
There are two parts to the main command that a Docker container runs, ENTRYPOINT and CMD; these are combined together into one command when the container starts. The command your container is running is probably something like
/bin/bash python /runscript.py
So bash finds a python in its $PATH (successfully), and tries to run it as a shell script (leading to that error).
You don't strictly need an ENTRYPOINT, and here it's causing trouble. Conversely there's a single thing you usually want the container to do, so you should just specify it in the Dockerfile.
# No ENTRYPOINT
CMD ["python", "/runscript.py"]
You can try the following Dockerfile and task definition.
Dockerfile
FROM python:slim
RUN apt-get update
RUN pip install boto3 matplotlib awscli
COPY runscript.py /
ENTRYPOINT ["python"]
Task Definition
["/runscript.py"]
Passing the script name in the task definition gives you the flexibility to run any script when submitting a job. (The interpreter is set as an exec-form ENTRYPOINT rather than a CMD, so the command from the task definition is appended to it instead of replacing it.) Please refer to the example below for submitting a job with an overridden command.
import boto3

# job_name, AWS_BATCH_JOB_QUEUE and AWS_BATCH_JOB_DEFINITION are
# placeholders defined elsewhere in your code
session = boto3.Session()
batch_client = session.client('batch')
response = batch_client.submit_job(
    jobName=job_name,
    jobQueue=AWS_BATCH_JOB_QUEUE,
    jobDefinition=AWS_BATCH_JOB_DEFINITION,
    containerOverrides={
        'command': [
            '/main.py'
        ]
    }
)
I am writing a Python script and I want to execute some code only if the script is being run directly from the terminal, and not from another script.
How can I do this on Ubuntu without using any extra command-line arguments?
The answer here doesn't work for me:
Determine if the program is called from a script in Python
Here's my directory structure
home
|-testpython.py
|-script.sh
script.sh contains
./testpython.py
When I run ./script.sh, I want one thing to happen; when I run ./testpython.py directly from the terminal, without going through script.sh, I want something else to happen.
How do I detect the difference between these two ways of calling the script? Getting the parent process name always returns "bash" itself.
I recommend using command-line arguments.
script.sh
./testpython.py --from-script
testpython.py
import sys

if "--from-script" in sys.argv:
    pass  # from script: do one thing
else:
    pass  # not from script: do something else
You should probably be using command-line arguments instead, but this is doable. Simply check if the current process is the process group leader:
$ sh -c 'echo shell $$; python3 -c "import os; print(os.getpid.__name__, os.getpid()); print(os.getpgid.__name__, os.getpgid(0)); print(os.getsid.__name__, os.getsid(0))"'
shell 17873
getpid 17874
getpgid 17873
getsid 17122
Here, sh is the process group leader, and python3 is a process in that group because it is forked from sh.
Note that all processes in a pipeline are in the same process group and the leftmost is the leader.
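A minimal sketch of that check in Python: an interactive shell with job control makes each command the leader of its own process group, so a Python process forked from script.sh will not be the leader.
import os

if os.getpid() == os.getpgid(0):
    # This process leads its own process group:
    # it was most likely launched directly from the terminal.
    print("run directly")
else:
    # Forked into an existing group: most likely called from a script.
    print("called from a script")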