I have a python script in one of my docker containers. I'm trying to log errors that occur during execution of the script:
with open('logs.txt', 'a+') as filehandle:
    filehandle.write('Error message here')
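For what it's worth, a relative path like 'logs.txt' ends up in whatever the process's working directory happens to be, which can differ between a local run and docker-compose exec. A minimal sketch that pins the log to an absolute path instead (the /var/log/myapp location is only an assumption for illustration):

import os

LOG_PATH = '/var/log/myapp/logs.txt'  # assumed location; any absolute path works
os.makedirs(os.path.dirname(LOG_PATH), exist_ok=True)  # create the directory if missing
with open(LOG_PATH, 'a+') as filehandle:
    filehandle.write('Error message here\n')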
Locally, logs.txt is created when I run python path/to/script.py. However, when I run the script from Docker like so: docker-compose exec service_name python path/to/script.py, I can't locate the logs file.
I have gone through a lot of documentation about bind mounts, volumes and other storage options. However, none of it has helped.
I need help with locating the logs.txt file. It'd also be great to get info about the 'right way' of storing such data.
Edit: Here's what I've tried so far
I already tried to explore the contents of my container via: docker exec -it container_name /bin/sh. I still couldn't find logs.txt.
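One way to track it down from that shell is to search the container's filesystem for the file:

find / -name logs.txt 2>/dev/null

If the script ran, the file is usually sitting in whatever directory was current when it executed, often the image's WORKDIR.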
PS: I'm new to docker, so please forgive my ignorance.
Related
I'm trying to execute this jar file (https://github.com/RMLio/rmlmapper-java) from Airflow, but for some reason it fails straight away. I'm using a PythonOperator to execute some Python code, and inside it I have a subprocess call to the java command.
Test command is:
java -jar /root/airflow/dags/rmlmapper-6.0.0-r363-all.jar -v
I'm running Airflow inside a Docker container. The weird thing is that if I execute the exact same command inside the container it works fine.
I tried a bit of everything, but the result is always the same: the process dies with a segfault, exit code 139.
The memory of the container seems to be fine, so it shouldn't be directly related to an OOM issue. I also tried adjusting the default memory limits in the Docker compose file, with no success.
My suspicion is that the java application tries to load some files which are stored inside the jar file, but Airflow perhaps changes the 'user.dir' working directory, so the application cannot find them and fails.
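For what it's worth, one way to test that suspicion is to pin the working directory in the subprocess call itself. A sketch, assuming the DAG shells out to the test command above via subprocess:

import subprocess

result = subprocess.run(
    ['java', '-jar', '/root/airflow/dags/rmlmapper-6.0.0-r363-all.jar', '-v'],
    cwd='/root/airflow/dags',  # pins the JVM's user.dir to this directory
    capture_output=True,
    text=True,
)
# a returncode of -11 (SIGSEGV) corresponds to the shell's exit code 139
print(result.returncode, result.stderr)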
I'm really out of ideas so any help will be highly appreciated. Thank you.
I am trying to run a piece of Python software on a Windows system using Docker. For context, I am starting an internship in a couple of weeks during which I will be using the Python software OpenMC to model neutronics (https://docs.openmc.org/en/stable/). I believe the software was written for Linux, so to run it on a Windows machine I need to go through Docker. For the life of me, I cannot get this to work.
The main issue is that I cannot figure out how to actually execute a Python script within this Docker container. The primary instructions for this specific software (OpenMC) are in the Quick-Installation instructions and the Developer's Guide, both linked here:
https://docs.openmc.org/en/stable/quickinstall.html
https://docs.openmc.org/en/stable/devguide/docker.html
I am able to go through all the steps of the Developer's Guide, but once I am in this "interactive shell" I don't understand how to execute a Python script that I've written on my machine. I've been stumped on this issue for the better part of a week and could really use some guidance. I am verging on desperation here, as I really need to get my feet wet with this software before I start working, and right now I can't even get it to run.
Thank you for your time, and let me know if I can clarify anything.
As mentioned above, I figured this out. The key was to use an absolute filepath instead of a relative filepath on the volume mount, i.e.
docker run -it --name=my_openmc1 --rm -v $pwd/path:/containerdir [image]
instead of:
docker run -it --name=my_openmc1 --rm -v path:/containerdir [image]
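With the mount in place, anything saved on the host under that path is visible inside the container, so a script can be run directly from the interactive shell (the script name here is hypothetical):

python /containerdir/my_script.py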
I'm running the Airflow webserver inside Docker to run some Python scripts, but when a file in the /var directory is created after the webserver has started, the Python scripts inside the DAGs simply don't see any changes. For example, this sequence of commands produces the following results (everything is done inside the docker container):
touch /var/test/testfile
and after that, Python from a shell:
os.listdir('/var/test')  # returns ['testfile']
while a Python script inside an Airflow DAG:
os.listdir('/var/test')  # returns []
If something was already in the /var directory when the container started (for example, a volume attached from the host with docker.sock, or system files), then both the shell and Airflow see those files.
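One hedged way to narrow this down: in multi-container Airflow setups the webserver, scheduler, and workers are often separate containers with separate filesystems, so a file touched in one container never appears in another. A small diagnostic task (names are illustrative) that prints which host actually executes it:

import os
import socket

def diagnose():
    # shows which container runs the task, and what that container sees
    print('hostname:', socket.gethostname())
    print('contents:', os.listdir('/var/test'))

Wired into a PythonOperator as usual; if the printed hostname differs from the container where touch was run, the file was simply created in a different container.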
Sorry for my English if something isn't clear, and thanks for the help.
I know similar questions have been asked, but I couldn't get the answers working, or they were not specific enough for me, since I am fairly new to Docker. My question is similar to this thread: How to move Docker containers between different hosts? but I don't fully understand the answer, or I can't get it working.
My problem: I am using Docker Desktop to run a Python script locally in a container, but I want this script to be able to run on a Windows Server 2016 machine. The script is a short web scraper which creates a csv file.
I am aware I need to install some sort of Docker on the web server, and that I need to export my container and load it on the server.
The thread referred to above says I need to use docker commit psscraper, but when I try it, I get: "Error response from daemon: No such container: psscraper." This is probably because the container ran but has since stopped, as the program runs for only a few seconds. psscraper is in the docker ps -a list but not in the docker ps list, so I guess it has something to do with that.
psscraper is the name of the Python file.
Is there anyone who could enlighten me on how to proceed?
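For reference, a sketch of the usual commit/save/load round trip (names taken from the question; docker commit also accepts stopped containers, so the exact name shown by docker ps -a is what matters):

docker commit psscraper psscraper-image
docker save -o psscraper-image.tar psscraper-image

then, after copying the tar file to the server:

docker load -i psscraper-image.tar
docker run --rm psscraper-image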
I am building a webapp (a simple Flask site) that uses Docker. I want my development code to not reside within Docker, but be executed by the development environment (using python3) I have defined in my Dockerfile. I know that I can use the COPY . . syntax in a Dockerfile to copy my source code into the image for execution, but that violates my aim of separating the container from my source. Is there a way to have a Docker container read and execute the code that is in the directory I run the docker container run command from?
Right now my container uses the COPY command to copy all the source code into the container. It then uses the CMD instruction to automatically run the Flask app:
CMD [ "python", "flask_app/server.py" ]
(I'm storing all my Flask code in a directory called flask_app). I'm assuming this works because all of it has been copied into the container (according to the specifications given in the Dockerfile) and is executed when I run the container. I would like the container to instead access and execute flask_app/server.py without copying this information into itself -- is this possible? If so, how?
Instead of using COPY to move the code into the container, you'll use a "bind mount" (https://docs.docker.com/storage/bind-mounts/).
When you run the container, you'll do it with a command like this:
docker run --mount type=bind,source=<path_outside_container>,target=<path_inside_container> <image_tag>
For portability, I recommend putting this line in a script intended to be run from the repository root, and having <path_outside_container> be "$(pwd)", so that it will work on other people's computers. You'll need to adjust <path_inside_container> and your CMD depending on where you want the code to live inside the container.
(Obviously you can also put whatever other options you'd like on the command, like -it, --rm, or -p <whatever>.)
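Putting that together for the layout in the question, a sketch of such a wrapper script, run from the repository root (the /app target, image tag, and port mapping are assumptions; adjust them to your Dockerfile):

#!/bin/sh
# mounts the current directory over /app so host edits are picked up without rebuilding
docker run --rm -it \
  -p 5000:5000 \
  --mount type=bind,source="$(pwd)",target=/app \
  my-flask-image \
  python /app/flask_app/server.py

Because the code is read from the host at run time, edits show up on the next run without rebuilding the image.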