I am trying to run Python software on a Windows system using Docker. For context, I am starting an internship in a couple of weeks during which I will be using the Python software OpenMC to model neutronics (https://docs.openmc.org/en/stable/). I believe the software was written for Linux, so to run it on a Windows machine I need to go through Docker. For the life of me, I cannot get this to work.
The main issue is that I cannot figure out how to actually execute a Python script within this Docker container. The primary instructions for this specific software (OpenMC) are in the Quick Install instructions and the Developer's Guide, both linked here:
https://docs.openmc.org/en/stable/quickinstall.html
https://docs.openmc.org/en/stable/devguide/docker.html
I am able to go through all the steps of the Developer's Guide, but once I am in this "interactive shell" I don't understand how to execute a Python script that I've written on my machine. I've been stumped on this for the better part of a week and could really use some guidance. I am verging on desperation here, as I really need to get my feet wet with this software before I start working, and right now I can't even get it to run.
Thank you for your time, and let me know if I can clarify anything.
As mentioned above, I figured this out. The key was to use an absolute filepath instead of a relative filepath in the volume mount (note that $pwd expands in PowerShell; in a bash shell use $(pwd)), i.e.
docker run -it --name=my_openmc1 --rm -v $pwd/path:/containerdir [image]
instead of:
docker run -it --name=my_openmc1 --rm -v path:/containerdir [image]
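To answer the original question of actually executing a script, the same idea can be sketched from Python, which also makes the absolute-path requirement explicit. Everything here is illustrative: the image name, script name, and helper function are made up, not part of OpenMC.

```python
from pathlib import Path

def build_docker_cmd(host_path, container_dir, image, script):
    """Build a docker run argument list that mounts an absolute host path
    and executes a script inside the container. All names passed in
    (e.g. my_image, my_script.py) are placeholders."""
    host_dir = Path(host_path).resolve()  # resolve to an absolute path;
                                          # a relative path is not a valid mount source
    return [
        "docker", "run", "-it", "--rm",
        "-v", f"{host_dir}:{container_dir}",     # bind-mount host dir into the container
        image,
        "python", f"{container_dir}/{script}",   # run the script inside the container
    ]

# e.g. build_docker_cmd("path", "/containerdir", "my_image", "my_script.py")
```

The last two arguments are the part that answers the question above: once the directory is mounted, the script can be passed directly as the container command instead of dropping into an interactive shell first.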
Related
I have a python script in one of my docker containers. I'm trying to log errors that occur during execution of the script:
with open('logs.txt', 'a+') as filehandle:
    filehandle.write('Error message here')
Locally, logs.txt is created when I run python path/to/script.py. However, when I run the script from Docker like so: docker-compose exec service_name python path/to/script.py, I can't locate the logs file.
I have gone through a lot of documentation about bind mounts, volumes, and other storage options. However, none of it has helped.
I need help with locating the logs.txt file. It'd also be great to get info about the 'right way' of storing such data.
Edit: Here's what I've tried so far
I already tried exploring the contents of my container via docker exec -it container_name /bin/sh, but I still couldn't find logs.txt.
PS: I'm new to docker, so please forgive my ignorance.
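For the logging question above, one common convention (not the only one) is to log to stdout instead of a file, so that docker logs or docker-compose logs service_name shows the messages without having to locate a file inside the container at all. A minimal sketch; the logger name and the error being caught are made up:

```python
import logging
import sys

# Send log records to stdout so Docker's log collector picks them up.
logging.basicConfig(stream=sys.stdout, level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
logger = logging.getLogger("scraper")  # hypothetical logger name

def do_work():
    try:
        raise ValueError("something went wrong")  # stand-in for the real failure
    except ValueError as exc:
        logger.error("Error message here: %s", exc)

do_work()
```

If a file is genuinely needed, write it under a directory that is bind-mounted from the host (a volumes: entry in docker-compose.yml); otherwise it lands in the container's working directory and disappears with the container.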
I know similar questions have been asked, but I couldn't get the answers working, or they were not specific enough for me since I am fairly new to Docker. My question is similar to the one in this thread: How to move Docker containers between different hosts? But I don't fully understand the answer, and I can't get it working.
My problem: I am using Docker Desktop to run a Python script locally in a container, but I want this script to be able to run on a Windows Server 2016 machine. The script is a short web scraper that creates a CSV file.
I am aware that I need to install some form of Docker on the server, export my container, and then load the container on the server.
The thread referred to above says that I need to use docker commit psscrape, but when I try it I get: "Error response from daemon: No such container: psscraper." This is probably because the container ran and then stopped, since the program only runs for a few seconds: psscraper is in the docker ps -a list but not in the docker ps list. I guess it has something to do with that.
psscraper is the name of the python file.
Is there anyone who could enlighten me on how to proceed?
So there are variants of this question, but none quite hits the nail on the head.
I want to run Spyder and do interactive analysis on a server. I have two servers; neither has Spyder. They both have Python (they are Linux servers), but I don't have sudo rights to install the packages I need.
In short, the use case is: open Spyder on my local machine, do something (this is where I need help) to use the server's computation power, and then return the results to my local machine.
Update:
I have updated Python with my packages on one server. Now to figure out the kernel name and link it to Spyder.
Leaving previous version of question up, as that is still useful.
The Docker process is a little intimidating, as is paramiko. What are my options?
(Spyder maintainer here) What you need to do is create a Spyder kernel on your remote server and connect to it through SSH. That's the only facility we provide to do what you want.
You can find the precise instructions to do that in our docs.
I did a long search for something like this in my past job, when we wanted to quickly iterate on code which had to run across many workers in a cluster. All the commercial and open source task-queue projects that I found were based on running fixed code with arbitrary inputs, rather than running arbitrary code.
I'd also be interested to see if there's something out there that I missed. But in my case, I ended up building my own solution (unfortunately not open source).
My solution was:
1) I made a Redis queue where each task consisted of a zip file with a bash setup script (for pip installs, etc), a "payload" Python script to run, and a pickle file with input data.
2) The "payload" Python script would read in the pickle file or other files contained in the zip file. It would output a file named output.zip.
3) The task worker was a Python script (running on the remote machine, listening to the Redis queue) that would unzip the file, run the bash setup script, then run the Python script. When the script exited, the worker would upload output.zip.
There were various optimizations; for example, the worker wouldn't run the same bash setup script twice in a row (it remembered the SHA1 hash of the most recent setup script). So, anyway, in the worst case you could do that. It was a week or two of work to set up.
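The worker described in steps 1-3 could look roughly like the sketch below. The file names (setup.sh, payload.py, output.zip) are illustrative, and the Redis polling and upload steps from the original design are deliberately omitted; this only shows the unzip / setup-dedup / run cycle:

```python
import hashlib
import io
import subprocess
import sys
import zipfile
from pathlib import Path

_last_setup_hash = None  # SHA1 of the most recently run setup script

def run_task(zip_bytes: bytes, workdir: Path) -> Path:
    """Unpack one task zip into workdir, run its setup script unless it is
    identical to the previous one, then run the payload script."""
    global _last_setup_hash
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        zf.extractall(workdir)
    setup = workdir / "setup.sh"
    if setup.exists():
        digest = hashlib.sha1(setup.read_bytes()).hexdigest()
        if digest != _last_setup_hash:  # skip re-running an identical setup
            subprocess.run(["bash", str(setup)], cwd=workdir, check=True)
            _last_setup_hash = digest
    # the payload is expected to read its inputs from workdir and produce
    # output.zip, which the real worker would then upload
    subprocess.run([sys.executable, "payload.py"], cwd=workdir, check=True)
    return workdir / "output.zip"
```

A real worker would wrap this in a loop that blocks on the Redis queue for the next zip and pushes output.zip back when the task finishes.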
Edit:
A second (much more manual) option, if you just need to run on one remote machine, is to use sshfs to mount the remote filesystem locally so you can quickly edit the files in Spyder. Then keep an SSH window open to the remote machine and run Python from the command line to test-run the scripts on that machine. (That's my standard setup for developing Raspberry Pi programs.)
On my Windows 10 machine, I am developing a database manager. Because the backend uses LDAP and the required development libraries are only available for Linux, I want to use Docker to set up an environment with the appropriate libs.
I managed to write a Dockerfile and compose file that launch the (currently very basic) Django app in a Docker container with all the necessary libs.
I would like to play around with the django-ldapdb package and for that I want to apply the migrations.
When I open PyCharm's terminal and try to execute python manage.py migrate, I get an error telling me that the module ldapdb is not found. I suspect this is because the command does not use the remote Docker interpreter I set up with PyCharm.
The other thing I tried is PyCharm's dedicated manage.py console. This does not initialize properly: it says the working directory is invalid and needs to be an absolute path, although the path it shows is the absolute path to the project.
I have to admit that I have no idea how this remote interpreter works and I don't see any Docker container running, so I might have not understood something properly here. I even tried running the app using PyCharm's Django run config, which started a container, but still I get the same errors.
I googled a lot, but I couldn't find more info about remote interpreters or anything that solves my issue.
The only way I managed to do this is by executing the command inside the container.
To get inside a container named contr, use the docker command
docker exec -ti contr /bin/bash
I am trying to connect to Heroku bash on Windows and use the IPython shell from python manage.py shell to help me type commands faster.
While this works fine on my Mac, on my Windows machine the colour coding of the Heroku bash IPython shell and the tab-autocomplete feature do not work.
Is there some other tool I can use or configure? I tried installing bash on Windows, and it gives me the same result.
You should NOT, NOT, NOT be using Heroku's bash shell for casual Python coding. This is an awful idea for many reasons:
Heroku dynos don't have a persistent filesystem. Any files on your dyno can be deleted randomly.
The amount of time it will take you to build an IPython setup/configuration from scratch and get it running on your dyno is not worth the effort.
Heroku dynos are meant to run web processes, and they restart randomly; your terminal session may blow up at any point.
If you really want an authentic 'shell' experience, I recommend using a real shell for development: either get yourself a virtual machine and install Ubuntu, or spin up an Ubuntu server machine through a host like DigitalOcean or Amazon.