I am trying to connect a script on a Docker host to a script on a Docker container.
The scripts are using Python's remote queue manager, and they work perfectly outside of Docker, so I'm quite sure the issue is with my Docker configuration or my understanding of Docker port forwarding.
The script on the container binds correctly to (localhost,5800), and I verified the script does not crash.
I've tried getting the script to connect to the IP address of the container on port 5800, and that doesn't work (Connection refused). I've also tried using the -p flag and forwarding 5800 to a random port, then connecting to (localhost,randomport) from the Docker host and that doesn't work either (Connection refused).
Again, the script is definitely running: the issue occurs even when I open a shell in the container, launch the script manually, and confirm that the server starts and stays up.
To me this seems like the exact same problem as running a webserver within a Docker container. Why is this not working? The scripts work outside of Docker just fine.
https://github.com/hashme/thistle/tree/flask_thistle
(see room.py for container script and app.py for host script; I'm not running the scripts exactly but hacking away in a REPL, so I've adjusted many parameters without success)
To replicate the problem, first run ./container.sh, then (in a REPL) import app and create a MessagePasser with some IP address and port number. Running the app.py script does nothing.
The script on the container binds correctly to (localhost,5800)
You need to make sure that within the container the script binds to the "0.0.0.0" (all interfaces) address, not localhost (loopback). Otherwise it won't be able to accept any external connections.
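For illustration, here is a minimal sketch of that fix, assuming the scripts use the standard-library multiprocessing.managers BaseManager (Python's "remote queue manager") and that the container is started with something like docker run -p 5800:5800 ...; the names and authkey below are placeholders, not the ones in room.py/app.py.
Inside the container:

from multiprocessing.managers import BaseManager
from queue import Queue

queue = Queue()

class QueueManager(BaseManager):
    pass

QueueManager.register('get_queue', callable=lambda: queue)

if __name__ == '__main__':
    # Bind to all interfaces so connections coming through the -p port
    # mapping (or straight to the container IP) are accepted.
    manager = QueueManager(address=('0.0.0.0', 5800), authkey=b'secret')
    manager.get_server().serve_forever()

On the Docker host, connect through the published port (or the container's IP):

from multiprocessing.managers import BaseManager

class QueueManager(BaseManager):
    pass

QueueManager.register('get_queue')   # no callable on the client side

manager = QueueManager(address=('localhost', 5800), authkey=b'secret')
manager.connect()
manager.get_queue().put('hello from the host')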
Related
I am successfully running a Django app that is hosted inside a Docker container. I changed something in my code on purpose in order to make it break. What I need is a way to see the log of the running code as if I were running it locally on my computer. For example, if I forget to import a library and run the code locally, I get a message in the terminal like "ModuleNotFoundError: No module named 'somemodule'". But when I run the same code inside the container, I get no log; the container just fails to start.
My question is: How can I get a log for my script from inside the container, so I can debug my code?
So, what I wanted was to somehow debug/run my own Python code inside the container in order to see its log.
I managed to do it using VS Code with the Remote - SSH and Remote - Containers extensions:
Remote SSH
Remote - Containers
If the containers are hosted locally on your PC, you don't need the Remote - SSH extension.
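Independently of the editor setup, if the Django process is the container's main process, the traceback (e.g. the ModuleNotFoundError) usually ends up on the container's stdout/stderr, which the Docker CLI can show directly; the container name below is just an example:

docker logs mydjango       # print whatever the process wrote before it exited
docker logs -f mydjango    # follow the output while the container (re)starts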
My situation is that I have set up a container in a remote server, and inside the container, there is a virtual environment. I'm using the python interpreter inside this virtual environment in this container, not the one on the host.
From my local machine I can open PyCharm and use Tools->Deployment->Configuration to easily set up a remote connection. For a specific project, I can set the interpreter by clicking Files->Settings->Project Interpreter. However, it seems that I can only select the host Python interpreter (/usr/bin/python) on the remote server, not the one inside the virtual environment in the container. How can I set it up to use that interpreter?
I googled but can't find an exact solution. I don't think I need to install Docker locally, because my Docker is on the remote server side, right?
In the same way you are connecting to the remote host, you would need to set up the container with the same capabilities, e.g. run an SSH server inside it. Then you would either expose that port to the outside world or use a nested SSH tunnel, which would be the better alternative.
Another interesting (and perhaps recommended) approach is to forward the Docker socket from the remote machine, so that your local Docker CLI uses this socket to send commands to the remote host. Theoretically, you could then add this container directly in PyCharm once you set the correct Docker host address there.
Further, virtual environments on anything other than the local host are not supported natively by PyCharm. However, you could try adding the path to the Python binary and see if it works, e.g. venv/bin/python from the project directory.
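As a sketch of the socket-forwarding idea: Docker CLI 18.09+ can talk to a remote daemon over SSH, so something like the following (the user and host names are placeholders) makes your local docker commands, and tools that honour DOCKER_HOST, operate on the containers running on the remote machine:

export DOCKER_HOST=ssh://me@remote-server
docker ps    # now lists the containers on the remote server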
I'm currently running a locally hosted website, created in Flask as part of a project. I wanted the server to be externally visible, so I checked the docs and found this:
If you run the server you will notice that the server is only
accessible from your own computer, not from any other in the network.
This is the default because in debugging mode a user of the
application can execute arbitrary Python code on your computer.
If you have the debugger disabled or trust the users on your network,
you can make the server publicly available simply by adding
--host=0.0.0.0 to the command line:
$ flask run --host=0.0.0.0
This tells your operating system to listen on all public IPs.
I assumed the same applied when running it with Python, so I started the server like this:
if __name__ == "__main__":
    app.debug = True
    app.run(host='0.0.0.0')
When running this, the terminal said Running on http://0.0.0.0:5000/, but clicking on that link just gave a "this site can't be reached" error. I tried going to the default address it normally directed to (127.0. ...), and although this launched the site on my laptop (where Flask is running), I still couldn't access the site from other devices on the same network, even when copy-pasting the URL.
I also tried typing my laptop's IPv4 address (192.168. ...) followed by :5000, but I still was unable to connect to the server.
What have I done wrong?
Run this in your terminal:
sudo ufw allow 5000
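To double-check the other half (that Flask really is listening on all interfaces and is reachable over the LAN), something along these lines can help; 192.168.1.23 stands in for the laptop's actual IPv4 address:

ss -tlnp | grep 5000              # should show 0.0.0.0:5000, not 127.0.0.1:5000
curl http://192.168.1.23:5000/    # run this from another device on the same network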
I have a remote machine at my workplace where we developers run servers and Docker containers. Everything was working fine, but a while back something went wrong.
If I run the Python Flask app
from app import app
app.run(host='0.0.0.0', port=5050)
I get the message
* Running on http://0.0.0.0:5050/
and I am able to access the above from my local machine using the remote server machine's ip:5050. But if I run a Docker container with docker run -itd <conta_image_name> -p 80:90 --add-host=localdomain.com:machine_ip_address, I get an error message saying "IPv4 forwarding is disabled. Networking will not work."
Now this issue is in production, so I really need someone to shed some light on what might be wrong, or let me know what more info I need to provide.
I have fixed this issue myself following this: https://success.docker.com/article/ipv4-forwarding
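For reference, the fix in that article essentially comes down to re-enabling IPv4 forwarding in the kernel; roughly, on a systemd-based host:

sysctl net.ipv4.ip_forward                                        # 0 means forwarding is disabled
sudo sysctl -w net.ipv4.ip_forward=1                              # enable it immediately
echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.conf     # persist across reboots
sudo systemctl restart docker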
Another solution is:
Try adding --net=host to the docker run command.
https://medium.com/@gchandra/docker-ipv4-forwarding-is-disabled-8499ce59231e
I have a lot of Docker containers, and my idea is to have one SSH server so that typing ssh <containerid>@myserver actually does a docker attach to the specific container. What I need is a way, after someone runs someuser@host, to run a Python script that makes a tunnel to the Docker container, the same way Git works over SSH.
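One possible sketch of that mechanism, mirroring how Git handles SSH access: an SSH forced command (a command="..." entry in authorized_keys, or ForceCommand in sshd_config) that hands the session to a small dispatcher which attaches to the requested container. In this sketch the container id is passed as the SSH command (ssh someuser@myserver <containerid>) rather than as the user name, and the paths and names below are assumptions, not a tested setup:

#!/usr/bin/env python3
# Hypothetical dispatcher, run as the SSH forced command on myserver.
# Invoked as: ssh someuser@myserver <containerid>
import os
import sys

container = os.environ.get("SSH_ORIGINAL_COMMAND", "").strip()
if not container:
    sys.exit("usage: ssh someuser@myserver <containerid>")

# Replace this process with an attach to the requested container.
os.execvp("docker", ["docker", "attach", container])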