I have a lot of docker containers, and my idea is to have one SSH server so that when I type ssh <containerid>@myserver it actually does docker attach to the specific container. What I need is a way to run a Python script after someuser@host connects, which then makes a tunnel to the docker container, the same way git works over SSH.
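To sketch the idea: sshd on the gateway could run a forced command that hands the session to a small Python wrapper, which then attaches to the requested container. This is only a sketch of that shape, assuming the container id arrives either as the SSH login name or in SSH_ORIGINAL_COMMAND, and that the gateway user is allowed to run docker:

import getpass
import os
import subprocess
import sys

def main():
    # Container id taken from the forced command's environment, or from the
    # login name when connecting as ssh <containerid>@myserver.
    container_id = os.environ.get("SSH_ORIGINAL_COMMAND", "").strip() or getpass.getuser()
    if not container_id:
        sys.exit("no container id supplied")
    # Hand the SSH session over to the container's primary process.
    subprocess.run(["docker", "attach", container_id], check=False)

if __name__ == "__main__":
    main()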
I've managed to deploy a Django app inside a docker container on my personal Mac using localhost with Apache. For this, I use docker-compose with the build and up commands. I'm trying to run the same Django app on a CentOS server using a docker image generated on my local machine. Apache is also running on the server on port 90.
docker run -it -d --hostname xxx.xxx.xxx -p 9090:9090 --name test idOfImage
How can I access this container with Apache using the hostname and port number in the URL? Any help would be greatly appreciated. Thanks.
From other containers, the best way to access this container is to attach both to the same network and use the container's --name as a DNS name together with the internal port (the second port in the -p option; the -p itself isn't strictly required for this case). From outside a container, or from other hosts, use the host's IP address or DNS name and the published port (the first port in the -p option).
The docker run --hostname option isn't especially useful; the only time you'd want to specify it is if you have some magic licensed software that only runs if it has a particular hostname.
Avoid localhost in a Docker context, except for the very specific case where you know you're running a process on the host system outside a container and you're trying to access a container's published port or some other service running on the host. Don't use "localhost" as a generic term; it has a very specific, context-dependent meaning (every process believes it's running "on localhost").
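To make the two cases concrete, here is a minimal Python sketch (assuming the Django app answers HTTP on port 9090, the container was started with --name test as above, the client container shares a user-defined network with it, and my-centos-server.example.com stands in for your server's real DNS name):

import requests

# From another container on the same user-defined Docker network:
# the target container's --name works as a DNS name, and you use
# the internal port (the second 9090 in -p 9090:9090).
requests.get("http://test:9090/")

# From the host itself or from another machine: use the host's DNS name
# or IP address and the published port (the first 9090 in -p 9090:9090).
requests.get("http://my-centos-server.example.com:9090/")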
I am using docker-compose with:
an existing (python) app running inside a docker container.
another (ruby) command-line app running in a docker container.
How do I 'connect' those two containers so that the python container can call the command-line app in the ruby container? (and pass arguments via stdin/stdout)
Options are available, but not great. If you're using a recent version of Docker Compose then both containers will be in the same Docker network and can communicate, so you could install sshd in the destination container and make ssh calls from the source container.
Alternatively, use Docker in Docker with the source container, so you can run docker exec inside the source and execute commands on the target container.
It's low-level communication though, and raising it to a service call or message passing would be better, if changing your apps is feasible.
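If you go the sshd route, the calling side in the Python container might look roughly like the sketch below; it assumes paramiko is installed, the Ruby container is reachable under the Compose service name ruby-cli, sshd in it accepts the key at /run/secrets/id_rsa for user app, and the command path /app/cli.rb exists (all of those names are assumptions):

import paramiko

# Connect to the Ruby container over the shared Compose network.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("ruby-cli", username="app", key_filename="/run/secrets/id_rsa")

# Run the command-line app with arguments and feed it stdin.
stdin, stdout, stderr = client.exec_command("ruby /app/cli.rb --format json")
stdin.write("input data\n")
stdin.channel.shutdown_write()

print(stdout.read().decode())
client.close()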
How can you make your local Django development server think it's running inside your AWS network using SSH tunneling?
My scenario: I'm running a local Django server, i.e. python manage.py runserver, with Redis as the cache backend (Elasticache). When my app runs in the AWS environment it has access to Elasticache, but locally it doesn't (and that's a good thing). If for some reason I want to test my local environment against Elasticache, I need to somehow use SSH tunneling to make AWS think it's running inside the VPC network.
I've tried to get this working as shown below. I've confirmed I can connect locally using SSH tunneling with Redis Desktop Manager, so I know for sure AWS supports this; my problem is doing the same thing with Django.
This is what I've tried:
> python manage.py runserver 8000
> ssh -i mykey.pem ec2-user@myRandomEC2.com -L 6379:localhost:6379
I get an "Error 60 connecting to" message when I visit http://127.0.0.1:8000/.
What am I doing wrong here?
Notes:
ec2-user@myRandomEC2.com is not the Redis server, just another EC2 instance on AWS that has access to Elasticache, which I want to use as a tunnel.
The mykey.pem key has access and the correct permissions.
The ec2 instance has all the correct permissions and ports for access.
Tested SSH tunneling with Redis Desktop Manager and this works for that software.
Elasticache and the EC2 instances are all in the same region and can connect to each other.
ssh -i mykey.pem ec2-user@myRandomEC2.com -L 6379:localhost:6379
This will forward requests from your local machine (on :6379) to localhost:6379 on the EC2 instance. This is not what you want (unless you have Redis running locally on the EC2 instance). You should use the Elasticache IP instead:
ssh -i mykey.pem ec2-user@myRandomEC2.com -L 6379:<elasticache-ip>:6379
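With the tunnel up, Django should then point at the local end of the tunnel (127.0.0.1:6379) rather than at the Elasticache endpoint directly. A sketch of the cache setting, assuming django-redis as the cache backend (adjust for whichever backend you actually use):

# settings.py (local development only): talk to the SSH tunnel's local end.
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/0",
    }
}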
I have two applications:
a Python console script that does a short(ish) task and exits
a Flask "frontend" for starting the console app by passing it command line arguments
Currently, the Flask project carries a copy of the console script and runs it using subprocess when necessary. This works great in a Docker container but they are too tightly coupled. There are situations where I'd like to run the console script from the command line.
I'd like to separate the two applications into separate containers. To make this work, the Flask application needs to be able to start the console script in a separate container (which could be on a different machine). Ideally, I'd like to not have to run the console script container inside the Flask container, so that only one process runs per container. Plus I'll need to be able to pass the console script command line arguments.
Q: How can I spawn a container with a short lived task from inside a container?
You can just give the container access to execute docker commands. It will either need direct access to the docker socket, or it will need the various TCP environment variables and files (client certs, etc.). Obviously it will also need a docker client installed in the container.
A simple example of a container that can execute docker commands on the host:
docker run -v /var/run/docker.sock:/var/run/docker.sock your_image
It's important to note that this is not the same as running a docker daemon in a container. For that you need a solution like jpetazzo/dind.
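If you'd rather not shell out to the docker CLI from Flask, the official Python SDK (the docker package) can talk to that same mounted socket. A minimal sketch, where the image name and command-line arguments are placeholders:

import docker

# Uses the Docker socket mounted at /var/run/docker.sock (see the docker run above).
client = docker.from_env()

# Spawn a short-lived sibling container, wait for it to finish, and capture its output.
output = client.containers.run(
    "console-script-image",             # hypothetical image for the console script
    command=["--job", "nightly-sync"],  # hypothetical arguments for the console script
    remove=True,                        # clean up the container when it exits
)
print(output.decode())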
I am trying to connect a script on a Docker host to a script on a Docker container.
The scripts are using Python's remote queue manager, and they work perfectly outside of Docker, so I'm quite sure the issue is with my Docker configuration or my understanding of Docker port forwarding.
The script on the container binds correctly to (localhost,5800), and I verified the script does not crash.
I've tried getting the script to connect to the IP address of the container on port 5800, and that doesn't work (Connection refused). I've also tried using the -p flag and forwarding 5800 to a random port, then connecting to (localhost,randomport) from the Docker host and that doesn't work either (Connection refused).
Again, the script is definitely running, since the issue occurs even when I get a shell on the container and manually launch the script, ensuring it successfully launches the server and does not shut it down.
To me this seems like the exact same problem as running a webserver within a Docker container. Why is this not working? The scripts work outside of Docker just fine.
https://github.com/hashme/thistle/tree/flask_thistle
(see room.py for container script and app.py for host script; I'm not running the scripts exactly but hacking away in a REPL, so I've adjusted many parameters without success)
To replicate the problem, first run ./container.sh, then (in a REPL) import app and create a MessagePasser with some IP address and port number. Running the app.py script does nothing.
The script on the container binds correctly to (localhost,5800)
You need to make sure that within the container the script binds to the "0.0.0.0" (all interfaces) address, not localhost (loopback). Otherwise it won't be able to accept any external connections.
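For Python's remote queue manager that means the server inside the container should be set up roughly like this (a sketch; the registered name, port, and authkey are only illustrative):

from multiprocessing.managers import BaseManager
from queue import Queue

queue = Queue()

class QueueManager(BaseManager):
    pass

QueueManager.register("get_queue", callable=lambda: queue)

# Bind to 0.0.0.0 so connections arriving through Docker's port mapping
# (or aimed at the container's IP) are accepted; binding to "localhost"
# only accepts connections that originate inside the container itself.
manager = QueueManager(address=("0.0.0.0", 5800), authkey=b"secret")
server = manager.get_server()
server.serve_forever()

From the Docker host you would then connect to ("127.0.0.1", <published port>) if you started the container with -p, or to the container's IP on 5800 otherwise.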