I have this python server:
import SocketServer

class TCPHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        print 'client is connected'
        data = self.request.recv(1024)
        print data
        self.request.sendall('Message received!')

HOST, PORT = '0.0.0.0', 5004
server = SocketServer.TCPServer((HOST, PORT), TCPHandler)
print 'Listening on port {}'.format(PORT)
server.serve_forever()
and this client:
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('192.168.56.101', 5004))
s.sendall('Hello my name is John')
response = s.recv(1024)
print 'Response: {}'.format(response)
My OS is macOS, and I have Ubuntu 14.04 installed in a virtual machine using VirtualBox. In VirtualBox, I set up a NAT network and gave Ubuntu the IP address 192.168.56.101. I put the server program on Ubuntu and added an iptables rule to allow incoming connections on port 5004. I started the server on Ubuntu and connected to it with the client above from macOS. The connection went through and the data exchange completed successfully.
Now to the problem. I installed Docker on my virtualized Ubuntu. The Docker image itself uses another Ubuntu 14.04. What I want is to run the server inside the Dockerized Ubuntu, so I wrote this Dockerfile:
FROM bamos/ubuntu-opencv-dlib-torch:ubuntu_14.04-opencv_2.4.11-dlib_19.0-torch_2016.07.12
ADD . /root
EXPOSE 5004
CMD ["python2", "/root/server.py"]
I built it using this command: sudo docker build -t boring91/mock and the build succeeded. I ran the Docker container using this command: sudo docker run -p 5004:5004 -t boring91/mock and it showed that it started listening on port 5004. When I try to connect to it using my client from macOS, the socket connects but no data exchange happens. The same thing happens when I run the client on the virtualized Ubuntu. Can anyone tell me what the issue is here?
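For reference, the exchange itself is easy to verify in isolation. Below is a hedged sanity check only, not the Docker setup: a Python 3 port of the server/client pair above, run against loopback on an ephemeral port, to confirm the handler logic is sound before suspecting the network path.

```python
import socket
import socketserver
import threading

class TCPHandler(socketserver.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(1024)              # read the client's message
        self.request.sendall(b'Message received!')  # reply, as in the server above

# Bind to an ephemeral loopback port and serve in a background thread
server = socketserver.TCPServer(('127.0.0.1', 0), TCPHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Same exchange as the client above
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('127.0.0.1', port))
s.sendall(b'Hello my name is John')
response = s.recv(1024)
s.close()
server.shutdown()
print(response.decode())
```

If this succeeds locally but the containerized version hangs, the problem is in the networking layers between client and container rather than in the socket code.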
I have the following code: a Flask app that runs on a Raspberry Pi, inside a Docker container with NginX and uWSGI.
I need to be able to connect to the local SSH and also to get the correct MAC address of the Raspberry Pi.
import socket
import time
import pwd
import os

from ssh2.session import Session
from flask import Flask
from flask import request
from getmac import get_mac_address as gma  # presumably where gma() below comes from

app = Flask(__name__)

usernameSSH = pwd.getpwuid(os.getuid()).pw_name
passwordSSH = "password"
host = 'localhost'  # the specified host for the SSH

@app.route("/")
def hello():
    return gma()  # get MAC address of the device

@app.route("/ssh")  # this is the connection to ssh
def welcome():
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((host, 22))
    session = Session()
    session.handshake(sock)
    session.userauth_password(usernameSSH, passwordSSH)
    channel = session.open_session()
    channel.shell()
    channel.write("ifconfig\n")
    time.sleep(2)
    size, data = channel.read()
    channel.close()
    return data.decode("utf-8")

if __name__ == "__main__":
    app.run(threaded=True)
If I run my Docker image like this:
docker run -p 3134:80 dockapp
I'm able to connect with the external IP and port, but the MAC address I get at external-IP:port/ is wrong: it's the MAC address of the container. Also, I get an auth error when trying to connect to SSH.
If I run my image like this:
sudo docker run -p 3134:80 --network=host dockapp
I'm able to reach the API only by typing the local address of the Raspberry Pi, and I do get the correct MAC address, but I'm still not able to connect to the local SSH: I get an auth error. The program runs fine without Docker. Any solutions?
Also, adding host="0.0.0.0" to the Flask App hasn't helped at all
Dockerfile:
FROM cseelye/rpi-nginx-uwsgi-flask
MAINTAINER Carl Seelye <cseelye@gmail.com>
RUN apt-get update && apt-get -y install cmake libssl-dev
COPY exampleapp /app
RUN pip install -U -r /app/requirements.txt
I'm running a Python container and I want to connect to PostgreSQL on localhost. I've tried several methods, but none of them work. How can I do this? Thanks.
I have PostgreSQL running on port 5432, and I have created a database and granted a user access.
Docker run command:
docker run --name=python3 -v ${pwd}:/code -w /code python
Python code:
import psycopg2

def main():
    # Define our connection string
    conn_string = "host='localhost' dbname='testDB' user='test' password='test'"

    # print the connection string we will use to connect
    print("Connecting to database\n ->{}".format(conn_string))

    # get a connection; if a connection cannot be made an exception will be raised here
    conn = psycopg2.connect(conn_string)

    # conn.cursor will return a cursor object; you can use this cursor to perform queries
    cursor = conn.cursor()
    print("Connected!\n")

if __name__ == "__main__":
    main()
Error message:
could not connect to server: Connection refused
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
It depends on your host OS and your docker version.
I assume here your database is not running in a container itself, but rather on the host itself (localhost).
For instance, as mentioned in "From inside of a Docker container, how do I connect to the localhost of the machine?", with Docker for Mac v 17.06 and above (June 2017), you can connect to the special Mac-only DNS name docker.for.mac.localhost which will resolve to the internal IP address used by the host.
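Applied to the question's script, that just means swapping the host in the connection string. A minimal sketch (note that docker.for.mac.localhost only resolves from inside a Docker for Mac 17.06+ container, and the database name and credentials below are the ones from the question):

```python
# Build the question's psycopg2 connection string, pointed at the special
# Docker-for-Mac DNS name instead of localhost.
def conn_string(host):
    return "host='{}' dbname='testDB' user='test' password='test'".format(host)

mac_conn = conn_string('docker.for.mac.localhost')
print(mac_conn)
```

On the host itself (or in a --network=host container on Linux), conn_string('localhost') would be the right choice instead.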
On Linux directly, you would use the host mode, with an image having ifconfig installed:
docker run --network="host" -id <Docker image ID>
Reading that you're on Windows 10 and running PostgreSQL on the host, I advise you to run PostgreSQL in a container as well. It makes this much easier.
To connect the python container to the postgres container you'll need a docker network though. Let's call it postgres_backend.
docker network create postgres_backend
You can create the postgresql container with the following command. Just change the /path/to/store/data to a local directory in which you'd like to store the postgres data:
docker run --name postgres \
-e POSTGRES_PASSWORD=test \
-e POSTGRES_USER=test \
-d --restart always \
-v /path/to/store/data:/var/lib/postgresql/data \
--net postgres_backend \
postgres:9.6
Now your PostgreSQL container should be up and running :)
To connect your python container to it, you'll have to add a --net postgres_backend to your docker run command and change the host in your script to "postgres" (it's the name we gave the postgres container with --name).
If the python container can't find the host "postgres", try it with the IP shown when entering the command docker exec -ti postgres ip addr show.
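If you do need to fall back to the raw address, the container's IP can be picked out of that ip addr show output with a small regex. A sketch, using made-up sample output (real interface names and addresses will differ):

```python
import re

# Made-up sample of `docker exec -ti postgres ip addr show` output
sample = """1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
    inet 127.0.0.1/8 scope host lo
24: eth0@if25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 172.18.0.2/16 scope global eth0
"""

# Collect all IPv4 addresses, then take the first non-loopback one
ips = re.findall(r'inet (\d+\.\d+\.\d+\.\d+)/\d+', sample)
container_ip = next(ip for ip in ips if not ip.startswith('127.'))
print(container_ip)
```

That IP would then replace "postgres" as the host in the connection string, though the container name via Docker's embedded DNS is the more robust option since the IP can change on restart.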
To fix this bug:
First, it is normal that it does not work: PostgreSQL does not run in the same container as the application, so the host localhost:5432 does not exist there.
To fix it: in the properties file, instead of localhost:5432, use your IP address, like IP:5432, and add this line to pg_hba.conf:
host all all 0.0.0.0/0 md5
I want to connect to MySQL inside a Docker container. I have a running instance of MySQL in a Docker container. Since port 3306 is already busy on my host, I decided to use port 8081 for my MySQL container. Basically, I started my container with docker run -p 8080:80 -p 8081:3306 --name test test.
When I attach to my container, I can connect to MySQL without error. On the other hand, I have a web app on my host that can connect to MySQL on the exact same port (8081). This means that MySQL is working properly and is reachable from outside.
But in my Python script I cannot connect, and I am unable to connect with the MySQL CLI either. It seems like the port number is simply not interpreted: if I use, for example, mysql -P 8081 -u root -p, it tries to connect to the host's MySQL (port 3306) instead of the container's MySQL on port 8081 (when I enter the host MySQL credentials, it connects to the host MySQL). In my Python script, I used this: conn = MySQLdb.connect(host='localhost', port=8081, user='root', passwd=''). But this is not working either. In the MySQL man page, I see this:
· --port=port_num, -P port_num
The TCP/IP port number to use for the connection.
What am I doing wrong, please?
mysql --version:
mysql Ver 14.14 Distrib 5.7.18, for Linux (x86_64) using EditLine wrapper
Update
Here is my docker ps output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b370c91594d3 test "/bin/sh -c /start.sh" 14 hours ago Up 14 hours 8080/tcp, 0.0.0.0:8080->80/tcp, 0.0.0.0:8081->3306/tcp test
I can connect to my local Postgres DB in my web app, but NOT if I am running the web app inside of a Docker container.
Web app running inside of a Docker container
Postgres running in the Host machine
I am not sure if it is related to the Postgres connection settings or to the Docker network settings.
Here are my settings and commands:
Host:
OSX 10.11.6
PostgreSQL 9.6
Docker container
Docker 1.13.1
Docker-machine 0.9.0
Docker container OS: python:3.6.0-alpine
Python 3.6 + psycopg2==2.7
postgresql.conf:
listen_addresses = '*'
pg_hba.conf
host all all 127.0.0.1/32 trust
host all all 0.0.0.0/0 trust
host all all 172.17.0.0/24 trust
host all all 192.168.99.0/24 trust
With Docker network in HOST mode
docker run -i --net=host -h 127.0.0.1 -e POSTGRES_URI=postgresql://127.0.0.1:5432/db my/image
Error:
could not connect to server: Connection refused
Is the server running
on host "127.0.0.1" and accepting TCP/IP connections on port 5432?
With Docker network in BRIDGE mode
docker run -i --add-host=dockerhost:`docker-machine ip ${DOCKER_MACHINE}` -e POSTGRES_URI=postgresql://dockerhost:5432/db -p 8000:8000 -p 5432:5432 my/image
Error:
server closed the connection unexpectedly
This probably means the
server terminated abnormally before or while processing the request.
Any ideas?
There is a note about doing this in the docs:
I want to connect from a container to a service on the host
The Mac has a changing IP address (or none if you have no network access). Our current recommendation is to attach an unused IP to the lo0 interface on the Mac; for example: sudo ifconfig lo0 alias 10.200.10.1/24, and make sure that your service is listening on this address or 0.0.0.0 (ie not 127.0.0.1). Then containers can connect to this address.
How do you use Fabric to script commands on Vagrant-managed VMs?
I thought it was as simple as this example, but I can't get it to work.
Vagrant by itself is working fine. I can run:
vagrant init
vagrant up --provider=libvirt
vagrant ssh
and connect through ssh just fine. However, using the Fabric example, if I try and run:
fab vagrant uname
it fails to connect with the error:
[127.0.0.1:2222] Executing task 'test_dev_env'
[127.0.0.1:2222] run: uname -a
Fatal error: Low level socket error connecting to host 127.0.0.1 on port 2222: Connection refused (tried 1 time)
Underlying exception:
Connection refused
Aborting.
What is causing this error? As far as I know, vagrant ssh should be running the same ssh command as Fabric. But sure enough, even if I manually run the ssh command:
ssh -i /myproject/.vagrant/machines/default/libvirt/private_key -p 2222 vagrant@127.0.0.1
I also get the error:
ssh: connect to host 127.0.0.1 port 2222: Connection refused
What am I doing wrong?
Apparently, vagrant doesn't actually set up a port forwarder, so the only way to connect to the VM is to get its IP from vagrant ssh-config and then connect using that. So the correct vagrant Fabric task looks like:
import re

from fabric.api import env, local, task

@task
def vagrant():
    result = local('vagrant ssh-config', capture=True)
    hostname = re.findall(r'HostName\s+([^\n]+)', result)[0]
    port = re.findall(r'Port\s+([^\n]+)', result)[0]
    env.hosts = ['%s:%s' % (hostname, port)]
    env.user = re.findall(r'User\s+([^\n]+)', result)[0]
    env.key_filename = re.findall(r'IdentityFile\s+([^\n]+)', result)[0]
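For illustration, here is how those regexes pull the fields out of typical vagrant ssh-config output. The sample text below is made up; real values depend on your VM and provider:

```python
import re

# Made-up sample of what `vagrant ssh-config` prints
result = """Host default
  HostName 192.168.121.50
  User vagrant
  Port 22
  IdentityFile /myproject/.vagrant/machines/default/libvirt/private_key
"""

# Each regex captures the rest of the line after the keyword
hostname = re.findall(r'HostName\s+([^\n]+)', result)[0]
port = re.findall(r'Port\s+([^\n]+)', result)[0]
user = re.findall(r'User\s+([^\n]+)', result)[0]
key = re.findall(r'IdentityFile\s+([^\n]+)', result)[0]
print('%s@%s:%s' % (user, hostname, port))
```

Note that with a libvirt provider the HostName is typically the VM's own IP and the Port is 22, which is why hard-coding 127.0.0.1:2222 fails.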
By default Vagrant seems to forward tcp port 22 (ssh) to localhost port 4567.
To listen on port 2222 instead, include this in your Vagrantfile:
config.vm.network "forwarded_port", guest: 22, host: 2222, id: 'ssh'