Access host Redis database from a Docker container - Python

I am trying to connect to the host Redis database from my Docker container.
In my Dockerfile, I have redis as a requirement, which gets installed (pip install redis), and the image is built using that Dockerfile.
After that I instantiate the container using the following command:
sudo docker run -p 6543:6543 your_image_name
My app.py is the following:
from wsgiref.simple_server import make_server
from pyramid.config import Configurator
from pyramid.response import Response
import redis

def hello(request):
    ds_id = '4000'
    r_server = redis.Redis(unix_socket_path='/tmp/redis.sock')
    result = r_server.set('foo', '11')
    return Response(result)
The problem is that when Redis is installed this way, the redis.sock file is never generated, so I get an error when I try to connect.
Dockerfile:
FROM centos:latest
# load base packages w/ yum
RUN yum install -y git gcc libffi-devel openssl-devel python-devel postgresql-devel libxml2-devel libxslt-devel
COPY ./requirements.txt .
RUN curl https://bootstrap.pypa.io/get-pip.py >get-pip.py && \
python get-pip.py && \
rm get-pip.py &&\
pip install -r requirements.txt
EXPOSE 6543
WORKDIR /app
COPY app /app
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]
requirements.txt:
pyramid
cornice
pyramid_chameleon
pyramid_beaker
pyramid_redis_sessions
pyRFC3339
oauthlib==0.7.2
oauth2client==1.5.2
pycrypto
PyOpenSSL
pymongo
SQLAlchemy
psycopg2
lxml
gspread
jira
waitress
paste
PasteDeploy
redis
Is there any other way to connect to the host Redis database?

You need to access the host's localhost loopback device; what you do is described in https://gist.github.com/EugenMayer/3019516e5a3b3a01b6eac88190327e7c#file-gistfile1-txt
You create this file at
/Library/LaunchDaemons/docker.loopback.alias.plist
sudo chown root:staff /Library/LaunchDaemons/docker.loopback.alias.plist
on your OSX host. Then you restart your Mac; after that, you can access the host loopback device using the IP 10.254.254.254 from inside the container. So from the container you can now configure Redis to connect to 10.254.254.254:6543, which will then use your OSX-host Redis.
Theory:
You need to create this loopback alias because you cannot use localhost in the container: that would pick up the loopback device of the container itself (its own loopback device), not the loopback device of your host.
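Once the alias exists, only the host part of the client configuration changes. A minimal sketch (the REDIS_HOST/REDIS_PORT environment variables are hypothetical, not from the question, and 6379 is assumed as the Redis default port):

```python
import os

# Hypothetical env vars so the same image can run against different hosts;
# the fallback is the loopback alias described above.
host = os.environ.get("REDIS_HOST", "10.254.254.254")
port = int(os.environ.get("REDIS_PORT", "6379"))  # assumed Redis default port

# URL form accepted by redis-py's redis.Redis.from_url()
url = f"redis://{host}:{port}/0"
print(url)
```

Handing the resulting URL to redis.Redis.from_url(url) gives a client that talks to the host's Redis through the alias instead of the container's own loopback.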

If you have a redis instance running on the host machine and just want the container to connect to that redis instance, then you just need to mount the socket file inside the container. For example, if the location of the socket file on the host machine is /var/run/redis/redis.sock and you want the location of the socket file inside the container to be /tmp/redis.sock, then launch the container like this:
sudo docker run \
-p 6543:6543 \
-v /var/run/redis/redis.sock:/tmp/redis.sock \
your_image_name app.py
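The reason the bind mount works is that a unix socket is addressed by filesystem path, so mounting the host's socket file gives the container a working endpoint at whatever path you choose. A toy sketch of the mechanism, with a stub server standing in for Redis (stdlib only, not the real Redis protocol handling):

```python
import os
import socket
import tempfile
import threading
import time

def serve_once(path):
    # Stub standing in for redis-server: accept one client, answer PING.
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)  # creates the socket file, like redis.sock
    srv.listen(1)
    conn, _ = srv.accept()
    if conn.recv(64).startswith(b"PING"):
        conn.sendall(b"+PONG\r\n")
    conn.close()
    srv.close()

def ping(path, attempts=50):
    # Connect by path, the same way redis.Redis(unix_socket_path=...) does.
    for _ in range(attempts):
        cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            cli.connect(path)
            break
        except OSError:
            cli.close()
            time.sleep(0.05)
    cli.sendall(b"PING\r\n")
    reply = cli.recv(64)
    cli.close()
    return reply

if __name__ == "__main__":
    sock_path = os.path.join(tempfile.mkdtemp(), "redis.sock")
    t = threading.Thread(target=serve_once, args=(sock_path,))
    t.start()
    print(ping(sock_path))  # b'+PONG\r\n'
    t.join()
```

With the -v flag above, the path the client connects to inside the container is just a different name for the same socket file the host's redis-server created.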
If you want to set up Redis inside the container and run the Python app, follow this:
Create a file start.sh with the contents:
#!/bin/bash
/usr/bin/redis-server >/dev/null 2>&1 &
/usr/bin/python $1
Make the following modification to the file app.py:
r_server = redis.StrictRedis(host='localhost', port=6379, db=0)
Make the following modifications to your Dockerfile:
Add the package epel-release:
RUN yum install -y git gcc libffi-devel openssl-devel python-devel postgresql-devel libxml2-devel libxslt-devel epel-release
Install redis
RUN yum install -y redis
Copy the script created above to /app directory:
COPY start.sh /app
RUN chmod +x /app/start.sh
Change entry point to start.sh
ENTRYPOINT [ "/app/start.sh" ]
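Because start.sh backgrounds redis-server, app.py can start before Redis is accepting connections. A small guard at the top of app.py avoids that race; this is a sketch, assuming the in-container setup above with Redis on localhost:6379:

```python
import socket
import time

def wait_for_port(host, port, timeout=10.0):
    """Block until something is listening on host:port, or raise."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            socket.create_connection((host, port), timeout=1.0).close()
            return
        except OSError:
            if time.monotonic() > deadline:
                raise TimeoutError(f"nothing listening on {host}:{port}")
            time.sleep(0.2)

# e.g. call wait_for_port('localhost', 6379) before creating
# redis.StrictRedis(host='localhost', port=6379, db=0)
```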

Related

Can't connect to SQL server from Container

My Python container is throwing the error below when trying to connect to my SQL DB hosted on a server:
mariadb.OperationalError: Can't connect to server on 'xxxxxxxxxxx.jcloud-ver-jpc.ik-server.com' (115)
I am trying to run my container from my server as well. If I run the exact same container from my machine, I can connect to the SQL DB.
I am new to Docker, so just for info, here is my Dockerfile:
FROM python:3.10-slim-buster
WORKDIR /app
COPY alpha_gen alpha_gen
COPY poetry.lock .
COPY pyproject.toml .
# install basic utils
RUN apt-get update
RUN apt-get install curl -y
RUN apt-get install gcc -y
# install MariaDB connector
RUN apt install wget -y
RUN wget https://downloads.mariadb.com/MariaDB/mariadb_repo_setup
RUN chmod +x mariadb_repo_setup
RUN ./mariadb_repo_setup --mariadb-server-version="mariadb-10.6"
RUN apt install libmariadb3 libmariadb-dev -y
# install poetry
RUN curl -sSL https://install.python-poetry.org | python3 -
ENV PATH="${PATH}:/root/.local/bin"
RUN poetry config virtualenvs.in-project true
# install dependencies
RUN poetry install
CMD poetry run python alpha_gen/main.py --load_pre_process
Any ideas?
Problem solved. Apparently there is a private port to use for communication from inside the server and a public port for communication from outside the server. I was using the public one, so it was not working.
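When the same container works from one machine and not another, a quick TCP probe can tell a blocked or wrong port (which times out, like error 115 above) from a closed one (connection refused) before you start suspecting credentials. A minimal sketch:

```python
import socket

def probe(host, port, timeout=3.0):
    """Return None if host:port accepts TCP connections, else the OSError."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return None
    except OSError as exc:
        return exc
```

Probing both candidate ports of the database host from inside the container would show which one actually accepts connections.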

Unable to invoke flask server on docker

Dockerfile
FROM python:3.6.8
COPY . /app
WORKDIR /app
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update -y
RUN apt install libgl1-mesa-glx -y
RUN apt-get install 'ffmpeg'\
'libsm6'\
'libxext6' -y
RUN pip3 install --upgrade pip
RUN pip3 install opencv-python==4.3.0.38
RUN pip3 install -r requirements.txt
EXPOSE 80
CMD ["python3", "server.py"]
requirements.txt
Flask==0.12
Werkzeug==0.16.1
boto3==1.14.40
torch
torchvision==0.7.0
numpy==1.15.4
sklearn==0.0
scipy==1.2.1
scikit-image==0.14.2
pandas==0.24.2
server.py (Flask Server)
from flask import Flask

app = Flask(__name__)

@app.route('/invoke', methods=['POST'])
def handlePostRequest():
    # insert some log statement here
    return ''

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=80)
Commands which I run
docker build -t test .
docker run -p 5000:5000 test
Then I invoked the Docker container using a POST request in Postman on 0.0.0.0/invoke, but I am getting Error: connect ECONNREFUSED 0.0.0.0:80
Please let me know what I did wrong here.
It looks like you are binding to all interfaces with app.run(host="0.0.0.0", port=80)
And it looks like you are mapping host port 5000 to container port 5000
-p 5000:5000
If your process is listening on port 80, you should change the mapping to be
-p 5000:80
And then get the IP address of the host where your container is running, and you should be able to:
curl <ip>:5000
And that will send traffic to your process in your container listening on port 80

Correct way for deploying dbt with docker and cloud run

I'm trying to deploy dbt on a Google Cloud Run service with a Docker container, following david vasquez and dbt Docker images. However, when trying to deploy the built image to Cloud Run, I'm getting an error.
ERROR: (gcloud.run.deploy) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
This is my dockerfile
FROM python:3.8.1-slim-buster
RUN apt-get update && apt-get dist-upgrade -y && apt-get install -y --no-install-recommends git software-properties-common make build-essential ca-certificates libpq-dev && apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
COPY requirements/requirements.0.17.0rc4.txt ./requirements.0.17.0rc4.txt
RUN pip install --upgrade pip setuptools
RUN pip install -U pip
RUN pip install dbt==0.17.0
RUN pip install --requirement ./requirements.0.17.0rc4.txt
RUN useradd -mU dbt_user
ENV PYTHONIOENCODING=utf-8
ENV LANG C.UTF-8
ENV PORT = 8080
ENV HOST = 0.0.0.0
WORKDIR /usr/app
VOLUME /usr/app
USER dbt_user
CMD ['dbt', 'run']
I understand the health check fails because it can't find a port to listen on, even though I specify one in my ENV.
Can anyone help me with a solution? Thanks in advance.
According to the documentation one of the requirements to deploy an application on Cloud Run is to listen requests on 0.0.0.0 and expose a port:
The container must listen for requests on 0.0.0.0 on the port to which requests are sent. By default, requests are sent to 8080, but you can configure Cloud Run to send requests to the port of your choice.
dbt is a command line tool, which means it doesn't expose any port; hence when Cloud Run verifies that the deployed container is listening, it fails with the mentioned error.

How do I link a dockerized python script to a mysql docker container to load data on the same host?

I have two Docker containers running on an Ubuntu 16.04 machine: one container runs a MySQL server, and the other holds a dockerized Python script set up as a cron job that runs every minute and loads data into MySQL. How can I connect the two so that the Python script loads data into the MySQL container? I have an error showing up:
Here are my relevant commands:
MYSQL container runs without issue:
docker run -p 3306:3306 -e MYSQL_ROOT_PASSWORD=yourPassword --name icarus -d mysql_docker_image
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
927e50ca0c7d mysql_docker_image "docker-entrypoint.s…" About an hour ago Up About an hour 0.0.0.0:3306->3306/tcp, 33060/tcp icarus
Second container holds cron and python script:
#build the container without issue
sudo docker run -t -i -d docker-cron
#exec into it to check logs
sudo docker exec -i -t container_id /bin/bash
#check logs
root@b149b5e7306d:/# cat /var/log/cron.log
Error:
I have the following error showing up, which I believe has to do with a wrong host address:
Caught this error: OperationalError('(pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'localhost\' ([Errno 99] Cannot assign requested address)")',)
Python Script:
from traffic.data import opensky
from sqlalchemy import create_engine
#from sqlalchemy_utils import database_exists, create_database
import sqlalchemy
import sys
import gc

#connection and host information
host = 'localhost'
db = 'icarus'
engine = create_engine('mysql+pymysql://root:password@' + host + ':3306/' + db) #create engine connection
version = sys.version_info[0]

#functions to upload data
def upload(df, table_name):
    df.to_sql(table_name, con=engine, index=False, if_exists='append')
    engine.dispose()
    print('SUCCESSFULLY LOADED DATA INTO STAGING...')
#pull data drom api
sv = opensky.api_states()
final_df = sv.data
#quick column clean up
print(final_df.head())
final_df=final_df.rename(columns = {'timestamp':'time_stamp'})
#insert data to staging
try:
    upload(final_df, 'flights_stg')
except Exception as error:
    print('Caught this error: ' + repr(error))
del(final_df)
gc.collect()
I'm assuming the error is the use of 'localhost' as my address? How would I go about resolving something like this?
More information:
MYSQL Dockerfile:
FROM mysql
COPY init.sql /docker-entrypoint-initdb.d
Python Dockerfile:
FROM ubuntu:latest
WORKDIR /usr/src/app
#apt-get install -y build-essential -y python python-dev python-pip python-virtualenv libmysqlclient-dev curl&& \
RUN \
apt-get update && \
apt-get install -y build-essential -y git -y python3.6 python3-pip libproj-dev proj-data proj-bin libgeos++-dev libmysqlclient-dev python-mysqldb curl&& \
rm -rf /var/lib/apt/lists/*
COPY requirements.txt ./
RUN pip3 install --upgrade pip && \
pip3 install --no-cache-dir -r requirements.txt
RUN pip3 install --upgrade setuptools
RUN pip3 install git+https://github.com/xoolive/traffic
COPY . .
# Install cron
RUN apt-get update
RUN apt-get install cron
# Add crontab file in the cron directory
ADD crontab /etc/cron.d/simple-cron
# Add shell script and grant execution rights
ADD script.sh /script.sh
RUN chmod +x /script.sh
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/simple-cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Run the command on container startup
CMD cron && tail -f /var/log/cron.log
Can you share your Dockerfile or compose file for the MySQL container? Yes, the problem is related to using localhost as the host. You must use the Docker service name as the host; in Docker, the service name works as DNS. For example, if your docker-compose looks like:
services:
  mydb:
    image: mysql:5.7
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_DATABASE: root
You must use mydb instead of localhost
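In the Python script, that only changes the host part of the connection URL. A sketch against the compose example above (root/root are the example credentials from that snippet; the resulting URL is what you would hand to sqlalchemy.create_engine):

```python
# 'mydb' is resolved by Docker's embedded DNS to the MySQL container;
# everything else in the URL stays as before.
host = "mydb"  # compose service name, not 'localhost'
db = "icarus"
url = f"mysql+pymysql://root:root@{host}:3306/{db}"
print(url)  # mysql+pymysql://root:root@mydb:3306/icarus
```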
Looks like a user-defined network bridge was the way to go here, as recommended. Solved the issue by:
docker network create my-net
docker create --name mysql \
--network my-net \
--publish 3306:3306 \
-e MYSQL_ROOT_PASSWORD=password \
mysql:latest
docker create --name docker-cron \
--network my-net \
docker-cron:latest
docker start each of them, and using the --name as the host worked perfectly.

Port mapping in Docker

I created a docker for a sample python pyramid app. My dockerfile is this:
FROM ubuntu:16.04
RUN apt-get update -y && \
apt-get install -y python-pip python-dev curl && \
pip install --upgrade pip setuptools
WORKDIR /app
COPY . /app
EXPOSE 6543
RUN pip install -e .
ENTRYPOINT [ "pserve" ]
CMD [ "development.ini" ]
My build command is this:
docker build -t pyramid_app:latest .
My run command is this:
docker run -d -p 6543:6543 pyramid_app
When I try to access http://localhost:6543 I get an error
Failed to load resource: net::ERR_SOCKET_NOT_CONNECTED
When I curl inside the machine it works fine.
It would be great if someone could help me figure out why my port mapping isn't working.
Thanks.
In your pserve config, change
[server:main]
listen = 127.0.0.1:6543
to
[server:main]
listen = *:6543
Otherwise the web server will only accept connections from inside the Docker container itself.
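The difference is visible with plain sockets: a listener bound to 127.0.0.1 only exists on the container's own loopback device, while one bound to 0.0.0.0 (which is what listen = *:6543 requests) is reachable through Docker's port mapping. A quick illustration:

```python
import socket

# Loopback-only: invisible to traffic forwarded by `docker run -p ...`
loopback_only = socket.socket()
loopback_only.bind(("127.0.0.1", 0))

# All interfaces: this is what `listen = *:6543` asks pserve to do
all_interfaces = socket.socket()
all_interfaces.bind(("0.0.0.0", 0))

print(loopback_only.getsockname()[0])   # '127.0.0.1'
print(all_interfaces.getsockname()[0])  # '0.0.0.0'

loopback_only.close()
all_interfaces.close()
```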

Categories

Resources