Connect to mssql in docker container - python

I can't connect to MSSQL in a Docker container. Here is the code piece for connecting to MSSQL in Python:
def connectsql():
    engine = sqlalchemy.create_engine(
        "mssql+pymssql://service name")
    ms_sql_conn = engine.connect()
    df = pd.read_sql('select * from table name',
                     ms_sql_conn,
                     parse_dates=["rest_date"])
    ms_sql_conn.close()
    return df
When I run the script locally, the connection succeeds, but when I put this code in Docker, there is no connection. As I understand it, I need to write something in the environment section of the docker-compose file, but I don't understand what exactly, or whether I need to change the Python code for it. Dockerfile contents:
FROM python:3
RUN pip install --upgrade pip --default-timeout=100 future
WORKDIR /check
COPY . /check
RUN pip install -r requirements.txt
CMD [ "python", "/check/bot2.py" ]
docker-compose.yml contents:
version: '3.1'
services:
  bot2:
    image: first
    build: ./
    restart: always
I tried to register a connection in the environment section, but it didn't work out, and I also don't know whether I need to change the Python code in that case.

In your docker-compose file you need to add two lines:
extra_hosts:
  - "host.docker.internal:host-gateway"
version: '3.1'
services:
  bot2:
    image: first
    build: ./
    restart: always
    extra_hosts:
      - "host.docker.internal:host-gateway"

Related

How to run a command in docker-compose after a service starts?

I have searched but I couldn't find a solution for my problem. My docker-compose.yml file is below.
version: '2.1'
services:
  mongo:
    image: mongo_db
    build: mongo_image
    container_name: my_mongodb
    restart: always
    networks:
      - isolated_network
    ports:
      - "27017"
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=root_pw
    entrypoint: ["python3", "/tmp/script/get_api_to_mongodb.py", "&"]
networks:
  isolated_network:
So here I use a custom Dockerfile, shown below.
FROM mongo:latest
RUN apt-get update -y
RUN apt-get install python3-pip -y
RUN pip3 install requests
RUN pip3 install pymongo
RUN apt-get clean -y
RUN mkdir -p /tmp/script
COPY get_api_to_mongodb.py /tmp/script/get_api_to_mongodb.py
#CMD ["python3","/tmp/script/get_api_to_mongodb.py","&"]
Here I want to create a container that has MongoDB, and after the container is created I collect data using an API and send it to MongoDB. But when I run the Python script, MongoDB is not yet initialized. So I need to run my script after the container is created, right after MongoDB is initialized. Thanks in advance.
You should run this script as a separate container. It's not "part of the database", like an extension or plugin, but rather an ordinary client process that happens to connect to the database and that you want to run relatively early on. In general, if you're thinking about trying to launch a background process in a container, it's often a better approach to run foreground processes in two separate containers.
This setup means you can use a simpler Dockerfile that starts from an image with Python preinstalled:
FROM python:3.10
RUN pip install requests pymongo
WORKDIR /app
COPY get_api_to_mongodb.py .
CMD ["./get_api_to_mongodb.py"]
Then in your Compose setup, declare this as a second container alongside the first one. Since the script is in its own image, you can use the unmodified mongo image.
version: '2.4'
services:
  mongo:
    image: mongo:latest
    restart: always
    ports:
      - "27017"
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=root_pw
  loader:
    build: .
    restart: on-failure
    depends_on:
      - mongo
    # environment:
    #   - MONGO_HOST=mongo
    #   - MONGO_USERNAME=root
    #   - MONGO_PASSWORD=root_pw
Note that the loader will re-run every time you run docker-compose up -d. You also may have to wait for the database to do its initialization before you can run the loader process; see Docker Compose wait for container X before starting Y.
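The commented-out environment block above also hints at how the loader can find the database. A hypothetical sketch of the connection logic inside get_api_to_mongodb.py, where the variable names mirror that block and everything else is an assumption:

import os
from pymongo import MongoClient

# The Compose service name "mongo" doubles as the hostname on the
# Compose network; the defaults here mirror the compose file above
host = os.environ.get("MONGO_HOST", "mongo")
user = os.environ.get("MONGO_USERNAME", "root")
password = os.environ.get("MONGO_PASSWORD", "root_pw")

client = MongoClient(f"mongodb://{user}:{password}@{host}:27017/")
print(client.admin.command("ping"))  # fails until mongod accepts connections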
It's likely you have an existing Compose service for your real application:
version: '2.4'
services:
  mongo: { ... }
  app:
    build: .
    ...
If that image contains the loader script, then you can docker-compose run it. This launches a new temporary container, using most of the attributes from the Compose service declaration, but you provide an alternate command: and the ports: are ignored.
docker-compose run app ./get_api_to_mongodb.py
One might ideally like a workflow where first the database container starts; then once it's accepting requests, run the loader script as a temporary container; then once that's completed start the main application server. This is mostly beyond Compose's capabilities, though you can probably get close with a combination of extended depends_on: declarations and a healthcheck: for the database.
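A hedged sketch of that combination, reusing the 2.4 file format from above (which accepts the long depends_on form); the exact health probe command is an assumption about what the mongo image ships (newer tags include mongosh):

version: '2.4'
services:
  mongo:
    image: mongo:latest
    healthcheck:
      # probe command is an assumption; older images ship "mongo" instead
      test: ["CMD", "mongosh", "--quiet", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5
  loader:
    build: .
    depends_on:
      mongo:
        condition: service_healthy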

Docker Compose & Postgres Connection Refused

I know this question has been asked a million times, and I've read as many of the answers as I can find. They all seem to come to one conclusion (db hostname is the container service name).
I got it to work in my actual code base, but it started failing when I added an ffmpeg install to the Dockerfile. Nothing else had to be done; just installing FFmpeg via apt-get install -y ffmpeg would cause my Python code to get the connection refused message. If I removed the ffmpeg install line, my code would connect to the db just fine, although re-running the container would trigger the dreaded connection refused error.
So I created a quick sample app so I could post here and try to get some thoughts on what's going on. But now this sample code won't connect to the db no matter what I do.
So here goes - And thanks in advance for any help:
myapp.py
# import ffmpeg
import psycopg2

if __name__ == "__main__":
    print("Starting app...")
    # probe = ffmpeg.probe("131698249.mp4")
    # print(probe)
    try:
        connection = psycopg2.connect(
            user="docker", password="docker", host="db", port="5432", database="docker")
        cursor = connection.cursor()
        postgreSQL_select_Query = "select * from test_table"
        cursor.execute(postgreSQL_select_Query)
        print("Selecting rows from table using cursor.fetchall")
        records = cursor.fetchall()
        print("Print each row and its column values")
        for row in records:
            print(row)
        cursor.close()
        connection.close()
    except (Exception, psycopg2.Error) as error:
        print("Error while fetching data from PostgreSQL", error)
Dockerfile
FROM python:3
WORKDIR /usr/src/app
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
COPY . .
CMD ["python", "myapp.py"]
docker-compose.yml
version: '3.8'
services:
  db:
    container_name: pg_container
    image: postgres:14.1
    restart: always
    environment:
      POSTGRES_USER: docker
      POSTGRES_PASSWORD: docker
      POSTGRES_DB: docker
    ports:
      - "8000:5432"
    expose:
      - "5432"
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
      - pg_data:/var/lib/postgresql/data
  myapp:
    container_name: myapp
    build:
      context: .
      dockerfile: ./Dockerfile
    restart: "no"
    depends_on:
      - db
volumes:
  pg_data:
If I build and run the code: docker compose up --detach
Everything gets built and started. The database starts up and gets populated with the table and data from init.sql (not included here).
The app container starts and the code executes, but immediately fails with the Connection refused error.
However, if from my computer I run: psql -U docker -h localhost -p 8000 -d docker
it connects without any error and I can query the database as expected.
But the app in the container won't connect, and if I run the container with docker run -it myapp /bin/bash and then, from inside the container, run psql -U docker -h db -p 5432 -d docker, I get the Connection refused error.
If anyone has any thoughts or ideas I would be so grateful. I've been wrestling with this for three days now.
Looks like I've resolved it. I was sure I'd tried this before, but regardless, adding a networks section to the docker-compose.yml seems to have fixed the issue.
I also had to do the second docker-compose up -d as suggested by David Maze's comment, but the combination of the two seems to have fixed my issue.
Here's my updated docker-compose.yml for complete clarity:
version: '3.8'
services:
  postgres-db:
    container_name: pg_container
    image: postgres:14.1
    restart: always
    environment:
      POSTGRES_USER: docker
      POSTGRES_PASSWORD: docker
      POSTGRES_DB: docker
    ports:
      - "5500:5432"
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    networks:
      - dock-db-test
  myapp:
    container_name: myapp
    build:
      context: .
      dockerfile: ./Dockerfile
    restart: "no"
    depends_on:
      - postgres-db
    networks:
      - dock-db-test
networks:
  dock-db-test:
    external: false
    name: dock-db-test
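For what it's worth, the second docker-compose up -d works around a startup race: the app container starts before Postgres is accepting connections. A hedged sketch of expressing that ordering directly, using pg_isready (shipped in the postgres image) as a healthcheck; the long depends_on form shown here is honored by the Compose v2 CLI:

services:
  postgres-db:
    image: postgres:14.1
    healthcheck:
      # pg_isready exits 0 once the server accepts connections
      test: ["CMD-SHELL", "pg_isready -U docker -d docker"]
      interval: 5s
      timeout: 5s
      retries: 10
  myapp:
    depends_on:
      postgres-db:
        condition: service_healthy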

How to correctly dump and restore a PostgreSQL db from Docker

I'm stuck on an error when trying to back up and restore my database from a Docker Django app environment.
I first ran this command to back up my whole DB:
docker exec -t project_final-db-1 pg_dumpall -c -U fred2020 > ./db/dump.sql
And then I tried to restore with this command:
cat dump.sql | docker exec -i --user fred2020 catsitting-db-1 psql -U fred2020 -d postgres
I have two containers, one for my django app named catsitting-web-1 and one for my postgresql named catsitting-db-1.
I don't understand why it gives me that error; my db user is the same one that I specified in the Dockerfile.
Any clue?
For reference, here is my Docker configuration:
Dockerfile
FROM python:3.9
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
RUN pip install Pillow
COPY . /code/
docker-compose.yml
version: "3.9"
services:
db:
image: postgres
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=fred2020
- POSTGRES_PASSWORD=p*******DD
expose:
- "5432"
ports:
- 5432:5432
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
requirements.txt
Django>=3.0,<4.0
psycopg2-binary>=2.8
Pillow==8.1.0
And this is my process to migrate from laptop1 to laptop2:
Installation
Open a command line, go into a root directory, and run:
git clone https://github.com/XXXXXXXXXXXXXXXX
In the command line go into the root directory:
cd catsitting
In the same command line window, run:
docker-compose build --no-cache
In the command line window, you first need to apply the database migrations for Django; run:
docker-compose run web python manage.py migrate
Then you need to create any new migrations; run:
docker-compose run web python manage.py makemigrations
Then you need to import the database; run:
cat dump.sql | docker exec -i --user fred2020 catsitting-db-1 psql -U fred2020 -d postgres
(for dumping my DB I used docker exec -t project_final-db-1 pg_dumpall -c -U fred2020 > ./db/dump.sql)
You can now run:
docker-compose up
Is there something I got wrong?
I solved it!
It was a misconfiguration in the pg_hba.conf inside my Docker PostgreSQL.
I changed the value from scram-sha-256 to md5 and it works; now I can display my webapp with the current db!
Do you know how to specify md5 when I build my Docker environment? By default it puts scram-sha-256.
Do you know why, when I restore my dump in the new environment, the container's pg_hba.conf sets the authentication method to scram-sha-256 by default, so that to get my connection working I need to edit that file and set the authentication method to md5?
# TYPE  DATABASE  USER  ADDRESS  METHOD
local   all       all             md5
OK, sorry folks, I found the solution.
I've put this line in my docker-compose.yml:
environment:
  - POSTGRES_HOST_AUTH_METHOD=trust
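Note that trust disables password checks entirely. As for specifying md5 up front: the official postgres image documents POSTGRES_HOST_AUTH_METHOD and POSTGRES_INITDB_ARGS, both of which only take effect when initdb runs on an empty data directory. A hedged sketch:

environment:
  - POSTGRES_DB=postgres
  - POSTGRES_USER=fred2020
  - POSTGRES_PASSWORD=p*******DD
  # both variables are documented for the official postgres image and
  # apply only when the data directory is initialized fresh
  - POSTGRES_HOST_AUTH_METHOD=md5
  - POSTGRES_INITDB_ARGS=--auth-host=md5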

testing.postgresql command not found: initdb inside docker

Hi, I'm trying to write a unit test against a PostgreSQL database that uses SQLAlchemy and Alembic.
I'm also running PostgreSQL in Docker.
I'm following the docs of testing.postgresql (docs) to set up a temporary PostgreSQL instance for database testing, and wrote the following test:
def test_crawl_bds_obj(self):
    with testing.postgresql.Postgresql() as postgresql:
        engine.create_engine(postgresql.url())
        result = crawl_bds_obj(1, '/ban-can-ho-chung-cu-duong-mai-chi-tho-phuong-an-phu-prj-the-sun-avenue/nh-chu-gap-mat-tien-t-q2-thuoc-du-tphcm-pr22171626')
        self.assertEqual(result, 'sell')
crawl_bds_obj() basically saves information from a URL to the database using session.commit() and returns the type data.
When I tried to run the test, it returned the following error:
ERROR: test_crawl_bds_obj (tests.test_utils.TestUtils)
raise RuntimeError("command not found: %s" % name)
RuntimeError: command not found: initdb
In the docs it said that "testing.postgresql.Postgresql executes initdb and postgres on instantiation. On deleting Postgresql object, it terminates PostgreSQL instance and removes temporary directory."
So why am I getting the initdb error when I have already installed testing.postgresql and have PostgreSQL running in Docker?
EDIT:
I also set my data path, but it still returns the same error.
Dockerfile:
FROM python:slim-jessie
ADD requirements.txt /app/requirements.txt
ADD . /app/
WORKDIR /app/
RUN pip install -r requirements.txt
docker-compose:
postgres:
  image: postgres
  restart: always
  environment:
    - POSTGRES_USER=${POSTGRES_DEFAULT_USER}
    - POSTGRES_PASSWORD=${POSTGRES_DEFAULT_PASSWORD}
    - POSTGRES_DB=${POSTGRES_DEFAULT_DB}
    - POSTGRES_PORT=${POSTGRES_DEFAULT_PORT}
  volumes:
    - ./data/postgres:/var/lib/postgresql/data
pgadmin:
  image: dpage/pgadmin4
  environment:
    PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL}
    PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD}
  volumes:
    - ./data/pgadmin:/root/.pgadmin
  ports:
    - "${PGADMIN_PORT}:80"
  logging:
    driver: none
  restart: unless-stopped
worker:
  build:
    context: .
    dockerfile: Dockerfile
  command: "watchmedo auto-restart --recursive -p '*.py'"
  environment:
    - C_FORCE_ROOT=1
  volumes:
    - .:/app
  links:
    - rabbit
  depends_on:
    - rabbit
    - postgres
testing.postgresql.Postgresql(copy_data_from='data/postgres:/var/lib/postgresql/data')
You need to run this command as the postgres user, not root, so you may try running your commands using:
runuser -l postgres -c 'command'
or
su -c "command" postgres
or add USER postgres to your Dockerfile.
Also check the requirements:
Python 2.6, 2.7, 3.2, 3.3, 3.4, 3.5
pg8000 1.10
UPDATE
To make copy_data_from work, you should generate the folder first:
FROM python:slim-jessie
ADD requirements.txt /app/requirements.txt
ADD . /app/
WORKDIR /app/
RUN pip install -r requirements.txt
RUN /PATH/TO/initdb -D myData -U postgres
and then add this:
pg = testing.postgresql.Postgresql(copy_data_from='myData')
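Underlying all of this: testing.postgresql searches the PATH of the test process for initdb and postgres, so the worker image itself needs the PostgreSQL server binaries; the separate postgres container does not help. A hedged sketch for the Debian-based image above, where the package name and the versioned bin path are assumptions about Debian's packaging:

FROM python:slim-jessie
# install the server binaries inside the test image itself
RUN apt-get update && apt-get install -y postgresql && rm -rf /var/lib/apt/lists/*
# Debian puts the server binaries outside the default PATH; the version
# number is an assumption (jessie shipped PostgreSQL 9.4)
ENV PATH="/usr/lib/postgresql/9.4/bin:${PATH}"
ADD requirements.txt /app/requirements.txt
ADD . /app/
WORKDIR /app/
RUN pip install -r requirements.txt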

Run a docker container from an existing container using docker-py

I have a Docker container which runs a Flask application. When Flask receives an HTTP request, I would like to trigger the execution of a new ephemeral Docker container which shuts down once it completes what it has to do.
I have read Docker-in-Docker should be avoided so this new container should be run as a sibling container on my host and not within the Flask container.
What would be the solution to do this with docker-py?
We do things like this by mounting docker.sock as a shared volume between the host machine and the container. This allows the container to send commands to the host, such as docker run.
This is an example from our CI system:
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
Answering my own question. Here is a complete setup which works.
In one folder, create the following files:
requirements.txt
Dockerfile
docker-compose.yml
api.py
requirements.txt
docker==3.5.0
flask==1.0.2
Dockerfile
FROM python:3.7-alpine3.7
# Project files
ARG PROJECT_DIR=/srv/api
RUN mkdir -p $PROJECT_DIR
WORKDIR $PROJECT_DIR
COPY requirements.txt ./
# Install Python dependencies
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
docker-compose.yml
Make sure to mount docker.sock in volumes as mentioned in the previous answer above.
version: '3'
services:
  api:
    container_name: test
    restart: always
    image: test
    build:
      context: ./
    volumes:
      - ./:/srv/api/
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      FLASK_APP: api.py
    command: ["flask", "run", "--host=0.0.0.0"]
    ports:
      - 5000:5000
api.py
from flask import Flask
import docker

app = Flask(__name__)

@app.route("/")
def hello():
    client = docker.from_env()
    client.containers.run('alpine', 'echo hello world', detach=True, remove=True)
    return "Hello World!"
Then open your browser and navigate to http://0.0.0.0:5000/
It will trigger the execution of the alpine container. If you don't already have the alpine image, it will take a bit of time the first time because Docker will automatically download the image.
The argument detach=True executes the container asynchronously, so that Flask does not wait for the process to finish before returning its response.
The argument remove=True tells Docker to remove the container once its execution completes.
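As an aside, if the caller needs the container's output rather than fire-and-forget behavior, docker-py can also run synchronously: without detach=True, run() blocks until the container exits and returns its logs as bytes. A small sketch:

import docker

client = docker.from_env()
# blocks until the container exits, then returns its stdout as bytes
output = client.containers.run('alpine', 'echo hello world', remove=True)
print(output.decode())  # "hello world"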
