testing.postgresql command not found: initdb inside docker - python

Hi, I'm trying to write a unit test against a PostgreSQL database that uses SQLAlchemy and Alembic.
I'm also running PostgreSQL in Docker.
I'm following the testing.postgresql docs to set up a temporary PostgreSQL instance for database testing, and wrote the following test:
def test_crawl_bds_obj(self):
    with testing.postgresql.Postgresql() as postgresql:
        engine.create_engine(postgresql.url())
        result = crawl_bds_obj(1, '/ban-can-ho-chung-cu-duong-mai-chi-tho-phuong-an-phu-prj-the-sun-avenue/nh-chu-gap-mat-tien-t-q2-thuoc-du-tphcm-pr22171626')
        self.assertEqual(result, 'sell')
The crawl_bds_obj() function basically saves information from a URL to the database using session.commit() and returns the type data.
When I try to run the test, it returns the following error:
ERROR: test_crawl_bds_obj (tests.test_utils.TestUtils)
raise RuntimeError("command not found: %s" % name)
RuntimeError: command not found: initdb
The docs say that "testing.postgresql.Postgresql executes initdb and postgres on instantiation. On deleting the Postgresql object, it terminates the PostgreSQL instance and removes the temporary directory."
So why am I getting the initdb error when I have already installed testing.postgresql and have PostgreSQL running in Docker?
EDIT:
I also set my data path (see the copy_data_from call after the compose file), but it still returns the same error.
Dockerfile:
FROM python:slim-jessie
ADD requirements.txt /app/requirements.txt
ADD . /app/
WORKDIR /app/
RUN pip install -r requirements.txt
docker-compose:
postgres:
  image: postgres
  restart: always
  environment:
    - POSTGRES_USER=${POSTGRES_DEFAULT_USER}
    - POSTGRES_PASSWORD=${POSTGRES_DEFAULT_PASSWORD}
    - POSTGRES_DB=${POSTGRES_DEFAULT_DB}
    - POSTGRES_PORT=${POSTGRES_DEFAULT_PORT}
  volumes:
    - ./data/postgres:/var/lib/postgresql/data
pgadmin:
  image: dpage/pgadmin4
  environment:
    PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL}
    PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD}
  volumes:
    - ./data/pgadmin:/root/.pgadmin
  ports:
    - "${PGADMIN_PORT}:80"
  logging:
    driver: none
  restart: unless-stopped
worker:
  build:
    context: .
    dockerfile: Dockerfile
  command: "watchmedo auto-restart --recursive -p '*.py'"
  environment:
    - C_FORCE_ROOT=1
  volumes:
    - .:/app
  links:
    - rabbit
  depends_on:
    - rabbit
    - postgres
testing.postgresql.Postgresql(copy_data_from='data/postgres:/var/lib/postgresql/data')

You need to run this command as the postgres user, not root, so you may try running your commands using:
runuser -l postgres -c 'command'
or
su -c "command" postgres
or add USER postgres to your Dockerfile.
Also check the requirements:
Python 2.6, 2.7, 3.2, 3.3, 3.4, 3.5
pg8000 1.10
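Note also that testing.postgresql spawns its own PostgreSQL server inside the container that runs the tests, so the separate postgres service in your docker-compose file doesn't help: the initdb binary has to exist in the worker image itself. A minimal sketch of a Dockerfile that adds it (assuming the Debian-based python:slim-jessie image; the package name and install paths are assumptions):
FROM python:slim-jessie
# Install the PostgreSQL server packages so initdb and postgres are
# available to testing.postgresql inside this container
RUN apt-get update && apt-get install -y postgresql && rm -rf /var/lib/apt/lists/*
ADD requirements.txt /app/requirements.txt
ADD . /app/
WORKDIR /app/
RUN pip install -r requirements.txt
# initdb refuses to run as root, so switch to the postgres user that the package creates
USER postgres
On Debian the binaries land in /usr/lib/postgresql/<version>/bin; testing.postgresql should pick them up from its default search paths, but you can also add that directory to PATH explicitly if it isn't found.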
UPDATE
To make copy_data_from work, you should generate the folder first:
FROM python:slim-jessie
ADD requirements.txt /app/requirements.txt
ADD . /app/
WORKDIR /app/
RUN pip install -r requirements.txt
RUN /PATH/TO/initdb -D myData -U postgres
and then add this:
pg = testing.postgresql.Postgresql(copy_data_from='myData')
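Incidentally, the testing.postgresql docs also describe a factory that caches the initialized database between tests, so initdb only runs once per test session; a sketch based on that documented API:
import testing.postgresql

# Cache the initialized database so initdb runs only once per test session
Postgresql = testing.postgresql.PostgresqlFactory(cache_initialized_db=True)

def tearDownModule():
    # Clear the cached database when the tests in this module finish
    Postgresql.clear_cache()
Each test can then use "with Postgresql() as postgresql:" exactly like testing.postgresql.Postgresql() above.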

Related

Connect to mssql in docker container

Can't connect to MSSQL in a Docker container. Code piece for connecting to MSSQL in Python:
import sqlalchemy
import pandas as pd

def connectsql():
    engine = sqlalchemy.create_engine(
        "mssql+pymssql://service name")
    ms_sql_conn = engine.connect()
    df = pd.read_sql('select * from table name',
                     ms_sql_conn,
                     parse_dates=["rest_date"])
    ms_sql_conn.close()
    return df
When I run the script locally, the connection is successful, but when I put this code in Docker, there is no connection. As I understand it, I need to add something to the environment in the docker-compose file, but I don't understand what exactly, and do I need to change the Python code for this? Dockerfile contents:
FROM python:3
RUN pip install --upgrade pip --default-timeout=100 future
WORKDIR /check
COPY . /check
RUN pip install -r requirements.txt
CMD [ "python", "/check/bot2.py" ]
docker-compose content:
version: '3.1'
services:
  bot2:
    image: first
    build: ./
    restart: always
I tried to set the connection in the environment section, but it didn't work out, and I also don't know if I need to change the Python code.
In your docker-compose file you need to add these two lines:
extra_hosts:
  - "host.docker.internal:host-gateway"
version: '3.1'
services:
  bot2:
    image: first
    build: ./
    restart: always
    extra_hosts:
      - "host.docker.internal:host-gateway"

how can I save docker's database data locally on my server?

I'm running an app inside a Docker container. That app uses the Docker Postgres image to save data in a database. I need to keep a local copy of this database's data to avoid losing data if the container is removed or purged somehow, so I am using volumes inside my docker-compose.yaml file, but the local DB folder is always empty, so whenever I move the container or purge it the data is lost.
docker-compose.yaml
version: "2"
services:
db:
image: postgres
volumes:
- ./data/db:/var/lib/postgresql/data
ports:
- '5433:5432'
restart: always
command: -p 5433
environment:
- POSTGRES_DB=mydata
- POSTGRES_USER=mydata
- POSTGRES_PASSWORD=mydata#
- PGDATA=/tmp
django-apache2:
build: .
container_name: rolla_django
restart: always
environment:
- POSTGRES_DB=mydata
- POSTGRES_USER=mydata
- POSTGRES_PASSWORD=mydata#
- PGDATA=/tmp
ports:
- '4002:80'
- '4003:443'
volumes:
- ./www/:/var/www/html
- ./www/demo_app/static_files:/var/www/html/demo_app/static_files
- ./www/demo_app/media:/var/www/html/demo_app/media
# command: sh -c 'python manage.py migrate && python manage.py loaddata db_backkup.json && apache2ctl -D FOREGROUND'
command: sh -c 'wait-for-it db:5433 -- python manage.py migrate && apache2ctl -D FOREGROUND'
depends_on:
- db
As you can see, I used ./data/db:/var/lib/postgresql/data, but locally the ./data/db directory is always empty!
NOTE
When I use docker volume list, it shows no volumes at all.
According to your setup, the data is in /tmp (PGDATA=/tmp), so nothing is ever written to /var/lib/postgresql/data. Remove this and your volume mapping should work. (A bind mount like ./data/db also never shows up in docker volume list; that command only lists named volumes.)
Also, command: -p 5433 makes Postgres listen on port 5433 inside the container, while your mapping '5433:5432' still targets 5432. So if you can't reach the database, it might be because of that.
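A minimal corrected db service might look like this (a sketch: PGDATA is dropped and the custom -p removed, so the server stays on its default port 5432, which both the volume path and the port mapping already assume):
db:
  image: postgres
  volumes:
    # Bind-mount the default data directory so the cluster survives container removal
    - ./data/db:/var/lib/postgresql/data
  ports:
    - '5433:5432'   # host port 5433 -> container's default 5432
  restart: always
  environment:
    - POSTGRES_DB=mydata
    - POSTGRES_USER=mydata
    - POSTGRES_PASSWORD=mydata#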

How to dump and restore correctly a postgresql db from docker

I'm stuck with an error when trying to back up and restore my database from a Docker Django app environment.
I first ran this command to back up my whole DB:
docker exec -t project_final-db-1 pg_dumpall -c -U fred2020 > ./db/dump.sql
And then tried to restore with this command:
cat dump.sql | docker exec -i --user fred2020 catsitting-db-1 psql -U fred2020 -d postgres
I have two containers, one for my Django app named catsitting-web-1 and one for my PostgreSQL named catsitting-db-1.
I don't understand why it gives me that error; my DB user is the same one I specified in the Dockerfile.
Any clue?
For reference, here is my Docker configuration:
Dockerfile
FROM python:3.9
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
RUN pip install Pillow
COPY . /code/
docker-compose.yml
version: "3.9"
services:
db:
image: postgres
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=fred2020
- POSTGRES_PASSWORD=p*******DD
expose:
- "5432"
ports:
- 5432:5432
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
requirements.txt
Django>=3.0,<4.0
psycopg2-binary>=2.8
Pillow==8.1.0
And here is my process to migrate from laptop1 to laptop2:
Installation
Open a command line, go to a root directory, and run:
git clone https://github.com/XXXXXXXXXXXXXXXX
In the command line, go into the root directory:
cd catsitting
In the same command line window, run:
docker-compose build --no-cache
In the command line window, you first need to migrate the database for Django; run:
docker-compose run web python manage.py migrate
Then you need to apply the migrations; run:
docker-compose run web python manage.py makemigrations
Then you need to import the database; run:
cat dump.sql | docker exec -i --user fred2020 catsitting-db-1 psql -U fred2020 -d postgres
(for dumping my DB I used docker exec -t project_final-db-1 pg_dumpall -c -U fred2020 > ./db/dump.sql)
You can now run:
docker-compose up
Is there something I'm getting wrong?
I solved it!
It was a misconfiguration in the pg_hba.conf inside my Docker PostgreSQL.
I changed the value from scram-sha-256 to md5, and it works; now I can display my webapp with the current DB!
Do you know how to specify md5 when I build my Docker environment? By default it uses scram-sha-256.
Do you know why, when I restore my dump in the new environment, the container's pg_hba.conf sets the authentication method to scram-sha-256 by default, and to make my connection work I need to edit that file and set the authentication method to md5?
# TYPE DATABASE USER ADDRESS METHOD
local all all md5
OK sorry folks, I found the solution.
I put this line in my docker-compose.yml:
environment:
  - POSTGRES_HOST_AUTH_METHOD=trust
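For what it's worth, POSTGRES_HOST_AUTH_METHOD=trust disables password authentication entirely. If you'd rather keep password auth but stay on md5 (for instance because the dump came from a cluster whose passwords were hashed with md5, which scram-sha-256 can't verify), the official postgres image also accepts the variables below; note they only take effect when a fresh data directory is initialized. A sketch:
environment:
  - POSTGRES_USER=fred2020
  - POSTGRES_PASSWORD=p*******DD
  # Use md5 instead of the newer scram-sha-256 default
  - POSTGRES_HOST_AUTH_METHOD=md5
  - POSTGRES_INITDB_ARGS=--auth=md5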

Docker compose executable file not found in $PATH": unknown

I'm setting up Django with Postgres in Docker Compose, but I'm having a problem.
Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 0
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
compose.yml:
version: '3'
services:
  db:
    image: postgres
    volumes:
      - ./docker/data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=sampledb
      - POSTGRES_USER=sampleuser
      - POSTGRES_PASSWORD=samplesecret
      - POSTGRES_INITDB_ARGS=--encoding=UTF-8
  django:
    build: .
    environment:
      - DJANGO_DEBUG=True
      - DJANGO_DB_HOST=db
      - DJANGO_DB_PORT=5432
      - DJANGO_DB_NAME=sampledb
      - DJANGO_DB_USERNAME=sampleuser
      - DJANGO_DB_PASSWORD=samplesecret
      - DJANGO_SECRET_KEY=dev_secret_key
    ports:
      - "8000:8000"
    command:
      - python3 manage.py runserver
    volumes:
      - .:/code
error:
ERROR: for django Cannot start service django: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"python3 manage.py runserver\": executable file not found in $PATH": unknown
At first, I thought the Python manage command itself was wrong.
But when I tried command: ls, to my surprise, it succeeded.
Then I tried the ls -al command, and it failed.
I think writing a command that contains spaces as a single list item is causing the problem.
How can I fix it?
When you use list syntax in the docker-compose.yml file, each item is taken as a word. You're running the shell equivalent of
'python3 manage.py runserver'
You can either break this up into separate words yourself
command:
  - python3
  - manage.py
  - runserver
or have Docker Compose do it for you
command: python3 manage.py runserver
In general fixed properties of the image like this should be specified in the Dockerfile, not in the docker-compose.yml. Every time you run this image you're going to want to run this same command, and you're going to want to run the code built into the image. There are two syntaxes, with the same basic difference:
# Explicitly write out the words
CMD ["python3", "manage.py", "runserver"]
# Docker wraps in sh -c '...' which splits words for you
CMD python3 manage.py runserver
With the code built into the image and a reasonable default command defined there, you can delete the volumes: and command: from your docker-compose.yml file.

django docker-compose deleting data from mongo database when I do "docker-compose down" and "up" again

Dockerfile:
FROM python:3.6
WORKDIR /usr/src/jobsterapi
COPY ./ ./
RUN pip install -r requirements.txt
CMD ["/bin/bash"]
docker-compose.yml
version: "3"
services:
jobster_api:
container_name: jobster
build: ./
# command: python manage.py runserver 0.0.0.0:8000
command: "bash -c 'python src/manage.py makemigrations --no-input && python src/manage.py migrate --no-input && python src/manage.py runserver 0.0.0.0:8000'"
working_dir: /usr/src/jobster_api
environment:
REDIS_URI: redis://redis:6379
MONGO_URI: mongodb://jobster:27017
ports:
- "8000:8000"
volumes:
- ./:/usr/src/jobster_api
links:
- redis
- elasticsearch
- mongo
#redis
redis:
image: redis
environment:
- ALLOW_EMPTY_PASSWORD=yes
ports:
- "6379:6379"
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:6.5.0
ports:
- "9200:9200"
- "9300:9300"
mongo:
image: mongo
ports:
- "27017:27017"
I have set up Django with MongoDB inside Docker using the docker-compose file above, and everything works fine. When I add records using "docker exec -it 'img id' /bin/bash" (I tried creating a superuser for the Django admin panel), the data is inserted. But when I run "docker-compose up" again after "docker-compose down", all data has been deleted from the database and it shows empty records, so I can't access the admin panel the next time either.
Please have a look.
Add a volume to the mongo service so that MongoDB's data directory (/data/db) persists outside the container:
mongo:
  image: mongo
  ports:
    - "27017:27017"
  volumes:
    - mongo_data:/data/db   # named volume (any name works)

volumes:
  mongo_data:
Without a volume the data lives only in the container's writable layer, which docker-compose down removes; a named volume survives down/up cycles (unless you pass -v).
https://docs.docker.com/storage/volumes/
