I'm using Docker, Selenium, and Django.
I just realised I was running my tests against my production database, while I wanted them to run on the database that StaticLiveServerTestCase generates itself.
I tried to follow this tutorial:
import socket

from django.contrib.staticfiles.testing import StaticLiveServerTestCase
from django.test import override_settings
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities


@override_settings(ALLOWED_HOSTS=['*'])
class BaseTestCase(StaticLiveServerTestCase):
    host = '0.0.0.0'

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.host = socket.gethostbyname(socket.gethostname())
        cls.selenium = webdriver.Remote(
            command_executor='http://hub:4444/wd/hub',
            desired_capabilities=DesiredCapabilities.CHROME,
        )
        cls.selenium.implicitly_wait(5)

    @classmethod
    def tearDownClass(cls):
        cls.selenium.quit()
        super().tearDownClass()


class MyTest(BaseTestCase):
    def test_simple(self):
        self.selenium.get(self.live_server_url)
I get no error connecting to the Chrome hub, but when I print my page_source I'm not on my Django app but on a Chrome error page. Here is part of it:
<div class="error-code" jscontent="errorCode" jstcache="7">ERR_CONNECTION_REFUSED</div>
I'm using docker-compose. Here is selenium.yml:
chrome:
  image: selenium/node-chrome:3.11.0-dysprosium
  volumes:
    - /dev/shm:/dev/shm
  links:
    - hub
  environment:
    HUB_HOST: hub
    HUB_PORT: '4444'
hub:
  image: selenium/hub:3.11.0-dysprosium
  ports:
    - "4444:4444"
  expose:
    - "4444"
app:
  links:
    - hub
I guess I did something wrong in my docker-compose file, but I can't figure out what.
Thanks in advance!
PS: live_server_url = http://localhost:8081
You need to set host to the container_name of the container that is running Django/the tests when using docker-compose, i.e.
host = 'app'
For a more detailed discussion see this question
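A minimal sketch of how the base class might look with that change, assuming the Django/test service is named app in docker-compose (the gethostbyname override from the question is dropped here so that the class-level host is the one the live server actually uses):

from django.contrib.staticfiles.testing import StaticLiveServerTestCase
from django.test import override_settings
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities


@override_settings(ALLOWED_HOSTS=['*'])
class BaseTestCase(StaticLiveServerTestCase):
    # Assumption: 'app' is the container_name/service name of the container running the tests.
    host = 'app'

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.selenium = webdriver.Remote(
            command_executor='http://hub:4444/wd/hub',
            desired_capabilities=DesiredCapabilities.CHROME,
        )
        cls.selenium.implicitly_wait(5)

The Selenium node can then reach the live server at http://app:<port> over the compose network.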
Related
I am trying to connect to a MySQL DB from a Python ingestion script run via Docker. I have the following compose file:
version: '3.9'
services:
  mysql_db:
    image: mysql:latest
    restart: unless-stopped
    environment:
      MYSQL_DATABASE: ${MY_SQL_DATABASE}
      MYSQL_USER: ${MY_SQL_USER}
      MYSQL_PASSWORD: ${MY_SQL_PASSWORD}
      MYSQL_ROOT_PASSWORD: ${MY_SQL_ROOT_PASSWORD}
    ports:
      - '3306:3306'
    volumes:
      - ./mysql-data:/var/lib/mysql
  adminer:
    image: adminer:latest
    restart: unless-stopped
    ports:
      - 8080:8080
  ingestion-python:
    build:
      context: .
      dockerfile: ingestion.dockerfile
    depends_on:
      - mysql_db
Adminer connects to MySQL successfully. Then I created the following ingestion script to automate the creation of a table. My ingestion script is:
from dotenv import load_dotenv
import os
import pandas as pd
from sqlalchemy import create_engine


def main():
    load_dotenv('.env')
    user = os.environ.get('MY_SQL_USER')
    password = os.environ.get('MY_SQL_PASSWORD')
    host = os.environ.get('MY_SQL_HOST')
    port = os.environ.get('MY_SQL_PORT')
    db = os.environ.get('MY_SQL_DATABASE')
    table_name = os.environ.get('MY_SQL_TABLE_NAME')
    print(f'mysql+pymysql://{user}:{password}@{host}:{port}/{db}')
    engine = create_engine(f'mysql+pymysql://{user}:{password}@{host}:{port}/{db}')
    df = pd.read_csv('./data/data.parquet', encoding='ISO-8859-1', on_bad_lines='skip', engine='python')
    df.to_sql(name=table_name, con=engine, if_exists='append')


if __name__ == '__main__':
    main()
When I run my docker compose file (docker-compose up -d) I get:
2023-02-14 08:58:59 sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'mysql_db' ([Errno 111] Connection refused)")
2023-02-14 08:58:59 (Background on this error at: https://sqlalche.me/e/20/e3q8)
The credentials and connections are retrieved from my .env file:
#MYSQL CONFIG
MY_SQL_DATABASE = test_db
MY_SQL_USER = data
MY_SQL_PASSWORD = random
MY_SQL_ROOT_PASSWORD = root
#PYTHON INGESTION
MY_SQL_HOST = mysql_db
MY_SQL_PORT = 3306
MY_SQL_TABLE_NAME = test_table
Why can't I connect to the MySQL DB from my Python script?
This is most likely a timing problem: your ingestion container is starting before the database in the mysql container is ready. depends_on only waits for the mysql container to start, not for the database to actually be ready to accept connections.
You might want to check the log output from the containers to see when the database is actually ready to accept connections, and add some delay to the ingestion container. Another option is to open the connection in a loop, with enough retries and a timeout between them, so that you start as soon as the database is ready.
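One hedged sketch of such a retry loop (the helper name, retry count, and delay are assumptions, not part of the question's code):

import time

from sqlalchemy import create_engine, text


def wait_for_engine(url, retries=30, delay=2):
    # Keep trying a trivial query until MySQL accepts connections or we give up.
    engine = create_engine(url)
    for attempt in range(1, retries + 1):
        try:
            with engine.connect() as conn:
                conn.execute(text('SELECT 1'))
            return engine
        except Exception:
            print(f'Database not ready (attempt {attempt}/{retries}), retrying in {delay}s...')
            time.sleep(delay)
    raise RuntimeError('Database never became ready')

In main() this could then replace the plain create_engine call, e.g. engine = wait_for_engine(f'mysql+pymysql://{user}:{password}@{host}:{port}/{db}').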
You should set the hostname in your docker compose file:
mysql_db:
  hostname: "mysql_db"
  image: mysql:latest
  restart: unless-stopped
  environment:
    MYSQL_DATABASE: ${MY_SQL_DATABASE}
    MYSQL_USER: ${MY_SQL_USER}
    MYSQL_PASSWORD: ${MY_SQL_PASSWORD}
    MYSQL_ROOT_PASSWORD: ${MY_SQL_ROOT_PASSWORD}
  ports:
    - '3306:3306'
  volumes:
    - ./mysql-data:/var/lib/mysql
As a fallback, since you don't have a network set up, you can also try the default hostname:port that Docker exposes as the connection string:
MY_SQL_HOST = host.docker.internal
MY_SQL_PORT = 3306
MY_SQL_TABLE_NAME = test_table
I'm trying to build a queue system using Celery + SQS.
In my local environment with LocalStack, I'm not able to receive messages in the worker; it just doesn't show anything. There is a similar question from some time ago, but my config matches theirs.
All the other SQS/SNS operations work from other functions, but it isn't working from Celery.
My current setup is like this:
Docker config:
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - SERVICES=sqs,sns
      - HOSTNAME=localstack
      - HOSTNAME_EXTERNAL=localstack
    ports:
      - '4566:4566'
    networks:
      - platform_default
    volumes:
      - "${TMPDIR:-/tmp}/localstack:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
And the Celery instantiation is below; I used get_queue_url directly to be sure of the queue link.
return Celery(
    "server",
    task_default_queue=config.sqs_celery.queue_name,
    broker="sqs://",
    broker_url=f"sqs://{config.sqs_celery.aws_access_key_id}:{config.sqs_celery.aws_secret_access_key}@{config.sqs_celery.broker_site_port}",
    # in my case: sqs://localstack:localstack@localhost:4566
    broker_transport_options={
        'region': config.sqs_celery.region,
        "predefined_queues": {
            config.sqs_celery.queue_name: {
                "url": get_queue_url(config.sqs_celery.queue_name),
                # in my case: http://localstack:4566/000000000000/tasks
                'region': config.sqs_celery.region,
            }
        }
    }
)
Maybe you have some ideas of where to start, because I've lost half the day trying to figure out what is wrong.
SOLVED:
The host in the queue URL could not be resolved, so you have to check everything. I set it to "localhost" and it worked.
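For reference, a hedged sketch of what the transport options look like with that fix, assuming LocalStack publishes port 4566 to the machine the worker runs on and the queue is named tasks as in the question's comment (the region value is an assumption standing in for config.sqs_celery.region):

from celery import Celery

queue_name = 'tasks'   # from the queue URL in the question
region = 'us-east-1'   # assumption: use whatever config.sqs_celery.region holds

app = Celery(
    'server',
    task_default_queue=queue_name,
    broker_url='sqs://localstack:localstack@localhost:4566',
    broker_transport_options={
        'region': region,
        'predefined_queues': {
            queue_name: {
                # 'localstack' could not be resolved from where the worker ran,
                # so the queue URL points at localhost instead.
                'url': f'http://localhost:4566/000000000000/{queue_name}',
                'region': region,
            }
        },
    },
)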
I'm creating a basic project to test Flask + Celery + RabbitMQ + Docker.
For some reason I don't know, when I call the Celery task it seems to reach RabbitMQ, but it always stays in the PENDING state and never changes to another state. I tried to use task.get(), but the code freezes. Example:
The celery worker (e.g. worker_a.py) is something like this:
from celery import Celery

# Initialize Celery
celery = Celery('worker_a',
                broker='amqp://guest:guest@tfcd_rabbit:5672//',
                backend='rpc://')

[...]

@celery.task()
def add_nums(a, b):
    return a + b
While docker-compose.yml is something like this:
[...]
  tfcd_rabbit:
    container_name: tfcd_rabbit
    hostname: tfcd_rabbit
    image: rabbitmq:3.8.11-management
    environment:
      - RABBITMQ_ERLANG_COOKIE=test
      - RABBITMQ_DEFAULT_USER=guest
      - RABBITMQ_DEFAULT_PASS=guest
    ports:
      - 5672:5672
      - 15672:15672
    networks:
      - tfcd
  tfcd_worker_a:
    container_name: tfcd_worker_a
    hostname: tfcd_worker_1
    image: test_flask_celery_docker
    entrypoint: celery
    command: -A worker_a worker -l INFO -Q worker_a
    volumes:
      - .:/app
    links:
      - tfcd_rabbit
    depends_on:
      - tfcd_rabbit
    networks:
      - tfcd
[...]
The repository with all the files and instructions to run it can be found here.
Would anyone know what might be going on?
Thank you in advance.
After a while, a friend of mine discovered the problem:
The task was created without the correct queue name, so Celery was publishing to the default "celery" queue instead of the worker_a queue the worker was consuming.
The final code is this:
[...]
@celery.task(queue='worker_a')
def add_nums(a, b):
    return a + b
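If you prefer not to hard-code the queue on the decorator, a hedged alternative sketch is to pick the queue when the task is sent, using Celery's standard apply_async option so the message lands on the queue the worker consumes (-Q worker_a):

# Route this call explicitly to the worker_a queue.
result = add_nums.apply_async(args=(1, 2), queue='worker_a')
print(result.get(timeout=10))  # should print 3 once the worker picks it up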
I first ssh into the Master Node.
When I run kubectl get svc
I get the following output:
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
python-app-service   LoadBalancer   10.110.157.42   <pending>     5000:30008/TCP   68m
I then run curl 10.110.157.42:5000
and I get the following message:
curl: (7) Failed connect to 10.110.157.42:5000; Connection refused
Below I posted my Dockerfile, deployment file, service file, and Python application file. When I run the Docker image, it works fine. However, when I try to apply a Kubernetes service to the pod, I am unable to make calls. What am I doing wrong? Also, please let me know if I left out any necessary information. Thank you!
The Kubernetes cluster was created with kubeadm using the Flannel CNI.
Deployment yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-api
  labels:
    app: my-python-app
    type: back-end
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-python-app
      type: backend
  template:
    metadata:
      name: python-api-pod
      labels:
        app: my-python-app
        type: backend
    spec:
      containers:
        - name: restful-python-example
          image: mydockerhub/restful-python-example
          ports:
            - containerPort: 5000
Service yaml file:
apiVersion: v1
kind: Service
metadata:
  name: python-app-service
spec:
  type: LoadBalancer
  ports:
    - port: 5000
      targetPort: 5000
      nodePort: 30008
  selector:
    app: my-python-app
    type: backend
Python application source - restful.py:
#!/usr/bin/python3
from flask import Flask, jsonify, request, abort
from flask_restful import Api, Resource
import jsonpickle

app = Flask(__name__)
api = Api(app)

# Creating an empty dictionary and initializing user id to 0.. will increment every time a person makes a POST request.
# This is bad practice but only using it for the example. Most likely you will be pulling this information from a
# database.
user_dict = {}
user_id = 0


# Define a class and pass it a Resource. These methods require an ID
class User(Resource):
    @staticmethod
    def get(path_user_id):
        if path_user_id not in user_dict:
            abort(400)
        return jsonify(jsonpickle.encode(user_dict.get(path_user_id, "This user does not exist")))

    @staticmethod
    def put(path_user_id):
        update_and_add_user_helper(path_user_id, request.get_json())

    @staticmethod
    def delete(path_user_id):
        user_dict.pop(path_user_id, None)


# Get all users and add new users
class UserList(Resource):
    @staticmethod
    def get():
        return jsonify(jsonpickle.encode(user_dict))

    @staticmethod
    def post():
        global user_id
        user_id = user_id + 1
        update_and_add_user_helper(user_id, request.get_json())


# Since post and put are doing pretty much the same thing, I extracted the logic from both and put it in a separate
# method to follow DRY principles.
def update_and_add_user_helper(u_id, request_payload):
    name = request_payload["name"]
    age = request_payload["age"]
    address = request_payload["address"]
    city = request_payload["city"]
    state = request_payload["state"]
    zip_code = request_payload["zip"]
    user_dict[u_id] = Person(name, age, address, city, state, zip_code)


# Represents a user's information
class Person:
    def __init__(self, name, age, address, city, state, zip_code):
        self.name = name
        self.age = age
        self.address = address
        self.city = city
        self.state = state
        self.zip_code = zip_code


# Add a resource to the api. You need to give the class name and the URI.
api.add_resource(User, "/users/<int:path_user_id>")
api.add_resource(UserList, "/users")

if __name__ == "__main__":
    app.run()
Dockerfile:
FROM python:3
WORKDIR /usr/src/app
RUN pip install flask
RUN pip install flask_restful
RUN pip install jsonpickle
COPY . .
CMD python restful.py
kubectl describe svc python-app-service
Name: python-app-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=my-python-app,type=backend
Type: LoadBalancer
IP: 10.110.157.42
Port: <unset> 5000/TCP
TargetPort: 5000/TCP
NodePort: <unset> 30008/TCP
Endpoints: 10.244.3.24:5000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
So the reason I was unable to connect was that I never exposed the port in my Dockerfile.
My Dockerfile should have been:
FROM python:3
WORKDIR /usr/src/app
RUN pip install flask
RUN pip install flask_restful
RUN pip install jsonpickle
COPY . .
EXPOSE 5000
CMD python restful.py
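As a quick check from outside the cluster once the pod is serving traffic, something along these lines should reach the API through the NodePort (the requests library is assumed to be available; <node-ip> is a placeholder for one of the node IPs):

import requests

# 30008 is the nodePort from the Service definition above.
resp = requests.get('http://<node-ip>:30008/users')  # replace <node-ip> with a real node IP
print(resp.status_code, resp.json())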
Here's my docker-compose file:
version: '2.1'
services:
  db:
    restart: always
    image: nikitph/portcastdbimage:latest
    ports:
      - "5432:5432"
    environment:
      - DEBUG = false
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
  scraper:
    build: .
    restart: always
    links:
      - db
    environment:
      - DB_HOST = db
      - BAR = FOO
    depends_on:
      db:
        condition: service_healthy
    command: [ "python3", "./cycloneprocess.py" ]
Now, from what I have gleaned from Stack Overflow, there are two options to access this db from a different container:
a) Use an env variable:
self.connection = psycopg2.connect(host=os.environ["DB_HOST"], user=username, password=password, dbname=database)
print(os.environ["DB_HOST"]) gives me 'db'. I don't know if that's expected.
b) Directly use 'db':
self.connection = psycopg2.connect(host='db', user=username, password=password, dbname=database)
Neither of them seems to be working, as no data gets populated. Everything works locally, so I'm quite confident my code is accurate. All variables like user etc. have been checked and rechecked, and they work locally. Would really appreciate any help. Everything is on the same network, by the way.