I'm trying to connect from a Python container to the SQL Server instance on my Windows host (the one I use through SQL Server Management Studio). I'm using port 1444, which is the port of my database, and it gives me this error:
Traceback (most recent call last):
  File "/app/main.py", line 3, in <module>
    connection = pyodbc.connect('Driver={FreeTDS};'
pyodbc.OperationalError: ('08S01', '[08S01] [FreeTDS][SQL Server]Unable to connect: Adaptive Server is unavailable or does not exist (20009) (SQLDriverConnect)')
The main.py code is:
import pyodbc

connection = pyodbc.connect('Driver={FreeTDS};'
                            r'Server=DESKTOP-xxx\SQLEXPRESS,1444;'  # raw string so \S is not treated as an escape
                            'Database=DBxx;'
                            'UID=username;'
                            'PWD=password')
cursor = connection.cursor()
cursor.execute("SELECT [Id],[c1] FROM [DBxx].[dbo].[TABLExx]")
for row in cursor.fetchall():
    print(row.Id, row.c1)  # the query selects [Id] and [c1]; there is no Name column
Dockerfile:
FROM python
EXPOSE 1444
WORKDIR /app
ADD requirements.txt .
ADD . /app
ADD odbcinst.ini /etc/odbcinst.ini
RUN apt-get update
RUN apt-get install gcc -y
RUN apt-get install -y tdsodbc unixodbc-dev
RUN apt-get install -y unixodbc
RUN apt-get clean -y
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
RUN echo "[FreeTDS]\n\
Description = FreeTDS Driver\n\
Driver = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so\n\
Setup = /usr/lib/x86_64-linux-gnu/odbc/libtdsS.so" >> /etc/odbcinst.ini
# Run main.py when the container launches
CMD ["python", "main.py"]
And finally the requirements:
pyodbc
Related
My Python container is throwing the error below when trying to connect to my SQL DB hosted on a server:
mariadb.OperationalError: Can't connect to server on 'xxxxxxxxxxx.jcloud-ver-jpc.ik-server.com' (115)
I am trying to run my container from my server as well. If I run the exact same container from my machine, I can connect to the SQL DB.
I am new to Docker, so just for info, here is my Dockerfile:
FROM python:3.10-slim-buster
WORKDIR /app
COPY alpha_gen alpha_gen
COPY poetry.lock .
COPY pyproject.toml .
# install basic utils
RUN apt-get update
RUN apt-get install curl -y
RUN apt-get install gcc -y
# install MariaDB connector
RUN apt install wget -y
RUN wget https://downloads.mariadb.com/MariaDB/mariadb_repo_setup
RUN chmod +x mariadb_repo_setup
RUN ./mariadb_repo_setup --mariadb-server-version="mariadb-10.6"
RUN apt install libmariadb3 libmariadb-dev -y
# install poetry
RUN curl -sSL https://install.python-poetry.org | python3 -
ENV PATH="${PATH}:/root/.local/bin"
RUN poetry config virtualenvs.in-project true
# install dependencies
RUN poetry install
CMD poetry run python alpha_gen/main.py --load_pre_process
Any ideas?
Problem solved. Apparently there is a private port for communication from within the server and a public port for communication from outside the server. I was using the public one, so it was not working.
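For reference, a minimal sketch of the resulting connection with MariaDB Connector/Python once the private port is used (the port number and credentials below are placeholders, not values from the question):
import mariadb

conn = mariadb.connect(
    host="xxxxxxxxxxx.jcloud-ver-jpc.ik-server.com",
    port=3306,               # use the *private* port here, not the public one
    user="my_user",          # placeholder
    password="my_password",  # placeholder
    database="my_db",        # placeholder
)
cur = conn.cursor()
cur.execute("SELECT VERSION()")  # trivial query to confirm the connection works
print(cur.fetchone()[0])
conn.close()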
I am using Python scripts to create an LDAP connection. I have created an empty Postgres database into which I want to insert some values, but I am not able to connect to the Postgres DB from inside the Docker container.
Traceback (most recent call last):
  File "/usr/src/app/ldap_sync_users.py", line 41, in <module>
    connection = psycopg2.connect(user = psqlUser,
  File "/usr/local/lib/python3.9/site-packages/psycopg2/__init__.py", line 127, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not translate host name "10.120.16.12 " to address: Name or service not known
The Dockerfile looks like this:
FROM python:3
WORKDIR /usr/src/app
RUN apt-get update && apt-get install -y \
libsasl2-dev \
python-dev \
libldap2-dev \
libssl-dev \
python-pip
RUN pip install python-ldap
RUN pip install psycopg2
COPY ldap_file.py ./
CMD python ldap_sync_users.py
and I use another shell script to run the container:
#!/bin/bash
docker run -i \
--name ldap_sync \
--env-file=environment \
--add-host=database:10.120.16.12 \
ldapsync
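Note that the traceback shows the host as "10.120.16.12 " with a trailing space, so the value read from the environment file likely needs to be stripped. With the --add-host mapping above, the connection inside the container could also target the database alias. A minimal sketch, with placeholder credentials:
import psycopg2

connection = psycopg2.connect(
    host="database",       # resolved via --add-host=database:10.120.16.12
    port=5432,             # default Postgres port (assumption)
    user="psql_user",      # placeholder
    password="psql_pass",  # placeholder
    dbname="psql_db",      # placeholder
)
print(connection.status)  # confirm the connection is open
connection.close()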
I'm trying to make a simple MS SQL Server call from Python by using Docker. The SQL connection can be established if I run the Python script directly, but it won't work from Docker.
My code is below
Dockerfile
FROM python:3
WORKDIR /code
COPY requirements.txt .
RUN apt-get update \
&& apt-get install unixodbc -y \
&& apt-get install unixodbc-dev -y \
&& apt-get install freetds-dev -y \
&& apt-get install freetds-bin -y \
&& apt-get install tdsodbc -y \
&& apt-get install --reinstall build-essential -y
RUN echo "[FreeTDS]\n\
Description = FreeTDS Driver\n\
Driver = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so\n\
Setup = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so" >> /etc/odbcinst.ini
#Pip command without proxy setting
RUN pip install -r requirements.txt
COPY src/ .
CMD ["python", "./producer.py"]
producer.py
import pyodbc

connP = pyodbc.connect('driver={FreeTDS};'
                       r'server={MYSERV01\SQLEXPRESS};'  # raw string so \S is not treated as an escape
                       'database=ABCD;'
                       'uid=****;'
                       'pwd=*****')
requirements.txt
kafka-python
pyodbc==4.0.28
Error message
I referred to this article and followed it. I searched online for resolutions and tried several steps, but nothing helped. I'm pretty new to Docker and have no experience in Python, so any help would be really good. Thanks in advance!
In your pyodbc.connect call, try giving the server as '0.0.0.0' instead of any other value.
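A minimal sketch of that suggestion applied to the connect call from the question (everything except the server value is unchanged; whether 0.0.0.0 actually reaches your SQL Server depends on where the server runs relative to the container):
import pyodbc

connP = pyodbc.connect('driver={FreeTDS};'
                       'server=0.0.0.0;'  # as suggested above
                       'database=ABCD;'
                       'uid=****;'
                       'pwd=*****')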
If you want to debug it from inside the container, then comment the last CMD line of your Dockerfile.
Build your Docker container
docker build -f Dockerfile -t achu-docker-container .
Run your Docker Container
docker run -it achu-docker-container /bin/bash
This will place you inside the container, similar to SSHing into a different machine.
Go to your WORKDIR
cd /code
python ./producer.py
What do you get from the above? (If you install an editor using apt-get install vim, you will be able to interactively edit the producer.py file and fix your problem from inside the running container.)
Then you can move your changes to your source Dockerfile and build a new image and container with it.
I was trying to connect to the local SQL Server database. I referred to many articles and figured out that the following works:
The server should be host.docker.internal,<port_no> -- this was the catch. When it comes to a dedicated database where SQL Server runs on a different machine from the Docker image, the server name is provided directly; but when both the image and the database are on the same machine, host.docker.internal,<port_no> works. Please check the port number in the SQL Server Configuration Manager TCP/IP settings (IPAll).
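For example, a minimal sketch using the FreeTDS driver from the question (port 1433 and the credentials are assumptions; substitute the port shown in the IPAll section):
import pyodbc

connection = pyodbc.connect('Driver={FreeTDS};'
                            'Server=host.docker.internal;'
                            'Port=1433;'      # port from SQL Server Configuration Manager (IPAll); an assumption
                            'Database=DBxx;'
                            'UID=username;'   # placeholder
                            'PWD=password')   # placeholder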
I have two Docker containers running on an Ubuntu 16.04 machine. One container runs a MySQL server, and the other holds a dockerized Python script set to run as a cron job every minute that loads data into MySQL. How can I connect the two so the Python script loads data into the MySQL container? I have an error showing up:
Here are my relevant commands:
MYSQL container runs without issue:
docker run -p 3306:3306 -e MYSQL_ROOT_PASSWORD=yourPassword --name icarus -d mysql_docker_image
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
927e50ca0c7d mysql_docker_image "docker-entrypoint.s…" About an hour ago Up About an hour 0.0.0.0:3306->3306/tcp, 33060/tcp icarus
Second container holds cron and python script:
#build the container without issue
sudo docker run -t -i -d docker-cron
#exec into it to check logs
sudo docker exec -i -t container_id /bin/bash
#check logs
root@b149b5e7306d:/# cat /var/log/cron.log
Error:
I have the following error showing up, which I believe has to do with the wrong host address:
Caught this error: OperationalError('(pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'localhost\' ([Errno 99] Cannot assign requested address)")',)
Python Script:
from traffic.data import opensky
from sqlalchemy import create_engine
#from sqlalchemy_utils import database_exists, create_database
import sqlalchemy
import sys  # needed for sys.version_info below
import gc
#connection and host information
host = 'localhost'
db = 'icarus'
engine = create_engine('mysql+pymysql://root:password@' + host + ':3306/' + db) #create engine connection
version = sys.version_info[0]
#functions to upload data
def upload(df, table_name):
    df.to_sql(table_name, con=engine, index=False, if_exists='append')
    engine.dispose()
    print('SUCCESSFULLY LOADED DATA INTO STAGING...')
#pull data from api
sv = opensky.api_states()
final_df = sv.data
#quick column clean up
print(final_df.head())
final_df = final_df.rename(columns={'timestamp': 'time_stamp'})
#insert data to staging
try:
    upload(final_df, 'flights_stg')
except Exception as error:
    print('Caught this error: ' + repr(error))
del(final_df)
gc.collect()
I'm assuming the error is due to the use of 'localhost' as my address? How would I go about resolving something like this?
More information:
MYSQL Dockerfile:
FROM mysql
COPY init.sql /docker-entrypoint-initdb.d
Python Dockerfile:
FROM ubuntu:latest
WORKDIR /usr/src/app
#apt-get install -y build-essential -y python python-dev python-pip python-virtualenv libmysqlclient-dev curl&& \
RUN \
apt-get update && \
apt-get install -y build-essential git python3.6 python3-pip libproj-dev proj-data proj-bin libgeos++-dev libmysqlclient-dev python-mysqldb curl && \
rm -rf /var/lib/apt/lists/*
COPY requirements.txt ./
RUN pip3 install --upgrade pip && \
pip3 install --no-cache-dir -r requirements.txt
RUN pip3 install --upgrade setuptools
RUN pip3 install git+https://github.com/xoolive/traffic
COPY . .
# Install cron
RUN apt-get update
RUN apt-get install -y cron
# Add crontab file in the cron directory
ADD crontab /etc/cron.d/simple-cron
# Add shell script and grant execution rights
ADD script.sh /script.sh
RUN chmod +x /script.sh
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/simple-cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Run the command on container startup
CMD cron && tail -f /var/log/cron.log
Can you share your Dockerfile or compose file for the MySQL container? Yes, the problem is related to using localhost as the host. You must use the Docker service name as the host; in Docker Compose, the service name works as a DNS name. For example, if your docker-compose.yml looks like this:
services:
  mydb:
    image: mysql:5.7
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_DATABASE: root
then you must use mydb instead of localhost as the host.
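Applied to the script from the question, that would look like this minimal sketch (assuming the compose file above, where both the root password and the database are named root):
from sqlalchemy import create_engine

host = 'mydb'  # the compose service name, not 'localhost'
engine = create_engine('mysql+pymysql://root:root@' + host + ':3306/root')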
Looks like a user-defined bridge network was the way to go here, as recommended. I solved the issue with:
docker network create my-net
docker create --name mysql \
--network my-net \
--publish 3306:3306 \
-e MYSQL_ROOT_PASSWORD=password \
mysql:latest
docker create --name docker-cron \
--network my-net \
docker-cron:latest
Then docker start each of them, and using the --name value as the host worked perfectly.
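With that setup, the engine URL from the question only needs the host changed to the container name. A minimal sketch:
from sqlalchemy import create_engine

host = 'mysql'  # the --name of the MySQL container on the shared my-net network
db = 'icarus'
engine = create_engine('mysql+pymysql://root:password@' + host + ':3306/' + db)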
I am trying to run a Python job using a VM on Azure Batch. It's a simple script to add a line to my Azure SQL Database. I downloaded the ODBC connection string straight from my Azure portal yet I get this error. The strange thing is I can run the script perfectly fine on my own machine. I've configured the VM to install the version of Python that I need and then execute my script - I'm at a complete loss. Any ideas?
cnxn = pyodbc.connect('Driver={ODBC Driver 13 for SQL Server};Server=tcp:svr-something.database.windows.net,fakeport232;Database=db-something-prod;Uid=something@svr-something;Pwd=fake_passwd;Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;')
Traceback (most recent call last):
  File "D:\batch\tasks\apppackages\batch_python_test1.02018-11-12-14-30\batch_python_test\python_test.py", line 12, in <module>
    r'Driver={ODBC Driver 13 for SQL Server};Server=tcp:svr-mydatabase.database.windows.net,'
pyodbc.InterfaceError: ('IM002', '[IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified (0) (SQLDriverConnect)')
Being new to Azure Batch, I didn't realise the virtual machines don't come with ODBC drivers installed. I wrote a .bat file to install the drivers on the node when the pool is allocated. Problem solved.
You have to install the ODBC driver in each compute nodes of the pool.
Put the below commands inside a shell script file startup_tasks.sh:
sudo apt-get -y update;
export DEBIAN_FRONTEND=noninteractive;
sudo apt-get install -y python3-pip;
apt-get install -y --no-install-recommends apt-utils apt-transport-https;
curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - ;
curl https://packages.microsoft.com/config/debian/9/prod.list > /etc/apt/sources.list.d/mssql-release.list ;
sudo apt-get -y update ;
ACCEPT_EULA=Y apt-get -y install msodbcsql17 ;
ACCEPT_EULA=Y apt-get -y install mssql-tools ;
echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile ;
echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc ;
source ~/.bashrc && sudo apt-get install -y unixodbc unixodbc-dev ;
Give /bin/bash -c "startup_tasks.sh" as a start task in the Azure Batch pool. This will install ODBC Driver 17 on each node.
Then, in your connection string, change the ODBC driver version to 17.
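For example, a minimal sketch based on the connection string from the question (server, database, user, and password are the question's placeholders; port 1433 is an assumption, keep your real one):
import pyodbc

cnxn = pyodbc.connect('Driver={ODBC Driver 17 for SQL Server};'
                      'Server=tcp:svr-something.database.windows.net,1433;'  # port is an assumption
                      'Database=db-something-prod;'
                      'Uid=something@svr-something;'
                      'Pwd=fake_passwd;'  # placeholder
                      'Encrypt=yes;'
                      'TrustServerCertificate=no;'
                      'Connection Timeout=30;')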