Good Morning,
I am having the following issue with my Docker container and pyodbc / unixodbc-dev.
When running my Python API in my Docker container I get the following error message:
(pyodbc.Error) ('01000', "[01000] [unixODBC][Driver Manager]Can't open lib 'ODBC Driver 17 for SQL Server' : file not found (0) (SQLDriverConnect)")
When I connect to the same API from my local debug instance, everything works fine: I can submit a string for searching in the backend database, and the results are returned and sent back to the Postman UI.
I see that unixodbc-dev 2.3.6-0.1 amd64 is installed in the Docker image, and I noticed that unixODBC is at 2.3.11. I don't know if that mismatch could be an issue, but that being said, our Moonshot instances can't connect to http://deb.debian.org, and getting our security group to open it up is next to impossible.
All this being said, I'm wondering if I have something configured wrong in my Docker container that is causing my issues. I'm new to the Docker container world, so this is definitely learn as I go.
TIA,
Bill Youngman
I was able to get this figured out - thanks to m.b. for the solution I was looking for.
I was able to take the Debian suggestion from the question Install ODBC driver in Alpine Linux Docker Container and modify it for my needs.
This is the code that I used to meet my requirements of installing unixODBC as well as downloading and installing the MS SQL ODBC driver.
FROM python:3.8.3
ENV DEBIAN_FRONTEND=noninteractive
# install Microsoft SQL Server requirements.
ENV ACCEPT_EULA=Y
RUN apt-get update \
&& apt-get install -y --no-install-recommends curl gcc g++ gnupg unixodbc-dev
# Add SQL Server ODBC Driver 17 (Debian 10 package list)
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - \
&& curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list \
&& apt-get update \
&& apt-get install -y --no-install-recommends --allow-unauthenticated msodbcsql17 mssql-tools \
&& echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile \
&& echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc
# clean the install.
RUN apt-get -y clean
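As a quick sanity check (not part of the original Dockerfile), you can ask pyodbc inside the container which drivers it can see; 'ODBC Driver 17 for SQL Server' should be in the list:
import pyodbc
# Lists the driver names registered with the unixODBC driver manager
print(pyodbc.drivers())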
Once this was accomplished and I built and deployed my container, everything worked perfectly.
-Bill
Related
I have been using pyodbc on one of my Databricks clusters and have been installing it using these shell commands, run in the first cell of my notebook:
curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list > /etc/apt/sources.list.d/mssql-release.list
apt-get update
ACCEPT_EULA=Y apt-get install msodbcsql17
apt-get -y install unixodbc-dev
sudo apt-get install python3-pip -y
pip3 install --upgrade pyodbc
This works fine, but I have to execute it each time I run the cluster and intend to use pyodbc; I have been doing this by including it as the first cell of each notebook that uses pyodbc. To fix this, I saved the code as a .sh file, uploaded it to DBFS, and added it as one of my cluster's init scripts. Upon running the code given below:
cnxn1 = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER='+jdbcHostname+';DATABASE='+jdbcDatabase+';UID='+username1+';PWD='+ password1)
I get the following error:
('01000', "[01000] [unixODBC][Driver Manager]Can't open lib 'ODBC Driver 17 for SQL Server' : file not found (0) (SQLDriverConnect)")
What is it that I am doing wrong with my shell commands/init script that's causing this issue? Any help would be greatly appreciated. Thanks!
This is the recommended way of doing it.
Create the file like this:
dbutils.fs.put("dbfs:/databricks/scripts/pyodbc-install.sh","""
curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list > /etc/apt/sources.list.d/mssql-release.list
apt-get update
ACCEPT_EULA=Y apt-get install msodbcsql17
apt-get -y install unixodbc-dev
sudo apt-get install python3-pip -y
pip3 install --upgrade pyodbc""", True)
Then go to your cluster configuration page.
Click on Edit:
Go down and expand Advanced Options > Init Scripts
There you can add the path of the script:
Then you can click on Confirm.
Now, this script will be executed at the start of your cluster and will make pyodbc available on all notebooks attached to it.
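Once the cluster restarts, a quick way to confirm the init script worked is a check like this in any notebook cell (a sketch, not required):
import pyodbc
# The driver name must appear here before pyodbc.connect can load it.
assert 'ODBC Driver 17 for SQL Server' in pyodbc.drivers()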
Is that how you did it?
I'm trying to make a simple MS SQL Server call from Python by using Docker. The SQL connection can be established when I run the Python script directly, but it won't work from Docker.
My code is below
Dockerfile
FROM python:3
WORKDIR /code
COPY requirements.txt .
RUN apt-get update \
&& apt-get install unixodbc -y \
&& apt-get install unixodbc-dev -y \
&& apt-get install freetds-dev -y \
&& apt-get install freetds-bin -y \
&& apt-get install tdsodbc -y \
&& apt-get install --reinstall build-essential -y
RUN echo "[FreeTDS]\n\
Description = FreeTDS Driver\n\
Driver = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so\n\
Setup = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so" >> /etc/odbcinst.ini
#Pip command without proxy setting
RUN pip install -r requirements.txt
COPY src/ .
CMD ["python", "./producer.py"]
producer.py
import pyodbc
connP = pyodbc.connect('driver={FreeTDS};'
r'server={MYSERV01\SQLEXPRESS};'
'database=ABCD;'
'uid=****;'
'pwd=*****')
requirements.txt
kafka-python
pyodbc==4.0.28
Error message: (screenshot in the original post)
I referred to this article and followed it. I searched online for resolutions and tried several steps, but nothing helped. I'm pretty new to Docker and have no experience in Python, so any help would be really good. Thanks in advance!
In your pyodbc.connect call, try giving the server as '0.0.0.0' instead of any other value.
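For example, a sketch of that suggestion applied to the question's connection (credentials are placeholders):
import pyodbc
# Sketch only: same fields as in producer.py, with the server swapped for 0.0.0.0
connP = pyodbc.connect('driver={FreeTDS};'
                       'server=0.0.0.0;'
                       'database=ABCD;'
                       'uid=user;'
                       'pwd=password')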
If you want to debug it from inside the container, then comment out the last CMD line of your Dockerfile.
Build your Docker image
docker build -f Dockerfile -t achu-docker-container .
Run your Docker Container
docker run -it achu-docker-container /bin/bash
This will place you inside the container, similar to SSHing into a different machine.
Go to your WORKDIR
cd /code
python ./producer.py
What do you get from the above? (If you install an editor using apt-get install vim, you will be able to interactively edit the producer.py file and fix your problem from inside the running container.)
Then you can move your changes to your source Dockerfile and build a new image and container with it.
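While you are inside the container, one quick way to check basic network reachability to the database host is a sketch like this (the host name and port are placeholders for your server's):
import socket
# Hypothetical host/port: substitute your SQL Server's address and port.
try:
    socket.create_connection(('MYSERV01', 1433), timeout=5).close()
    print('TCP connection OK')
except OSError as exc:
    print('Cannot reach the server:', exc)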
I was trying to connect to a local SQL Server database. I referred to many articles and figured out the following: the server should be given as host.docker.internal,<port_no> -- this was the catch. When it comes to a dedicated database, where the SQL Server is on a different machine than the Docker image, the server name is provided directly; but when both the image and the database are on the same machine, host.docker.internal with the port works. Please check the port number in the SQL Server Configuration Manager TCP/IP settings (IPAll).
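A sketch of what that looks like in the question's producer.py, following the host,port form described above (the port and credentials are placeholders; use the port listed under IPAll):
import pyodbc
# host.docker.internal resolves to the host machine from inside the container.
# 1433 is an assumption: use the TCP port shown under IPAll.
connP = pyodbc.connect('driver={FreeTDS};'
                       'server=host.docker.internal,1433;'
                       'database=ABCD;'
                       'uid=user;'
                       'pwd=password')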
This question already has answers here: standard_init_linux.go:211: exec user process caused "exec format error" (8 answers). Closed 2 years ago.
I am currently trying to deploy my Docker application to Azure Container Registry. I am able to run my Docker image locally, but when I deploy it to Azure, it gives me this error:
standard_init_linux.go:207: exec user process caused "exec format error"
Here is my Dockerfile:
# Pull a pre-built alpine docker image with nginx and python3 installed
# this image is from the docker community; it's small, so our upload to the container registry will be faster
FROM tiangolo/uwsgi-nginx-flask:python3.7
FROM ubuntu:latest
ENV LISTEN_PORT=8400
EXPOSE 8400
RUN apt-get update && apt-get install -y \
curl apt-utils apt-transport-https debconf-utils gcc build-essential g++-5 \
&& rm -rf /var/lib/apt/lists/*
# adding custom MS repository
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/ubuntu/19.04/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get update && ACCEPT_EULA=Y apt-get install -y msodbcsql17
# install SQL Server drivers
RUN apt-get update && ACCEPT_EULA=Y apt-get -f install -y unixodbc-dev
# install SQL Server tools
RUN apt-get update && ACCEPT_EULA=Y apt-get install -y mssql-tools
RUN echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc
RUN /bin/bash -c "source ~/.bashrc"
RUN apt-get update && apt-get install -y python3-pip
RUN apt-get update && apt-get install -y libpq-dev
# install additional requirements from a requirements.txt file
COPY requirements.txt /
RUN pip3 install --no-cache-dir -r /requirements.txt
COPY app/. /.
CMD python3 wsgi.py
Because I do not understand how Azure calls my Docker images, I kept trying different CMD versions, such as:
CMD ["python3", "wsgi.py", "runserver", "0.0.0.0:8400"]
But to no avail. I looked for solutions on the internet but really could not find any. Does anyone have insight into what I am doing wrong? Is it essential to create a .sh file? I am new to Linux, so any insights will help!
Thanks again!
I've experienced similar issues caused by a Docker image being built on one architecture (say AMD64) and then failing when run on a different architecture (ARM64).
Look into QEMU (tutorial, Docker reference).
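One quick way to confirm an architecture mismatch is a sketch like this, run both where the image was built and where it fails:
import platform
# Prints the CPU architecture, e.g. 'x86_64' (AMD64) or 'aarch64' (ARM64).
print(platform.machine())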
I am trying to run a Python job using a VM on Azure Batch. It's a simple script that adds a line to my Azure SQL Database. I downloaded the ODBC connection string straight from my Azure portal, yet I get this error. The strange thing is that I can run the script perfectly fine on my own machine. I've configured the VM to install the version of Python that I need and then execute my script - I'm at a complete loss. Any ideas?
cnxn = pyodbc.connect('Driver={ODBC Driver 13 for SQL Server};Server=tcp:svr-something.database.windows.net,fakeport232;Database=db-something-prod;Uid=something#svr-something;Pwd={fake_passwd};Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;')
Traceback (most recent call last):
  File "D:\batch\tasks\apppackages\batch_python_test1.02018-11-12-14-30\batch_python_test\python_test.py", line 12, in <module>
    r'Driver={ODBC Driver 13 for SQL Server};Server=tcp:svr-mydatabase.database.windows.net,'
pyodbc.InterfaceError: ('IM002', '[IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified (0) (SQLDriverConnect)')
Being new to Azure Batch, I didn't realise the virtual machines didn't come with ODBC drivers installed. I wrote a .bat file to install drivers on the node when the pool is allocated. Problem solved.
You have to install the ODBC driver on each compute node of the pool.
Put the commands below inside a shell script file, startup_tasks.sh:
sudo apt-get -y update;
export DEBIAN_FRONTEND=noninteractive;
sudo apt-get install -y python3-pip;
apt-get install -y --no-install-recommends apt-utils apt-transport-https;
curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - ;
curl https://packages.microsoft.com/config/debian/9/prod.list > /etc/apt/sources.list.d/mssql-release.list ;
sudo apt-get -y update ;
ACCEPT_EULA=Y apt-get -y install msodbcsql17 ;
ACCEPT_EULA=Y apt-get -y install mssql-tools ;
echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile ;
echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc ;
source ~/.bashrc && sudo apt-get install -y unixodbc unixodbc-dev ;
Give /bin/bash -c "startup_tasks.sh" as a start task in the Azure Batch pool. This will install ODBC Driver 17 on each node.
And then, in your connection string, change the ODBC driver version to 17.
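For example, a sketch of the question's connection string with only the driver version changed (server, database, port, and credentials remain placeholders, as in the question):
import pyodbc
# Only the driver name changes; everything else is as in the question.
cnxn = pyodbc.connect('Driver={ODBC Driver 17 for SQL Server};'
                      'Server=tcp:svr-something.database.windows.net,1433;'  # port is a placeholder
                      'Database=db-something-prod;'
                      'Uid=something#svr-something;Pwd=password;'
                      'Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;')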
I am unable to open the web cam or capture a video in an Ubuntu 16.04 docker container on a Mac OS Sierra 10.12.6 Host.
vid = cv2.VideoCapture(0)
where vid.isOpened() returns False and ret, img = vid.read() returns False, None.
I didn't just install OpenCV from pip; I compiled cv2 from source, and ffmpeg should be installed. Is the container not able to connect to the webcam device?
I tried running the docker with:
docker run -it --privileged --device=/dev/video0:/dev/video0 --entrypoint /bin/bash <ImageID>
My dockerfile is below:
#
# Ubuntu Dockerfile
#
# https://github.com/dockerfile/ubuntu
#
# Pull base image.
FROM ubuntu:16.04
# Install.
RUN \
sed -i 's/# \(.*multiverse$\)/\1/g' /etc/apt/sources.list && \
apt-get update && \
apt-get -y upgrade && \
apt-get install -y build-essential && \
apt-get install -y software-properties-common && \
apt-get install -y byobu curl git htop man unzip vim wget && \
apt-get install -y cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev libv4l-dev && \
apt-get install -y python-dev python-numpy libtbb2 libtbb-dev libjpeg8-dev libpng12-dev libtiff5-dev libjasper-dev libdc1394-22-dev && \
apt-get install -y libxvidcore-dev libx264-dev && \
apt-get install -y libgtk-3-dev && \
apt-get install -y libatlas-base-dev gfortran && \
apt-get install -y libopencv-dev && \
rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y python2.7-dev python3.5-dev
# Set environment variables.
ENV HOME /root
# Define working directory.
WORKDIR /root
# Get OpenCV
RUN wget -O opencv.zip https://github.com/Itseez/opencv/archive/3.1.0.zip
RUN unzip opencv.zip
RUN wget -O opencv_contrib.zip https://github.com/Itseez/opencv_contrib/archive/3.1.0.zip
RUN unzip opencv_contrib.zip
# GET PIP
RUN apt-get update && apt-get install -y python3-pip
RUN pip3 install numpy
# Build OpenCV
## NOTE: cd won't work by itself; it needs to be combined with the actual command to be performed
WORKDIR /root/opencv-3.1.0
RUN mkdir build
WORKDIR /root/opencv-3.1.0/build
RUN cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D INSTALL_C_EXAMPLES=OFF \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.1.0/modules \
-D BUILD_EXAMPLES=ON \
-D PYTHON_EXECUTABLE=/usr/bin/python3.5 \
-D PYTHON_PACKAGES_PATH=/usr/local/lib/python3.5/site-packages \
-D PYTHON_NUMPY_INCLUDE_DIR=/usr/local/lib/python3.5/dist-packages/numpy/core/include ..
RUN make -j4
RUN make install
RUN ldconfig
## RENAME BUILT cv2.so
WORKDIR /usr/local/lib/python3.5/dist-packages
RUN mv cv2.cpython-35m-x86_64-linux-gnu.so cv2.so
WORKDIR /usr/local/lib/python3.5/site-packages
RUN ln -s /usr/local/lib/python3.5/dist-packages/cv2.so cv2.so
## DELETE DIRECTORIES
# WORKDIR /root
# RUN rm -rf opencv-3.1.0 opencv_contrib-3.1.0 opencv.zip opencv_contrib.zip
# Define default command.
CMD ["bash"]
You cannot do this using Docker for Mac. The problem is that Docker runs HyperKit, which in turn is based on xhyve.
If you read the README of xhyve:
Notably absent are sound, USB, HID and any kind of graphics support. With a focus on server virtualization this is not strictly a requirement. bhyve may gain desktop virtualization capabilities in the future but this doesn't seem to be a priority.
So your docker container which is running inside the Hyperkit VM will never be able to access the device.
Your --device=/dev/video0:/dev/video0 just maps a device from the VM into the container, and the device may not be there in the VM at all.
So what are your alternatives? Instead of using Docker for Mac, use VirtualBox or VMware Fusion. Create an Ubuntu 16.04 (or any other supported OS) VM inside it, and share the webcam device with the VM using that VM's settings. Now your VM OS will have the device.
Inside your VM, install Docker and access the device.
I noticed the same problem: Docker running on macOS can't use the camera. I do not feel like installing a VM on my Mac, but I want the convenience of Docker, so I designed a technical plan that reads images on the Mac itself and uses the Docker container as a server.
The general idea:
Use Docker as a service: listen on a port, then receive and process images
Use macOS to read images and hand the image-analysis tasks to the Docker container
The specifics:
Read images from macOS, which is not difficult
Send images to the Docker container via inter-process communication (IPC), such as a socket (I don't understand this well yet; I will research it a little)
Process the images with the specific algorithms
Return the result to the macOS host via IPC
Handle the other easy jobs on the Mac
The method can be summarized as: macOS reads images, sends them over a socket to the Docker container for processing, and receives the results back over the same socket.
I haven't implemented it yet, but I found some materials that may help, listed below:
Machine Learning Deployment [blog|code|code1]
MMS
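For illustration only, here is a minimal sketch of the container-side half of that socket transport. Everything in it is hypothetical: the port, the 4-byte length prefix, and the assumption that each connection carries one encoded frame.
import socket
import struct

# Container side: receive one length-prefixed image per connection,
# run the (placeholder) analysis, and send a result back.
def serve(port=9000):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(('0.0.0.0', port))
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        with conn:
            # Assumes the sender writes a big-endian 4-byte length first.
            size = struct.unpack('>I', conn.recv(4))[0]
            data = b''
            while len(data) < size:
                data += conn.recv(size - len(data))
            # ... decode `data` (e.g. with cv2.imdecode) and run the algorithm here ...
            conn.sendall(b'ok')

serve()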