This is the first time I'm trying to use pyodbc to connect to an Azure SQL Database from inside a Docker image. My Dockerfile looks like this:
# the base image
FROM python:3
WORKDIR /usr/src/app
COPY requirements.txt ./
COPY music_trends.py ./
# install SQL Server drivers
RUN apt-get update
RUN apt-get update && ACCEPT_EULA=Y apt-get install -y msodbcsql unixodbc-dev
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD [ "python", "./music_trends.py" ]
This throws the error message:
E: Unable to locate package msodbcsql
The command '/bin/sh -c apt-get update && ACCEPT_EULA=Y apt-get install -y msodbcsql unixodbc-dev' returned a non-zero code: 100
I have found resolutions for ubuntu:16.04, such as https://github.com/Azure/azure-functions-docker/pull/45, and have also tried to run the msodbcsql.msi files from my Dockerfile.
Is there an equivalent fix for python:3?
python:3 is based on Debian, so refer to the Microsoft documentation for Debian:
You need to add the Microsoft apt source and change msodbcsql to msodbcsql17. For example:
Dockerfile:
FROM python:3
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - && \
curl https://packages.microsoft.com/config/debian/9/prod.list > /etc/apt/sources.list.d/mssql-release.list && \
apt-get update && \
ACCEPT_EULA=Y apt-get install msodbcsql17 unixodbc-dev -y
UPDATE 2019-07-26:
I didn't notice that the official python:3 image moved from Debian 9 to Debian 10 earlier this month.
Judging from the Microsoft guide above, it seems they currently only package every dependency correctly for the following:
#Debian 8
curl https://packages.microsoft.com/config/debian/8/prod.list > /etc/apt/sources.list.d/mssql-release.list
#Debian 9
curl https://packages.microsoft.com/config/debian/9/prod.list > /etc/apt/sources.list.d/mssql-release.list
Of course you can handle the Debian 10 dependencies yourself, such as the libcrypto.so version issue, but I still suggest you just use the Debian 9 python:3 image, since Microsoft has done everything for you. (I think they will update in the near future; Debian 10 was released only half a month ago, so I guess they need some time. BTW, https://packages.microsoft.com/config/debian/10/prod.list is there, but it does not have the msodbcsql17 package currently...)
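If you're not sure which Debian release your python:3 image is actually based on, a quick check from inside the container settles it (a trivial sketch, nothing image-specific):
# print the Debian release info the image is built on
with open("/etc/os-release") as fh:
    for line in fh:
        if line.startswith(("PRETTY_NAME", "VERSION_CODENAME")):
            print(line.strip())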
So the easiest way I can suggest is this: compared to the old Dockerfile, just change python:3 to python:3-stretch, and also install apt-transport-https, which is not installed by default in Debian 9. Details as follows:
Dockerfile:
FROM python:3-stretch
RUN apt-get update && \
apt-get install -y apt-transport-https && \
curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - && \
curl https://packages.microsoft.com/config/debian/9/prod.list > /etc/apt/sources.list.d/mssql-release.list && \
apt-get update && \
ACCEPT_EULA=Y apt-get install msodbcsql17 unixodbc-dev -y
.so check:
root@91addb538736:/# ldd /opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.3.so.1.1
linux-vdso.so.1 (0x00007ffd72bd0000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f4892696000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f489248e000)
libodbcinst.so.2 => /usr/lib/x86_64-linux-gnu/libodbcinst.so.2 (0x00007f4892273000)
libcrypto.so.1.0.2 => /usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.2 (0x00007f4891e0d000)
libkrb5.so.3 => /usr/lib/x86_64-linux-gnu/libkrb5.so.3 (0x00007f4891b33000)
libgssapi_krb5.so.2 => /usr/lib/x86_64-linux-gnu/libgssapi_krb5.so.2 (0x00007f48918e8000)
libssl.so.1.0.2 => /usr/lib/x86_64-linux-gnu/libssl.so.1.0.2 (0x00007f489167f000)
libuuid.so.1 => /lib/x86_64-linux-gnu/libuuid.so.1 (0x00007f489147a000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f48910f8000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f4890df4000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f4890bdd000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f48909c0000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f4890621000)
/lib64/ld-linux-x86-64.so.2 (0x00007f4892ca1000)
libk5crypto.so.3 => /usr/lib/x86_64-linux-gnu/libk5crypto.so.3 (0x00007f48903ee000)
libcom_err.so.2 => /lib/x86_64-linux-gnu/libcom_err.so.2 (0x00007f48901ea000)
libkrb5support.so.0 => /usr/lib/x86_64-linux-gnu/libkrb5support.so.0 (0x00007f488ffde000)
libkeyutils.so.1 => /lib/x86_64-linux-gnu/libkeyutils.so.1 (0x00007f488fdda000)
libresolv.so.2 => /lib/x86_64-linux-gnu/libresolv.so.2 (0x00007f488fbc3000)
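With the driver installed, connecting from Python is the easy part; here is a minimal pyodbc sketch (the server, database, and credentials below are placeholders, not values from the question):
import pyodbc

# placeholder connection details for an Azure SQL Database
conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your-server.database.windows.net;"
    "DATABASE=your-database;"
    "UID=your-user;"
    "PWD=your-password"
)
conn = pyodbc.connect(conn_str)
cursor = conn.cursor()
cursor.execute("SELECT 1")
print(cursor.fetchone())
conn.close()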
Related
I'm trying to run a project that uses Docker. When I run the command docker-compose -f local.yml up --build, one build step gets canceled and I'm presented with the error shown below.
I'm working on Windows 10, on a project that uses Python, Django, Vue, and Docker. I've already installed the requirements, and this error still appears.
=> CANCELED [ares_local_django 2/11] RUN apt-get update && apt-get install -y build-essential && apt-get install -y libpq-dev && apt-get install -y gett 18.6s
=> ERROR [ares_local_vue 2/6] RUN apk --no-cache add shadow rsync && mkdir /app 16.6s
=> CACHED [ares_production_postgres 2/4] COPY ./compose/production/postgres/maintenance /usr/local/bin/maintenance 0.0s
=> CACHED [ares_production_postgres 3/4] RUN chmod +x /usr/local/bin/maintenance/* 0.0s
=> CACHED [ares_production_postgres 4/4] RUN mv /usr/local/bin/maintenance/* /usr/local/bin && rmdir /usr/local/bin/maintenance 0.0s
=> [ares_production_postgres] exporting to image 1.5s
=> => exporting layers 0.0s
=> => writing image sha256:0fc325a12edf89cce8cfd203af7b9ac57125b703a0a48661c6a6cd1808370474 0.3s
=> => naming to docker.io/library/ares_production_postgres 0.0s
------
> [ares_local_vue 2/6] RUN apk --no-cache add shadow rsync && mkdir /app:
#0 6.000 fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/main/x86_64/APKINDEX.tar.gz
#0 11.04 fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/community/x86_64/APKINDEX.tar.gz
#0 11.04 WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.15/main: temporary error (try again later)
#0 16.04 WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.15/community: temporary error (try again later)
#0 16.04 ERROR: unable to select packages: 0.0s
#0 16.06 rsync (no such package):
#0 16.06 required by: world[rsync]
#0 16.06 shadow (no such package):
#0 16.06 required by: world[shadow]
------
failed to solve: executor failed running [/bin/sh -c apk --no-cache add shadow rsync && mkdir /app]: exit code: 2
I am using Windows on my local machine and I use Spyder 5.1.5 in Anaconda. Within Spyder, the Python version is 3.9.7 and the IPython version is 7.29.0.
When I run the code below on my local machine, I never run into the problem described below.
Problem:
I installed the same version of Python in Docker (Python 3.9.7, built from source, as shown in the Dockerfile below).
This is what the dataframe df_ts looks like:
0
event123 2019-04-01 09:30:00.635
event000 2019-04-01 09:32:56.417
df_ts.dtypes
0 datetime64[ns]
dtype: object
When I try to run the line below within Docker
df_ts.idxmin(axis=0).values[0]
I get the error below. I expect it to return the index of the minimum. Note that I never get this error when I run it on my local machine outside Docker.
I am starting to wonder if the Python 3.9.7 I installed in Docker is the same as the Python 3.9.7 in Spyder.
TypeError: reduction operation 'argmin' not allowed for this dtype
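For reference, pandas typically raises this error when the column dtype is object rather than datetime64. A minimal sketch with made-up data: calling idxmin while the column still holds raw strings triggers the TypeError, and an explicit conversion restores the expected behaviour:
import pandas as pd

# made-up timestamps mirroring the dataframe above
df_ts = pd.DataFrame(
    {0: ["2019-04-01 09:30:00.635", "2019-04-01 09:32:56.417"]},
    index=["event123", "event000"],
)
# with the column as object dtype, df_ts.idxmin(axis=0) raises
# "reduction operation 'argmin' not allowed for this dtype"
df_ts[0] = pd.to_datetime(df_ts[0])
print(df_ts.idxmin(axis=0).values[0])  # -> 'event123'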
This is what my Dockerfile looks like:
FROM centos:latest
RUN dnf --disablerepo '*' --enablerepo=extras swap centos-linux-repos centos-stream-repos -y && \
dnf distro-sync -y
RUN yum -y install epel-release && \
yum -y update && \
yum groupinstall "Development Tools" -y && \
yum install openssl-devel libffi-devel bzip2-devel -y
RUN yum install wget -y && \
wget https://www.python.org/ftp/python/3.9.7/Python-3.9.7.tgz && \
tar xvf Python-3.9.7.tgz && \
cd Python-3.9*/ && \
./configure --enable-optimizations && \
make altinstall
RUN ln -s /usr/local/bin/python3.9 /usr/local/bin/python && \
ln -s /usr/local/bin/pip3.9 /usr/local/bin/pip3
ARG USER=centos
ARG V_ENV=boto3venv
ARG VOLUME=/home/${USER}/app-src
It used to take ~5 minutes to build our Airflow deployment's Docker image, and all of a sudden it is taking over an hour. That said, we haven't had to rebuild the image in a few months, so I'm not sure when the issue arose...
It looks like the new pip resolver backtracking described in https://stackoverflow.com/questions/65122957/resolving-new-pip-backtracking-runtime-issue is the culprit. We're seeing a lot of warnings that look like this during the build:
=> => # Downloading google_cloud_os_login-2.3.1-py2.py3-none-any.whl (42 kB)
=> => # INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints
=> => # to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press
=> => # Ctrl + C.
=> => # Downloading google_cloud_os_login-2.2.1-py2.py3-none-any.whl (41 kB)
=> => # Downloading google_cloud_os_login-2.2.0-py2.py3-none-any.whl (44 kB)
Here is the line in our Dockerfile that is taking over an hour:
RUN set -ex \
&& buildDeps=' \
freetds-dev \
libkrb5-dev \
libsasl2-dev \
libssl-dev \
libffi-dev \
libpq-dev \
git \
' \
&& apt-get update -yqq \
&& apt-get install -yqq --no-install-recommends \
$buildDeps \
freetds-bin \
build-essential \
apt-utils \
curl \
rsync \
netcat \
locales \
&& sed -i 's/^# en_US.UTF-8 UTF-8$/en_US.UTF-8 UTF-8/g' /etc/locale.gen \
&& locale-gen \
&& update-locale LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 \
&& useradd -ms /bin/bash -d ${AIRFLOW_USER_HOME} airflow \
&& pip install -U pip setuptools wheel \
&& pip install pytz \
&& pip install pyOpenSSL \
&& pip install ndg-httpsclient \
&& pip install pyasn1 \
&& pip install apache-airflow[crypto,postgres,slack,kubernetes,gcp,docker,ssh]==${AIRFLOW_VERSION} \
&& if [ -n "${PYTHON_DEPS}" ]; then pip install ${PYTHON_DEPS}; fi \
&& apt-get purge --auto-remove -yqq $buildDeps \
&& apt-get autoremove -yqq --purge \
&& apt-get clean \
&& rm -rf \
/tmp/* \
/var/tmp/* \
/usr/share/man \
/usr/share/doc \
/usr/share/doc-base \
/var/lib/apt/lists/*
...
...
COPY requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
and here is our requirements.txt:
google-cloud-core==1.4.1
google-cloud-datastore==1.15.0
gcsfs==0.6.1
flatten-dict==0.4.2
bigquery_schema_generator==1.4
backoff==1.11.1
six==1.13.0
ndjson==0.3.1
pymongo==3.1.2
SQLAlchemy==1.3.15
pandas==1.3.1
numpy==1.21.1
billiard
I am actually quite confused by this specific warning message referring to google_cloud_os_login, because the build step that hangs is the line I shared starting with RUN set -ex, which doesn't appear to install anything Google Cloud related. We do install some Google Cloud packages via requirements.txt (-core, -datastore), but the COPY and RUN pip install lines for requirements.txt come much lower in our Dockerfile (as indicated by the ...). These warnings pop up for many libraries; however, google_cloud_os_login does seem to be a major culprit, taking a significant amount of time.
Where in the RUN set -ex ... command is it prompting to install google_cloud_os_login? And how can we pin a specific version of this library to speed up the build of this Docker image?
I think the various google packages you're seeing are dependencies of apache-airflow[gcp].
To speed up the install, the documentation recommends you use one of the constraint files they provide. They create tags named constraints-<version> that contain files you can pass to pip with --constraint.
For example, when trying to install 2.2.0, there is a constraints-2.2.0 tag. In this tag's file tree, you'll see files like constraints-3.8.txt, where 3.8 is the Python version I'm using.
pip install apache-airflow[gcp]==2.2.0 --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.2.0/constraints-3.8.txt"
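If you want to confirm which installed distribution actually drags google-cloud-os-login in, a small introspection sketch (Python 3.8+, run inside the built image) lists the dependents:
from importlib.metadata import distributions

# print every installed distribution that declares
# google-cloud-os-login as a dependency
for dist in distributions():
    for req in dist.requires or []:
        if "google-cloud-os-login" in req:
            print(dist.metadata["Name"], "->", req)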
I have a Python script that performs various ping checks. The same .py file runs perfectly when triggered locally, but when I run the job from Jenkins after building the Docker image, the error below comes up and the ping check always returns false.
sh: 1: ping: not found
Can someone help me understand why this error occurs? Do I need to update the Dockerfile, or is some Jenkins configuration required?
Update: adding my Dockerfile:
FROM ubuntu:18.04
ADD requirements.txt /requirements.txt
ADD jenkins_bot.py /jenkins_bot.py
ADD src /src
ADD config /config
ADD run_unit_tests.sh /run_unit_tests.sh
ADD utests /utests
WORKDIR /
RUN DEBIAN_FRONTEND=noninteractive && apt-get -qq update && \
apt-get -y install apt-utils && \
apt-get -qqy install ssh && \
apt-get -qqy install build-essential \
python3-dev \
python3-setuptools \
libfreetype6-dev \
libxft-dev && \
apt-get -qqy install python3-pip && \
pip3 install -r /requirements.txt
#RUN sh run_unit_tests.sh
Thanks in advance.
Ping is missing from the Ubuntu base image. You can update your Dockerfile as follows to install it:
FROM ubuntu:18.04
ADD requirements.txt /requirements.txt
ADD jenkins_bot.py /jenkins_bot.py
ADD src /src
ADD config /config
ADD run_unit_tests.sh /run_unit_tests.sh
ADD utests /utests
WORKDIR /
RUN DEBIAN_FRONTEND=noninteractive && apt-get -qq update && \
apt-get -y install apt-utils && \
apt-get install -y iputils-ping && \
apt-get -qqy install ssh && \
apt-get -qqy install build-essential \
python3-dev \
python3-setuptools \
libfreetype6-dev \
libxft-dev && \
apt-get -qqy install python3-pip && \
pip3 install -r /requirements.txt
#RUN sh run_unit_tests.sh
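On the Python side of the script, a defensive wrapper can make a missing ping binary fail loudly instead of the check silently returning false. This is a sketch; ping_ok is a hypothetical helper, not taken from the question's code:
import shutil
import subprocess

def ping_ok(host: str) -> bool:
    # fail loudly if the image lacks the binary (the "ping: not found" case)
    if shutil.which("ping") is None:
        raise RuntimeError("ping is not installed in this image")
    # one probe, 2-second timeout (iputils flags on Linux)
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

print(ping_ok("8.8.8.8"))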
When trying to install PyV8 on Ubuntu, I type the command:
python setup.py build
and it displays this error:
error: command 'c++' failed with exit status 1
Does anybody have a solution for this?
Here is what I have in my Dockerfile. The following is tested and runs in production on top of Debian Stretch. I recommend using exactly the PyV8 / V8 setup that I'm using - I've spent at least a week figuring out which combination doesn't lead to memory leaks. I also recommend reading through the discussion and the JSContext fix here and here.
In short, support for PyV8 is almost non-existent: either you use it just as a toy, you follow exactly this recipe, or you spend a significant amount of time and effort forking the repo and making it better. If starting fresh, I recommend using Node-JS instead and communicating with Python through some IPC method.
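To illustrate that Node-JS route: a rough sketch (assuming node is on PATH; a production setup would keep a long-lived node process and a proper IPC channel) that evaluates a JavaScript expression from Python:
import json
import subprocess

def eval_js(expression: str):
    # serialize the result as JSON on node's stdout and parse it back
    script = f"console.log(JSON.stringify({expression}))"
    out = subprocess.run(
        ["node", "-e", script], capture_output=True, text=True, check=True
    )
    return json.loads(out.stdout)

print(eval_js("1 + 2"))  # -> 3
Back to the PyV8 recipe itself: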
ENV MY_HOME /home/forge
ENV MY_LIB $MY_HOME/lib
# preparing dependencies for V8 and PyV8
ENV V8_HOME $MY_LIB/v8
RUN apt-get update && \
apt-get install -y libboost-thread-dev \
libboost-all-dev \
libboost-dev \
libboost-python-dev \
autoconf \
libtool \
systemtap \
scons
# compiling an older version of boost, required for this version of V8
RUN mkdir -p $MY_LIB/boost && cd $MY_LIB/boost && \
wget http://sourceforge.net/projects/boost/files/boost/1.54.0/boost_1_54_0.tar.gz && tar -xvzf boost_1_54_0.tar.gz && cd $MY_LIB/boost/boost_1_54_0 && \
./bootstrap.sh && \
./b2 install --prefix=/usr/local --with-python --with-thread && \
ldconfig && \
ldconfig /usr/local/lib
# preparing gcc 4.9 - anything newer will lead to errors with the V8 codebase
ENV CC "gcc-4.9"
ENV CPP "gcc-4.9 -E"
ENV CXX "g++-4.9"
ENV PATH_BEFORE_V8 "${MY_HOME}/bin:${PATH}"
ENV PATH "${MY_HOME}/bin:${PATH}"
RUN echo "deb http://ftp.us.debian.org/debian/ jessie main contrib non-free" >> /etc/apt/sources.list && \
echo "deb-src http://ftp.us.debian.org/debian/ jessie main contrib non-free" >> /etc/apt/sources.list && \
apt-get update && \
apt-get install -y gcc-4.9 g++-4.9 && \
mkdir -p ${MY_HOME}/bin && cd ${MY_HOME}/bin && \
ln -s /usr/bin/${CC} ${MY_HOME}/bin/gcc && \
ln -s /usr/bin/${CC} ${MY_HOME}/bin/x86_64-linux-gnu-gcc && \
ln -s /usr/bin/${CXX} ${MY_HOME}/bin/g++ && \
ln -s /usr/bin/${CXX} ${MY_HOME}/bin/x86_64-linux-gnu-g++
# compiling a specific version of V8 and PyV8, since older combos lead to memory leaks
RUN git clone https://github.com/muellermichel/V8_r10452.git $V8_HOME && \
git clone https://github.com/muellermichel/PyV8_r429.git $MY_LIB/pyv8 && \
cd $MY_LIB/pyv8 && python setup.py build && python setup.py install
# cleaning up
RUN PATH=${PATH_BEFORE_V8} && \
head -n -2 /etc/apt/sources.list > ${MY_HOME}/sources.list.temp && \
mv ${MY_HOME}/sources.list.temp /etc/apt/sources.list && \
apt-get update
ENV PATH "${PATH_BEFORE_V8}"
ENV CC ""
ENV CPP ""
ENV CXX ""
An older version that depends on the now-defunct Google Code and was made for Ubuntu 12.04:
export MY_LIB_FOLDER=[PUT-YOUR-DESIRED-INSTALL-PATH-HERE]
apt-get install -y libboost-thread-dev
apt-get install -y libboost-all-dev
apt-get install -y libboost-dev
apt-get install -y libboost-python-dev
apt-get install -y git-core autoconf libtool systemtap
apt-get install -y subversion
apt-get install -y wget
mkdir -p $MY_LIB_FOLDER/boost && cd $MY_LIB_FOLDER/boost && wget http://sourceforge.net/projects/boost/files/boost/1.54.0/boost_1_54_0.tar.gz && tar -xvzf boost_1_54_0.tar.gz
cd $MY_LIB_FOLDER/boost/boost_1_54_0 && ./bootstrap.sh && ./b2 install --prefix=/usr/local --with-python --with-thread && ldconfig && ldconfig /usr/local/lib
svn checkout -r10452 http://v8.googlecode.com/svn/trunk/ $MY_LIB_FOLDER/v8
export V8_HOME=$MY_LIB_FOLDER/v8
svn checkout -r429 http://pyv8.googlecode.com/svn/trunk/ $MY_LIB_FOLDER/pyv8
git clone https://github.com/taguchimail/pyv8-linux-x64.git $MY_LIB_FOLDER/pyv8-taguchimail && cd $MY_LIB_FOLDER/pyv8-taguchimail && git checkout origin/stable
apt-get install -y scons
cd $MY_LIB_FOLDER/pyv8 && patch -p0 < $MY_LIB_FOLDER/pyv8-taguchimail/patches/pyv8.patch && python setup.py build && python setup.py install
Had the same problem and this worked for me:
export LIB=~
apt-get install -y curl libboost-thread-dev libboost-all-dev libboost-dev libboost-python-dev git-core autoconf libtool
svn checkout -r19632 http://v8.googlecode.com/svn/trunk/ $LIB/v8
export V8_HOME=$LIB/v8
svn checkout http://pyv8.googlecode.com/svn/trunk/ $LIB/pyv8 && cd $LIB/pyv8 && python setup.py build && python setup.py install
Solution found in comments here - https://code.google.com/p/pyv8/wiki/HowToBuild
I'm using a Debian-based distro. Here's how I installed PyV8 (you'll need to have git installed):
cd /usr/share
sudo git clone https://github.com/emmetio/pyv8-binaries.git
cd pyv8-binaries/
sudo unzip pyv8-linux64.zip
sudo cp -a PyV8.py _PyV8.so /usr/bin
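Whichever install route you take, a quick smoke test using the standard PyV8 API confirms the binaries import and evaluate JavaScript:
# smoke test for the PyV8 install above
import PyV8

ctxt = PyV8.JSContext()
ctxt.enter()
print(ctxt.eval("1 + 2"))  # expect 3
ctxt.leave()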