Docker Compose available in Python without internet access

I need to use Docker Compose from Python (for the docker_service functionality in Ansible), but it's not possible to install it using pip because of a company network policy (the VM has no network access, only access to an RPM repository). I can, however, use a yum repository that contains Docker Compose.
What I tried is to install docker-compose (version 1.18.0) using yum. But Python does not recognize Docker Compose and suggests I use pip: "Unable to load docker-compose. Try pip install docker-compose"
Since in most cases I can solve this kind of issue with yum install python-<package>, I searched the web for a package called python-docker-compose, but with no result :(
minimalistic ansible script for test:
- name: Run using a project directory
  hosts: localhost
  gather_facts: no
  tasks:
    - docker_service:
        project_src: flask
        state: absent
Hope anyone can help.
SOLUTION:
After some digging around I solved the issue by doing a local download on a machine that has internet:
pip download -d /tmp/docker-compose-local docker-compose
Then I archived all the packages that were downloaded into the folder:
cd /tmp
tar -czvf docker-compose-python.tgz ./docker-compose-local
Since the total size of the archive is only slightly bigger than 1 MB, I added the file to the Ansible docker role.
In the docker role a local install is done:
cd /tmp
tar -xzvf docker-compose-python.tgz
pip install --no-index --find-links file:/tmp/docker-compose-local/ docker-compose
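In the role itself, the unpack-and-install steps might be expressed with stock Ansible modules. A minimal sketch (the task names are my own; the paths match the commands above):
- name: Unpack the bundled docker-compose packages
  unarchive:
    src: docker-compose-python.tgz
    dest: /tmp
- name: Install docker-compose from the local package directory
  pip:
    name: docker-compose
    extra_args: "--no-index --find-links=file:/tmp/docker-compose-local/"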

Use a virtual environment!
Unless you cannot do that either; it depends on whether the company policy is just NOT to write to everyone's Python (then you are fine) or not to use pip at all (even in your own environment).
If you CAN do that then:
virtualenv docker_compose -p python3
source docker_compose/bin/activate
pip install docker-compose
You get all this junk:
Collecting docker-compose
  Downloading https://files.pythonhosted.org/packages/67/03
Collecting docker-pycreds>=0.3.0 (from docker<4.0,>=3.4.1->docker-compose)
  Downloading https://files.pythonhosted.org/packages/ea/bf/7e70aeebc40407fbdb96fa9f79fc8e4722ea889a99378303e3bcc73f4ab5/docker_pycreds-0.3.0-py2.py3-none-any.whl
Building wheels for collected packages: PyYAML, docopt, texttable, dockerpty
Running setup.py bdist_wheel for PyYAML ... done
Stored in directory: /home/eserrasanz/.cache/pip/wheels/ad/da/0c/74eb680767247273e2cf2723482cb9c924fe70af57c334513f
Running setup.py bdist_wheel for docopt ... done
Stored in directory: /home/eserrasanz/.cache/pip/wheels/9b/04/dd/7daf4150b6d9b12949298737de9431a324d4b797ffd63f526e
Running setup.py bdist_wheel for texttable ... done
Stored in directory: /home/eserrasanz/.cache/pip/wheels/99/1e/2b/8452d3a48dad98632787556a0f2f90d56703b39cdf7d142dd1
Running setup.py bdist_wheel for dockerpty ... done
Stored in directory: /home/eserrasanz/.cache/pip/wheels/e5/1e/86/bd0a97a0907c6c654af654d5875d1d4383dd1f575f77cee4aa
Successfully installed PyYAML-3.13 cached-property-1.5.1 certifi-2018.8.24 chardet-3.0.4 docker-3.5.0 docker-compose-1.22.0 docker-pycreds-0.3.0 dockerpty-0.4.1 docopt-0.6.2 idna-2.6 jsonschema-2.6.0 requests-2.18.4 six-1.11.0 texttable-0.9.1 urllib3-1.22 websocket-client-0.53.0
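Assuming the install succeeded, a quick sanity check is possible: docker-compose ships a Python package named compose, so something like this should print the version (a hedged one-liner of mine, not from the original answer):
python -c "import compose; print(compose.__version__)"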

I think you should be able to install using curl from GitHub, assuming this is not blocked by your network policy. Link: https://docs.docker.com/compose/install/#install-compose.
sudo curl -L "https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
# Outputs: docker-compose version 1.22.0, build 1719ceb
Hope this helps.
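Since the question is about Ansible, the same download could also be written as a task. A sketch using the stock get_url module, with the platform hardcoded the way uname -s/-m resolves on a typical x86-64 Linux VM (in practice you would also pin a checksum):
- name: Install docker-compose 1.22.0 from GitHub
  get_url:
    url: https://github.com/docker/compose/releases/download/1.22.0/docker-compose-Linux-x86_64
    dest: /usr/local/bin/docker-compose
    mode: "0755"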

To complete E.Serra's answer, you can use Ansible to create the virtualenv, use pip and install docker-compose in a single task this way:
- pip:
    name: docker-compose
    virtualenv: /my_app/venv
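To make a later docker_service task pick up that environment, one option (my assumption, not part of the original answer) is to point the task at the venv's interpreter:
- docker_service:
    project_src: flask
    state: absent
  vars:
    ansible_python_interpreter: /my_app/venv/bin/python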

Related

DOCKER_BUILDKIT - passing a token secret during build time from github actions

I have a pip-installable package that was published using twine to an Azure DevOps artifact feed.
In my build image I need to download that package and install it with pip, so I need to authenticate against Azure Artifacts to get it. I am using artifacts-keyring to do so.
The pip-installable URL is something like this:
https://<token>@pkgs.dev.azure.com/<org>/<project>/_packaging/<feed>/pypi/simple/ --no-cache-dir <package-name>
I am trying to use Docker BuildKit to pass the token during build time.
I am mounting the secret and using it by getting its value with cat command substitution:
# syntax = docker/dockerfile:1.2
ENV ARTIFACTS_KEYRING_NONINTERACTIVE_MODE=true
RUN --mount=type=secret,id=azdevopstoken,dst=/run/secrets/azdevopstoken \
    pip install --upgrade pip && \
    pip install pyyaml numpy lxml artifacts-keyring && \
    pip install -i https://"$(cat /run/secrets/azdevopstoken)"@pkgs.dev.azure.com/<org>/<project>/_packaging/<feed>/pypi/simple/ --no-cache-dir <package-name>
and it works locally from my workstation when I run:
(My src file, where the token is stored in plain text, is azdevopstoken.txt in my local project directory.)
DOCKER_BUILDKIT=1 docker image build --secret id=azdevopstoken,src=./azdevopstoken.txt --progress plain . -t my-image:tag
Now I am running this build command from a GitHub Actions pipeline, and I got this output:
could not parse secrets: [id=azdevopstoken,src=./azdevopstoken.txt]: failed to stat ./azdevopstoken.txt: stat ./azdevopstoken.txt: no such file or directory
Error: Process completed with exit code 1.
This is expected, since I am not uploading the azdevopstoken.txt file: I don't want it in my repository, because it contains my token in plain text.
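One common pattern (my suggestion, not something from the thread) is to materialize the file from a GitHub Actions secret at job time, so it never exists in the repository:
# hypothetical workflow steps; the secret name matches the one used below
- name: Write BuildKit secret file
  run: printf '%s' "${{ secrets.AZ_DEVOPS_TOKEN }}" > azdevopstoken.txt
- name: Build image
  run: DOCKER_BUILDKIT=1 docker image build --secret id=azdevopstoken,src=./azdevopstoken.txt --progress plain . -t my-image:tag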
Reading carefully here, I see there is a workaround to encrypt secrets; perhaps that could be a good way to use BuildKit from my GitHub Actions pipeline, but it is an additional step in my workflow, so I am not sure whether to follow this option or not.
Foremost because I am already passing the secret the old way, using the --build-arg flag at build time:
ARG AZ_DEVOPS_TOKEN
ENV AZ_DEVOPS_TOKEN=$AZ_DEVOPS_TOKEN
RUN pip install --upgrade pip && \
pip install pyyaml numpy lxml artifacts-keyring && \
pip install -i https://$AZ_DEVOPS_TOKEN@pkgs.dev.azure.com/ORG/PROJECT/_packaging/feed/pypi/simple/ --no-cache-dir PACKAGE-NAME
My docker build command in GitHub Actions looks like this:
docker image build --build-arg AZ_DEVOPS_TOKEN="${{ secrets.AZ_DEVOPS_TOKEN }}" . -t containerregistry.azurecr.io/my-image:preview
It works perfectly. The thing is, I have heard --build-arg is not a safe way to pass sensitive information. Despite that, I ran docker history after this command, and I couldn't see the secret exposed or anything similar.
> docker history af-fem-uploader:preview-buildarg
IMAGE CREATED CREATED BY SIZE COMMENT
2eff105408c9 34 seconds ago RUN /bin/sh -c pip install pytest && pytest … 5MB buildkit.dockerfile.v0
<missing> 38 seconds ago WORKDIR /home/site/wwwroot/HttpUploadTrigger/ 0B buildkit.dockerfile.v0
<missing> 38 seconds ago ADD . /home/site/wwwroot # buildkit 1.9MB buildkit.dockerfile.v0
So what is the benefit of passing secrets via BuildKit, in order not to expose them, if I have to upload the file containing the secret to my repository?
Perhaps I am missing something.

Not able to install any python package in docker container

I am trying to create a Docker image with Ubuntu 16.04 as base. I want to install a few Python packages like pandas, flask etc. I have kept all the packages in "requirements.txt". But when I try to build the image, I get:
Could not find a version that satisfies the requirement requests (from -r requirements.txt (line 1)) (from versions: )
No matching distribution found for requests (from -r requirements.txt (line 1))
Basically, I have not pinned any version in "requirements.txt"; I expect pip to take the latest available, compatible version of each package. But I am getting the same issue for every package.
My Dockerfile is as follows.
FROM ubuntu:16.04
RUN apt-get update -y && \
apt-get install -y python3-pip python3-dev build-essential cmake pkg-config libx11-dev libatlas-base-dev
# We copy just the requirements.txt first to leverage Docker cache
COPY ./requirements.txt /testing/requirements.txt
WORKDIR /testing
RUN pip3 install -r requirements.txt
and requirements.txt is:
pandas
requests
PyMySQL
Flask
Flask-Cors
Pillow
face-recognition
Flask-SocketIO
Where am I going wrong? Can anybody help?
I too ran into the same situation. I observed that pip was looking for network access from within Docker; the build was effectively running without a network, so pip could not locate the packages. In these situations either a
No matching distribution found
or sometimes a
Retrying ...
error may occur.
I used the --network option in the docker build command, as below, to overcome this error; it makes the build use the host network to download the required packages.
docker build --network=host -t tracker:latest .
Try using this:
RUN python3.6 -m pip install --upgrade pip \
&& python3.6 -m pip install -r requirements.txt
By using it this way, you specify the Python version in which to search for those packages.
Change it to python3.7 if you wish to use the 3.7 version.
I suggest using the official python image instead. As a result, your Dockerfile will now become:
FROM python:3
WORKDIR /testing
COPY ./requirements.txt /testing/requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
... etc ...
Now re: Angular/Node. You have two options from here: 1) install Angular/Node on the Python image; or 2) use Docker's multi-stage build feature to build the Angular and Python-specific images before merging them together. Option 2 is recommended, but it would take some work. It would probably look like this:
FROM node:8 as node
# Angular-specific build
FROM python:3 as python
# Python-specific build
# Then copy your data from the Angular image to the Python one:
COPY --from=node /usr/src/app/dist/angular-docker /usr/src/app

How is the content in "XDG_CACHE_HOME=/cache" different from the content in "-d /build"?

The Dockerfile below has the environment variable XDG_CACHE_HOME=/cache,
which allows the command
pip install -r requirements_test.txt
to utilise a local cache instead of downloading from the network.
But the Dockerfile below also has a /build folder.
So, I would like to understand
whether the purpose (content) of the /build folder is different from that of the /cache folder.
Dockerfile
FROM useraccount/todobackend-base:latest
MAINTAINER Development team <devteam@abc.com>
RUN apt-get update && \
# Development image should have access to source code and
# be able to compile python package dependencies that installs from source distribution
# python-dev has core development libraries required to build & compile python application from source
apt-get install -qy python-dev libmysqlclient-dev
# Activate virtual environment and install wheel support
# Python wheels are application package artifacts
RUN . /appenv/bin/activate && \
pip install wheel --upgrade
# PIP environment variables (NOTE: must be set after installing wheel)
# Configure docker image to output wheels to folder called /wheelhouse
# PIP cache location using XDG_CACHE_HOME to improve performance during test/build/release operation
ENV WHEELHOUSE=/wheelhouse PIP_WHEEL_DIR=/wheelhouse PIP_FIND_LINKS=/wheelhouse XDG_CACHE_HOME=/cache
# OUTPUT: Build artifacts (wheels) are output here
# Read more - https://www.projectatomic.io/docs/docker-image-author-guidance/
VOLUME /wheelhouse
# OUTPUT: Build cache
VOLUME /build
# OUTPUT: Test reports are output here
VOLUME /reports
# Add test entrypoint script
COPY scripts/test.sh /usr/local/bin/test.sh
RUN chmod +x /usr/local/bin/test.sh
# Set defaults for entrypoint and command string
ENTRYPOINT ["test.sh"]
CMD ["python", "manage.py", "test", "--noinput"]
# Add application source
COPY src /application
WORKDIR /application
Below is the docker-compose.yml file
test: # Unit & integration testing
  build: ../../
  dockerfile: docker/dev/Dockerfile
  volumes_from:
    - cache
  links:
    - db
  environment:
    DJANGO_SETTINGS_MODULE: todobackend.settings.test
    MYSQL_HOST: db
    MYSQL_USER: root
    MYSQL_PASSWORD: password
    TEST_OUTPUT_DIR: /reports
builder: # Generate python artifacts
  build: ../../
  dockerfile: docker/dev/Dockerfile
  volumes:
    - ../../target:/wheelhouse
  volumes_from:
    - cache
  entrypoint: "entrypoint.sh"
  command: ["pip", "wheel", "--no-index", "-f", "/build", "."]
db:
  image: mysql:5.6
  hostname: db
  expose:
    - "3386"
  environment:
    MYSQL_ROOT_PASSWORD: password
cache: # volume container
  build: ../../
  dockerfile: docker/dev/Dockerfile
  volumes:
    - /tmp/cache:/cache
    - /build
  entrypoint: "true"
The volumes below
  volumes:
    - /tmp/cache:/cache
    - /build
are created in the volume container (cache).
entrypoint file test.sh:
#!/bin/bash
# Activate virtual environment
. /appenv/bin/activate
# Download requirements to build cache
pip download -d /build -r requirements_test.txt --no-input
# Install application test requirements
# -r allows the requirements to be mentioned in a txt file
# pip install -r requirements_test.txt
pip install --no-index -f /build -r requirements_test.txt
# Run test.sh arguments
exec "$@"
Edit:
pip download -d /build -r requirements_test.txt --no-input stores the downloaded files in the /build folder.
pip install -r requirements_test.txt is picking up dependencies from the /build folder.
Neither of these two commands appears to use the /cache folder.
1) So, why do we need the /cache folder? The pip install command refers to /build.
2) In the test.sh file, from the aspect of using the /build vs /cache content, how is
pip install --no-index -f /build -r requirements_test.txt
different from the
pip install -r requirements_test.txt
command?
1) They might be the same, but might not be as well. As I understand it, /cache uses your host's cache (/tmp/cache is on the host); the container then builds the cache (using the host cache) and stores the result in /build, which points to /var/lib/docker/volumes/hjfhjksahfjksa on your host.
So they might be the same at some point, but not always.
2) This container needs the cache stored in /build, so you need to use the -f flag to let pip know where it's located.
Python has a couple of different formats for packages. They're typically distributed as source code, which can run anywhere Python runs, but occasionally have C (or FORTRAN!) extensions that require an external compiler to build. The current recommended format is a wheel, which can be specific to a particular OS and specific Python build options, but doesn't depend on anything at the OS level outside of Python. The Python Packaging User Guide goes into a lot more detail on this.
The build volume contains .whl files for your application; the wheelhouse volume contains .whl files for other Python packages; the cache volume contains .tar.gz or .whl files that get downloaded from PyPI. The cache volume is only consulted when downloading things; the build and wheelhouse volumes are used to install code without needing to try to download at all.
The pip --no-index option says "don't contact public PyPI"; -f /build says "use artifacts located here". The environment variables mentioning /wheelhouse also have an effect. These combine to let you install packages using only what you've already built.
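Stripped of the Compose machinery, the pattern this describes is a two-step pip workflow, roughly (a sketch; the directory name is arbitrary):
# Step 1, with network access: download/build everything as wheels
pip wheel --wheel-dir=/wheelhouse -r requirements.txt
# Step 2, without network access: install using only those wheels
pip install --no-index --find-links=/wheelhouse -r requirements.txt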
The Docker Compose setup is a pretty long-winded way to build your application as wheels, and then make it available to a runtime image that doesn't have a toolchain.
The cache container does literally nothing. It has the two directories you show: /cache is a host-mounted directory, and /build is an anonymous volume. Other containers have volumes_from: cache to reuse these volumes. (Style-wise, adding named volumes: to the docker-compose.yml is almost definitely better.)
The builder container only runs pip wheel. It mounts an additional directory, ./target from the point of view of the Dockerfile, on to /wheelhouse. The pip install documentation discusses how caching works: if it downloads files they go into $XDG_CACHE_HOME (the /cache volume directory), and if it builds wheels they go into the /wheelhouse volume directory. The output of pip wheel will go into the /build volume directory.
The test container, at startup time, downloads some additional packages and puts them in the build volume. Then it does pip install --no-index to install packages only using what's in the build and wheelhouse volumes, without calling out to PyPI at all.
This setup is pretty complicated for what it does. Some general guidelines I'd suggest here:
Prefer named volumes to data-volume containers. (Very early versions of Docker didn't have named volumes, but anything running on a modern Linux distribution will.)
Don't establish a virtual environment inside your image; just install directly into the system Python tree.
Install software at image build time (in the Dockerfile), not at image startup time (in an entrypoint script).
Don't declare VOLUME in a Dockerfile pretty much ever; it's not necessary for this setup and when it has effects it's usually more confusing than helpful.
A more typical setup would be to build all of this, in one shot, in a multi-stage build. The one downside of this is that downloads aren't cached across builds: if your list of requirements doesn't change then Docker will reuse it as a set, but if you add or remove any single thing, Docker will repeat the pip command to download the whole set.
This would look roughly like (not really tested):
# First stage: build and download wheels
FROM python:3 AS build
# Bootstrap some Python dependencies.
RUN pip install --upgrade pip \
&& pip install wheel
# This stage can need some extra host dependencies, like
# compilers and C libraries.
RUN apt-get update && \
apt-get install -qy python-dev libmysqlclient-dev
# Create a directory to hold built wheels.
RUN mkdir /wheel
# Install the application's dependencies (only).
WORKDIR /app
COPY requirements.txt .
RUN pip wheel --wheel-dir=/wheel -r requirements.txt \
&& pip install --no-index --find-links=/wheel -r requirements.txt
# Build a wheel out of the application.
COPY . .
RUN pip wheel --wheel-dir=/wheel --no-index --find-links=/wheel .
# Second stage: actually run the application.
FROM python:3
# Bootstrap some Python dependencies.
RUN pip install --upgrade pip \
&& pip install wheel
# Get the wheels from the first stage.
RUN mkdir /wheel
COPY --from=build /wheel /wheel
# Install them.
RUN pip install --no-index --find-links=/wheel /wheel/*.whl
# Standard application metadata.
# The application should be declared as entry_points in setup.py.
EXPOSE 3000
CMD ["the_application"]

Docker image with python3, chromedriver, chrome & selenium

My objective is to scrape the web with Selenium driven by Python from a docker container.
I've looked around for and not found a docker image with all of the following installed:
Python 3
ChromeDriver
Chrome
Selenium
Is anyone able to link me to a docker image with all of these installed and working together?
Perhaps building my own isn't as difficult as I think, but it has eluded me thus far.
Any and all advice appreciated.
Try https://github.com/SeleniumHQ/docker-selenium.
It has python installed:
$ docker run selenium/standalone-chrome python3 --version
Python 3.5.2
The instructions indicate you start it with
docker run -d -p 4444:4444 --shm-size=2g selenium/standalone-chrome
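With the container running that way, a script on the host can drive the browser through the Selenium server. A minimal sketch against the Selenium 3 remote API (host, port and the example URL are assumptions on my part):
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

# Connect to the Selenium server published on port 4444
driver = webdriver.Remote(
    command_executor="http://localhost:4444/wd/hub",
    desired_capabilities=DesiredCapabilities.CHROME,
)
driver.get("https://example.com")
print(driver.title)
driver.quit()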
Edit:
To allow selenium to run through python it appears you need to install the packages. Create this Dockerfile:
FROM selenium/standalone-chrome
USER root
RUN wget https://bootstrap.pypa.io/get-pip.py
RUN python3 get-pip.py
RUN python3 -m pip install selenium
Then you could run it with
docker build . -t selenium-chrome && \
docker run -it selenium-chrome python3
The advantage compared to the plain python docker image is that you won't need to install the chromedriver itself since it comes from selenium/standalone-chrome.
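Inside that image, a minimal smoke test could look like this (my sketch; the headless flags are commonly needed when Chrome runs as root in a container):
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless")    # no display inside the container
options.add_argument("--no-sandbox")  # often required when running as root
driver = webdriver.Chrome(options=options)
driver.get("https://example.com")
print(driver.title)
driver.quit()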
I like Harald's solution.
However, as of 2021, my environment needed some modifications.
Docker version 20.10.5, build 55c4c88
I changed the Dockerfile as follows.
FROM selenium/standalone-chrome
USER root
RUN apt-get update && apt-get install python3-distutils -y
RUN wget https://bootstrap.pypa.io/get-pip.py
RUN python3 get-pip.py
RUN python3 -m pip install selenium
https://hub.docker.com/r/joyzoursky/python-chromedriver/
It uses python3 as the base image and installs chromedriver, chrome and selenium (as a pip package) on top. I used the alpine-based python3 version myself, as the image size is smaller.
$ cd [your working directory]
$ docker run -it -v $(pwd):/usr/workspace joyzoursky/python-chromedriver:3.6-alpine3.7-selenium sh
/ # cd /usr/workspace
See if the images suit your case; you could pip install selenium together with other packages from a requirements.txt file to build your own image, or take the project's Dockerfiles as a reference.
If you want to pip install more packages apart from selenium, you could build your own image as in this example:
First, in your working directory, you may have a requirements.txt storing the package versions you want to install:
selenium==3.8.0
requests==2.18.4
urllib3==1.22
... (your list of packages)
Then create the Dockerfile in the same directory like this:
FROM joyzoursky/python-chromedriver:3.6-alpine3.7
RUN mkdir packages
ADD requirements.txt packages
RUN pip install -r packages/requirements.txt
Then build the image:
docker build -t yourimage .
This differs from the official Selenium one in that selenium is installed as a pip package on a Python base image. However, it is hosted by an individual, so it may carry a higher risk of becoming unmaintained.

Install cx_Oracle for Python

I am on Debian 5, and I've been trying to install the cx_Oracle module for Python without any success. First, I installed oracle-xe-client and its dependency (following the tutorial linked here).
Then, I used the scripts in /usr/lib/oracle/xe/app/oracle/product/10.2.0/client/bin to populate environment variables such as PATH, ORACLE_HOME and NLS_LANG.
Once this was completed, I tried to run:
sudo easy_install cx_oracle
But I keep getting the following error:
Searching for cx-oracle
Reading http://pypi.python.org/simple/cx_oracle/
Reading http://cx-oracle.sourceforge.net
Reading http://starship.python.net/crew/atuining
Best match: cx-Oracle 5.0.4
Downloading http://prdownloads.sourceforge.net/cx-oracle/cx_Oracle-5.0.4.tar.gz?download
Processing cx_Oracle-5.0.4.tar.gz
Running cx_Oracle-5.0.4/setup.py -q bdist_egg --dist-dir /tmp/easy_install-xsylvG/cx_Oracle-5.0.4/egg-dist-tmp-8KoqIx
error: cannot locate an Oracle software installation
Any idea what I missed here?
An alternate way that doesn't require RPMs. You need to be root.
Dependencies
Install the following packages:
apt-get install python-dev build-essential libaio1
Download Instant Client for Linux x86-64
Download the Basic and SDK zip files from Oracle's download site.
Extract the zip files
Unzip the downloaded zip files to some directory, I'm using:
/opt/ora/
Add environment variables
Create a file in /etc/profile.d/oracle.sh that includes
export ORACLE_HOME=/opt/ora/instantclient_11_2
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME
Create a file in /etc/ld.so.conf.d/oracle.conf that includes
/opt/ora/instantclient_11_2
Execute the following command
sudo ldconfig
Note: you may need to reboot to apply settings
Create a symlink
cd $ORACLE_HOME
ln -s libclntsh.so.11.1 libclntsh.so
Install cx_Oracle python package
You may install using pip
pip install cx_Oracle
Or install manually
Download the cx_Oracle source zip that corresponds with your Python and Oracle version. Then expand the archive, and run from the extracted directory:
python setup.py build
python setup.py install
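Either way, a quick check that the module loads (my own verification step, not part of the original instructions):
python -c "import cx_Oracle; print(cx_Oracle.version)"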
I recommend that you grab the rpm files and install them with alien. That way, you can later run apt-get purge on them when they are no longer needed.
In my case, the only env variable I needed is LD_LIBRARY_PATH, so I did:
echo export LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client/lib >> ~/.bashrc
source ~/.bashrc
I suppose in your case that path variable will be /usr/lib/oracle/xe/app/oracle/product/10.2.0/client/lib.
The following worked for me, both on Mac and Linux. This one command should download the needed additional files, without the need to set environment variables:
python -m pip install cx_Oracle --pre
Note: the --pre option is for development and pre-release versions of the Oracle driver. As of this posting, it was grabbing cx_Oracle-6.0rc1.tar.gz, which was needed. (I'm using Python 3.6.)
Thanks Burhan Khalid, I overlooked your "You need to be root" note, but found the way to do it when you are not root, here:
At point 7 you need to use:
sudo env ORACLE_HOME=$ORACLE_HOME python setup.py install
Or
sudo env ORACLE_HOME=/path/to/instantclient python setup.py install
Thanks Burhan Khalid. Your advice to make a soft link made my installation finally work.
To recap:
You need both the basic version and the SDK version of instant client
You need to set both LD_LIBRARY_PATH and ORACLE_HOME
You need to create a soft link (ln -s libclntsh.so.12.1 libclntsh.so in my case)
None of this is documented anywhere, which is quite unbelievable and quite frustrating. I spent over 3 hours yesterday with failed builds because I didn't know to create a soft link.
I think it may be that sudo has no access to ORACLE_HOME. You can do it like this:
sudo visudo
and add the line
Defaults env_keep += "ORACLE_HOME"
then
sudo python setup.py build install
Alternatively, you can install the cx_Oracle module without pip using the following steps:
Download the source from https://pypi.python.org/pypi/cx_Oracle
[cx_Oracle-6.1.tar.gz]
Extract the tar using the following commands (Linux)
gunzip cx_Oracle-6.1.tar.gz
tar -xf cx_Oracle-6.1.tar
cd cx_Oracle-6.1
Build the module
python setup.py build
Install the module
python setup.py install
This just worked for me on Ubuntu 16:
Download 'instantclient-basic-linux.x64-12.2.0.1.0.zip' and 'instantclient-sdk-linux.x64-12.2.0.1.0.zip' from the Oracle web site, then run the following script (you can do it piece by piece; I did it as ROOT):
apt-get install -y python-dev build-essential libaio1
mkdir -p /opt/ora/
cd /opt/ora/
## Now put 2 ZIP files:
# ('instantclient-basic-linux.x64-12.2.0.1.0.zip' and 'instantclient-sdk-linux.x64-12.2.0.1.0.zip')
# into /opt/ora/ and unzip them -> both will be unzipped into 1 directory: /opt/ora/instantclient_12_2
rm -rf /etc/profile.d/oracle.sh
echo "export ORACLE_HOME=/opt/ora/instantclient_12_2" >> /etc/profile.d/oracle.sh
echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME" >> /etc/profile.d/oracle.sh
chmod 777 /etc/profile.d/oracle.sh
source /etc/profile.d/oracle.sh
env | grep -i ora # This will check current ENVIRONMENT settings for Oracle
rm -rf /etc/ld.so.conf.d/oracle.conf
echo "/opt/ora/instantclient_12_2" >> /etc/ld.so.conf.d/oracle.conf
ldconfig
cd $ORACLE_HOME
ls -lrth libclntsh* # This will show which version of 'libclntsh' you have... --> needed for following line:
ln -s libclntsh.so.12.1 libclntsh.so
pip install cx_Oracle # Maybe not needed but I did it anyway (only pip install cx_Oracle without above steps did not work for me...)
Your python scripts are now ready to use 'cx_Oracle'... Enjoy!
This worked for me:
python -m pip install cx_Oracle --upgrade
For details, refer to the Oracle quick start guide:
https://cx-oracle.readthedocs.io/en/latest/installation.html#quick-start-cx-oracle-installation
If you are trying to install on a Mac, just unzip the Oracle client you downloaded and place it into the folder where your Python scripts live; it will start working. This saves a lot of trouble setting up environment variables.
It worked for me. Hope this helps.
Try to reinstall it with the following code:
!pip install --proxy http://username:windowspwd@10.200.72.2:8080 --upgrade --force-reinstall cx_Oracle
If you require to install a specific version of cx_Oracle, like 7.3 which was the last version with support for Python 2, you can do the following:
python -m pip install cx_Oracle==7.3
