I am trying to use the pjsua module for Python on Ubuntu 16.04.
When I call AccountConfig, it fails with the following error message:
>>> import pjsua
>>> t=pjsua.AccountConfig()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pjsua.py", line 802, in __init__
self._cvt_from_pjsua(default)
File "pjsua.py", line 859, in _cvt_from_pjsua
for cred in cfg.cred_info:
MemoryError
I compiled pjsip with the following:
sudo apt-get install build-essential python-dev libpjsua2
wget http://www.pjsip.org/release/2.7.2/pjproject-2.7.2.tar.bz2
sudo rm -fr pjproject-2.7.2
tar -xf pjproject-2.7.2.tar.bz2 && cd pjproject-2.7.2/
export CFLAGS="$CFLAGS -fPIC"
./configure --enable-shared --disable-sound && make dep && make
cd pjsip-apps/src/python/
sudo python setup.py install
I would appreciate any idea of what I am doing wrong.
Compiling with the following solved the issue:
sudo apt-get update
sudo apt-get -y install build-essential python-dev libpjsua2 libssl-dev libasound2-dev
wget http://www.pjsip.org/release/2.7.2/pjproject-2.7.2.tar.bz2
tar -xf pjproject-2.7.2.tar.bz2 && cd pjproject-2.7.2/
export CFLAGS="$CFLAGS -fPIC"
./configure && make dep && make
cd pjsip-apps/src/python/
sudo python setup.py install
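With that build in place, a quick sanity check from the interpreter (a minimal sketch that simply repeats the failing call from the question; it assumes the freshly installed pjsua module is the one on your Python path):
# repeat the call that previously raised MemoryError
import pjsua

cfg = pjsua.AccountConfig()   # should now construct without error
print(cfg)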
I'm using the Cloudera Hive ODBC driver in my code and I'm trying to containerize the app.
Below is my Dockerfile:
FROM ubuntu:18.04
FROM continuumio/anaconda3
FROM node:10
RUN conda update -n base -c defaults conda
RUN conda create -n env python=3.7
RUN echo "conda activate env" > ~/.bashrc
ENV PATH /opt/conda/envs/env/bin:$PATH
RUN apt-get update && apt-get install -y \
curl apt-utils apt-transport-https debconf-utils gcc build-essential \
&& rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y \
python-pip python-dev python-setuptools \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
RUN pip install --upgrade pip
RUN pip install pyyaml pandas numpy pymysql sqlalchemy schedule tornado
RUN apt-get update && apt-get install -y --no-install-recommends git unzip unixodbc unixodbc-dev
RUN conda install -c conda-forge turbodbc=3.1.1
RUN apt-get update && apt-get install -y gettext nano vim -y
RUN yarn install --modules-folder ./static
WORKDIR /app
COPY entry.sh /usr/local/bin/
COPY . /app/
ENV SSH_PASSWD "root:Docker!"
RUN apt-get update \
&& apt-get install -y --no-install-recommends dialog \
&& apt-get update \
&& apt-get install -y --no-install-recommends openssh-server \
&& echo "$SSH_PASSWD" | chpasswd
COPY sshd_config /etc/ssh/
COPY entry.sh /usr/local/bin/
RUN chmod u+x /usr/local/bin/entry.sh
EXPOSE 5000 2222 22 80 8000
CMD ["entry.sh"]
The image builds successfully, but when I run it I see the error below:
Traceback (most recent call last):
File "app.py", line 14, in <module>
from abc_scheduler import scheduler_main
File "/app/abc_scheduler.py", line 5, in <module>
from methods import Methods
File "/app/methods.py", line 10, in <module>
from utils import *
File "/app/utils.py", line 2, in <module>
from turbodbc import connect, make_options
ModuleNotFoundError: No module named 'turbodbc'
I have tried many other ODBC drivers inside my Dockerfile, but no luck. Any help would be great.
As suggested by @DavidMaze, I managed to create a working Dockerfile, shown below:
FROM ubuntu:latest
FROM continuumio/anaconda3
FROM node:10
RUN conda update -n base -c defaults conda
RUN conda create -n env python=3.7
RUN echo 'conda init bash' >/.bashrc
RUN echo "conda activate env" > ~/.bashrc
ENV PATH /opt/conda/envs/env/bin:$PATH
RUN apt-get update && apt-get install -y \
curl apt-utils apt-transport-https debconf-utils gcc build-essential \
&& rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y \
python-pip python-dev python-setuptools \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
RUN pip install --upgrade pip
# ==================TURBODBC========================
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get dist-upgrade -y
RUN apt-get install -y alien # optional
COPY ClouderaHiveODBC-2.6.1.1001-1.x86_64.rpm /opt/cloudera/
RUN alien /opt/cloudera/ClouderaHiveODBC-2.6.1.1001-1.x86_64.rpm
RUN dpkg -i clouderahiveodbc_2.6.1.1001-2_amd64.deb
# ==================END=============================
RUN conda install --name env -c conda-forge turbodbc=4.1.1 tornado=6.0.4 pyyaml pymysql schedule sqlalchemy pyarrow numpy=1.19.3 \
    pandas=1.1.4 pybind11
COPY odbc.ini /etc/
RUN apt-get update && apt-get install -y gettext nano vim
RUN yarn install --modules-folder ./static
WORKDIR /app
COPY . /app/
ENV SSH_PASSWD "root:Docker!"
RUN apt-get update \
&& apt-get install -y --no-install-recommends dialog \
&& apt-get update \
&& apt-get install -y --no-install-recommends openssh-server \
&& echo "$SSH_PASSWD" | chpasswd
COPY sshd_config /etc/ssh/
COPY entry.sh /usr/local/bin/
RUN chmod u+x /usr/local/bin/entry.sh
EXPOSE 9988 2222 22 80 8000
CMD ["entry.sh"]
Keep a copy of ClouderaHiveODBC-2.6.1.1001-1.x86_64.rpm in the current directory.
Keep the files below as well:
odbc.ini - contains the DB information (a connection sketch using this DSN follows after the sshd_config listing)
entry.sh - a shell script whose command is python app.py
sshd_config - a file without any extension, with the contents shown below:
Port 2222
ListenAddress 0.0.0.0
LoginGraceTime 180
X11Forwarding yes
Ciphers aes128-cbc,3des-cbc,aes256-cbc
MACs hmac-sha1,hmac-sha1-96
StrictModes yes
SyslogFacility DAEMON
PrintMotd no
IgnoreRhosts no
#deprecated option
#RhostsAuthentication no
RhostsRSAAuthentication yes
RSAAuthentication no
PasswordAuthentication yes
PermitEmptyPasswords no
PermitRootLogin yes
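For reference, here is a minimal sketch of how the application code can talk to Hive through turbodbc once the image is built; the DSN name "Hive" is hypothetical and must match whatever odbc.ini actually defines:
# minimal turbodbc usage sketch; "Hive" is a hypothetical DSN name taken from odbc.ini
from turbodbc import connect, make_options

options = make_options(autocommit=True)
connection = connect(dsn="Hive", turbodbc_options=options)

cursor = connection.cursor()
cursor.execute("SELECT 1")
print(cursor.fetchall())
connection.close()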
I want to expand on the answer by showing an approach that works without conda, in other words a full-pip minimum viable Docker setup for installing turbodbc. I've fully documented the solution in this GitHub comment in the official turbodbc repo.
I've tried installing pyside2-uic using:
sudo apt-get install -y python-pyside2
sudo apt-get install pyside2-tools
While converting the .ui file to .py using pyside2-uic, I still get the error "ImportError: No module named pyside2uic.driver":
Traceback (most recent call last):
File "/usr/bin/pyside2-uic", line 28, in <module>
from pyside2uic.driver import Driver
ImportError: No module named pyside2uic.driver
How can I resolve this error?
I tried these commands and it worked.
$ apt-get install wget python-pip python-dev software-properties-common
$ add-apt-repository ppa:beineri/opt-qt561-trusty
$ apt-get update
$ apt-get install qt56-meta-full
$ . /opt/qt56/bin/qt56-env.sh
$ wget https://bintray.com/fredrikaverpil/pyside2-wheels/download_file?file_path=ubuntu14.04%2FPySide2-2.0.0.dev0-cp27-none-linux_x86_64.whl -O PySide2-2.0.0.dev0-cp27-none-linux_x86_64.whl
$ pip install PySide2-2.0.0.dev0-cp27-none-linux_x86_64.whl
$ pyside2-uic
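Once pyside2-uic works, the generated module is used from Python roughly like this (a minimal sketch; mainwindow.ui and the generated ui_mainwindow.py are hypothetical names, produced with pyside2-uic mainwindow.ui -o ui_mainwindow.py):
# minimal sketch of using a pyside2-uic-generated module; ui_mainwindow.py and its
# Ui_MainWindow class are assumed to come from compiling a hypothetical mainwindow.ui
import sys
from PySide2.QtWidgets import QApplication, QMainWindow
from ui_mainwindow import Ui_MainWindow

class MainWindow(QMainWindow):
    def __init__(self):
        super(MainWindow, self).__init__()
        self.ui = Ui_MainWindow()
        self.ui.setupUi(self)   # builds the widgets defined in the .ui file

if __name__ == "__main__":
    app = QApplication(sys.argv)
    window = MainWindow()
    window.show()
    sys.exit(app.exec_())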
I am trying to install the Python GDAL/OGR bindings so they are accessible directly from the Python interpreter in the Docker python:3.6-stretch image.
My Dockerfile looks like this:
FROM python:3.6-stretch
ENV PYTHONUNBUFFERED 1
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
RUN apt-get update && apt-get install -y \
binutils \
libproj-dev \
gdal-bin \
libgdal-dev \
python3-gdal \
python3-pip \
python-numpy \
python-dev \
vim
COPY . /app
RUN pip3 install --no-cache-dir -r /app/requirements.txt \
&& rm -rf /requirements.txt
WORKDIR /app
The Dockerfile installs the current stable version of GDAL and python3-gdal, which is 2.1.2.
Importing osgeo from the Python interpreter gives me an error:
>>> from osgeo import gdal
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'osgeo'
How do I install the necessary libraries properly?
Check whether you are using the correct Python 3 interpreter. On a Linux system you can run:
whereis python3
You installed the python3-pip package, so you have osgeo in your main Python 3 at /usr/bin/python3. In other Python 3 installations osgeo may not be available.
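A small sketch to check this from inside the container (the paths in the comments are the usual ones for a python:3.6-stretch image and may differ):
# print which interpreter is running; the apt package python3-gdal targets the Debian
# system Python (/usr/bin/python3), while python:3.6-stretch defaults to /usr/local/bin
import sys
print(sys.executable)

try:
    from osgeo import gdal
    print("osgeo available, GDAL", gdal.__version__)
except ImportError:
    print("osgeo is not available in this interpreter")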
I solved this problem by installing the pygdal package via pip. First check the GDAL version installed on the machine, then install the matching pygdal.
$ gdalinfo --version
GDAL 2.1.3, released 2017/01/20
$ pip install "pygdal>=2.1.3,<2.1.4"
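After installing the matching pygdal, a quick check that the bindings import (assuming the versions line up):
# verify the bindings installed by pygdal
from osgeo import gdal, ogr
print(gdal.__version__)        # should match gdalinfo --version
print(ogr.GetDriverCount())    # confirms the OGR drivers are registered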
FROM ubuntu:14.04.2
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN apt-get -y update && apt-get upgrade -y
RUN apt-get install python build-essential python-dev python-pip python-setuptools -y
RUN apt-get install libxml2-dev libxslt1-dev python-dev -y
RUN apt-get install libpq-dev postgresql-common postgresql-client -y
RUN apt-get install openssl openssl-blacklist openssl-blacklist-extra -y
RUN apt-get install nginx -y
RUN pip install "pip>=7.0"
RUN pip install virtualenv uwsgi
ADD canonicaliser_api /home/ubuntu/canonicaliser_api
RUN virtualenv /home/ubuntu/canonicaliser_api/venv
RUN source /home/ubuntu/canonicaliser_api/venv/bin/activate && pip install -r /home/ubuntu/canonicaliser_api/requirements.txt
RUN export CFLAGS=-I/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/
RUN source /home/ubuntu/canonicaliser_api/venv/bin/activate && \
python /home/ubuntu/canonicaliser_api/canonicaliser/cython_extensions/setup.py \
build_ext --inplace
The last line crashes with:
Traceback (most recent call last):
File "/home/ubuntu/canonicaliser_api/canonicaliser/cython_extensions/setup.py", line 5, in <module>
ext_modules = cythonize("*.pyx")
File "/home/ubuntu/canonicaliser_api/venv/local/lib/python2.7/site-packages/Cython/Build/Dependencies.py", line 754, in cythonize
aliases=aliases)
File "/home/ubuntu/canonicaliser_api/venv/local/lib/python2.7/site-packages/Cython/Build/Dependencies.py", line 649, in create_extension_list
for file in nonempty(extended_iglob(filepattern), "'%s' doesn't match any files" % filepattern):
File "/home/ubuntu/canonicaliser_api/venv/local/lib/python2.7/site-packages/Cython/Build/Dependencies.py", line 103, in nonempty
raise ValueError(error_msg)
ValueError: '*.pyx' doesn't match any files
...
What am I missing please?
I found the problem.
I had to downgrade to Cython 0.21 (this is the main reason for the error in the question).
Afterwards I hit another problem: the script didn't generate anything, despite not throwing any errors. The solution was to cd into that directory before executing the script.
e.g.:
RUN source /home/ubuntu/canonicaliser_api/venv/bin/activate && cd /home/ubuntu/canonicaliser_api/canonicaliser/cython_extensions/ && python setup.py build_ext --inplace
The painful part about Docker is that you have to chain all the commands into a single RUN, since each RUN step starts a fresh shell and things like cd or source do not persist between steps.
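For illustration, a minimal setup.py sketch that avoids the working-directory dependency altogether (the layout is assumed: the .pyx files sit next to setup.py; cythonize resolves the "*.pyx" glob relative to the current working directory, which is why running it from elsewhere found no files):
# minimal setup.py sketch; anchoring the glob to this file's directory means the
# build no longer depends on where the caller happens to cd first
import os
from distutils.core import setup
from Cython.Build import cythonize

HERE = os.path.dirname(os.path.abspath(__file__))

setup(
    ext_modules=cythonize(os.path.join(HERE, "*.pyx")),
)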
This only happens for me in Travis under the pypy build.
Here's the exact error string:
Traceback (most recent call last):
File "app_main.py", line 75, in run_toplevel
File "app_main.py", line 581, in run_it
File "<string>", line 1, in <module>
File "tests/test_pycouchbase.py", line 15, in <module>
from pycouchbase.utils import *
File "pycouchbase/__init__.py", line 8, in <module>
from .connection import Connection
File "pycouchbase/connection.py", line 3, in <module>
from couchbase.bucket import Bucket
File "/home/travis/virtualenv/pypy-2.5.0/site-packages/couchbase/__init__.py", line 28, in <module>
from couchbase.user_constants import *
File "/home/travis/virtualenv/pypy-2.5.0/site-packages/couchbase/user_constants.py", line 21, in <module>
import couchbase._bootstrap
File "/home/travis/virtualenv/pypy-2.5.0/site-packages/couchbase/_bootstrap.py", line 34, in <module>
import couchbase.exceptions as E
File "/home/travis/virtualenv/pypy-2.5.0/site-packages/couchbase/exceptions.py", line 18, in <module>
import couchbase._libcouchbase as C
ImportError: No module named couchbase._libcouchbase
I'm already trying to install couchbase_cffi, but it looks like the _libcouchbase.so file is still missing.
Link to build: https://travis-ci.org/ardydedase/pycouchbase/jobs/75973023#L1782
Travis config file:
# Config file for automatic testing at travis-ci.org
language: python
python:
- "3.4"
- "3.3"
- "2.7"
- "2.6"
- "pypy"
before_install:
- sudo rm -rf /etc/apt/sources.list.d/*
- sudo add-apt-repository -y ppa:pypy/ppa
- wget -O- http://packages.couchbase.com/ubuntu/couchbase.key | sudo apt-key add -
- echo deb http://packages.couchbase.com/ubuntu precise precise/main | sudo tee /etc/apt/sources.list.d/couchbase.list
- sudo apt-get update
- sudo apt-cache search libcouchbase
install:
# GCC
- sudo apt-get install python-software-properties
- sudo add-apt-repository -y ppa:ubuntu-toolchain-r/test
- sudo apt-get update
- sudo apt-get -y install gcc-4.8
- sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.8 50
# libcouchbase dependencies
- sudo apt-get -y install libxml2-dev libxslt-dev python-all-dev libffi6 libffi-dev
- sudo apt-get -y install build-essential libssl-dev python-openssl
- sudo apt-get -y install libcouchbase-dev libcouchbase2-core libcouchbase2-libevent libevent-dev python-gevent
- pip -q install gevent || echo "Couldn't find gevent"
- pip -q install twisted
- pip -q install testresources
- pip install -r requirements.txt
# command to run tests, e.g. python setup.py test
script:
# - cd couchbase-python-cffi
# - export CFLAGS=-Qunused-arguments
# - export CPPFLAGS=-Qunused-arguments
# - python setup.py test
# - python setup.py build
- echo $PWD
# - if [[ $TRAVIS_PYTHON_VERSION == pypy ]]; then git clone https://github.com/couchbase/couchbase-python-client.git && cd couchbase-python-client && python setup.py build_ext --inplace && cd ..; fi
- if [[ $TRAVIS_PYTHON_VERSION == pypy ]]; then cd couchbase-python-cffi && python setup.py build && python setup.py install && cd .. && ls -al; fi
- if [[ $TRAVIS_PYTHON_VERSION == pypy ]]; then ls -al /home/travis/virtualenv/pypy-2.5.0/site-packages/couchbase; fi
- python -c "from tests import test_pycouchbase; print(test_pycouchbase)"
- python runtests.py
I did try to refer to this thread: https://forums.couchbase.com/t/installing-couchbase-1-0-0-on-ubuntu/291, but I can't find the build folder that is referred to there.
If using the cffi module, you must import couchbase_ffi before anything else. The reason is that couchbase_ffi injects itself as the couchbase._libcouchbase module.
Under the "normal" extension, couchbase._libcouchbase contains the compiled CPython extension code. Since cpyext doesn't really work under PyPy, building that code is disabled on that platform, and you are required to "inject" the ffi module beforehand.
Admittedly it's an annoying step and not the most 'transparent'. You might be able to do something like the following (untested!): try to import couchbase, and on ImportError import couchbase_ffi first and then retry the import.
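A runnable form of that suggestion might look like the sketch below (still untested, as noted above; couchbase_ffi only needs to be imported once, before the first couchbase import):
# fallback sketch: on PyPy the compiled couchbase._libcouchbase extension is not built,
# so import couchbase_ffi first so it can register itself under that name
try:
    import couchbase
except ImportError:
    import couchbase_ffi  # injects itself as couchbase._libcouchbase
    import couchbase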