error: cannot locate an Oracle software installation - python

I'm working on Plone.
PRELUDE
I've installed:
oracle-instantclient12.1-basic-12.1.0.1.0-1.x86_64.rpm
oracle-instantclient12.1-devel-12.1.0.1.0-1.x86_64.rpm
oracle-instantclient12.1-sqlplus-12.1.0.1.0-1.x86_64.rpm
and also cx_Oracle.
I've tested the installation and everything is OK: the database connection succeeds.
echo $ORACLE_HOME
/usr/lib/oracle/12.1/client64
echo $TNS_ADMIN
/usr/lib/oracle/12.1/client64/admin
echo $LD_LIBRARY_PATH
/usr/lib/oracle/12.1/client64/lib
THE PROBLEM
I've edited buildout.cfg as follows:
[...]
eggs =
    Plone
    Pillow
    collective.documentviewer
    Products.OpenXml
    Products.AROfficeTransforms
    tus
    wildcard.foldercontents==2.0a7
    cx_Oracle
[...]
I receive this error:
Unused options for buildout: 'environment-vars'.
Installing instance.
Getting distribution for 'cx-Oracle'.
error: cannot locate an Oracle software installation
An error occurred when trying to install cx-Oracle 5.1.3. Look above this message for any errors that were output by easy_install.
While:
Installing instance.
Getting distribution for 'cx-Oracle'.
Error: Couldn't install: cx-Oracle 5.1.3
I have no idea how to solve this. How do I fix "cannot locate an Oracle software installation"?

I got the same problem; the background is:
echo $ORACLE_HOME
/usr/lib/oracle/12.1/client64
But:
sudo env | grep ORACLE_HOME
yields nothing.
The solution:
sudo visudo
Then add the line:
Defaults env_keep += "ORACLE_HOME"
As found here
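To double-check that the variable actually survives sudo, here is a minimal check (the script name is arbitrary; run it both with and without sudo and compare):

import os

# Print the Oracle-related variables as seen by this process; if sudo strips
# them, they show up as <not set> here even though your login shell has them.
for var in ("ORACLE_HOME", "LD_LIBRARY_PATH", "TNS_ADMIN"):
    print(var, "=", os.environ.get(var, "<not set>"))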

You must be sure that the right environment variables are set for the user that runs the Plone instance.
The best way is to add those vars to the buildout configuration:
[buildout]
...

[instance]
...
environment-vars =
    ...
    LD_LIBRARY_PATH /usr/lib/oracle/10.2.0.3/client64/lib
    ORACLE_HOME /usr/lib/oracle/10.2.0.3/client64
(This is what I have on a CentOS installation)
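Once those variables are in place for the user that runs the instance, a quick smoke test (just a sketch; it assumes cx_Oracle is already importable) is:

import cx_Oracle

# clientversion() raises an error if the Oracle client libraries cannot be located,
# which is exactly the situation the environment variables are meant to fix.
print(cx_Oracle.version)          # cx_Oracle driver version
print(cx_Oracle.clientversion())  # e.g. a 5-tuple such as (12, 1, 0, 1, 0)

If this fails for the Plone user while it works in your interactive shell, the environment of the instance user is what still needs fixing.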

Related

Django: dbbackup displays pg_dump: error: too many command-line arguments

I've installed the django-dbbackup package, and from what the documentation says, I need to run python manage.py dbbackup,
but it generates the error pg_dump: error: too many command-line arguments.
From what I have seen in the logs:
dbbackup.db.exceptions.CommandConnectorError: Error running: pg_dump database_name --host=127.0.0.1 --port=5432 --username=postgres --no-password --clean
From what I know, the correct pg_dump command includes the database name at the end, but dbbackup puts the database name first.
Does anyone know a fix for django-dbbackup?
I had the same error. What I did was:
Made sure pg_dump was in my system's PATH environment variable.
pip uninstall dbbackup
pip install django-dbbackup --upgrade
And it worked!
I had the same issue; this is my environment:
Windows 10
postgres10
django 2.2.9
django-dbbackup 3.2.0
I can run it manually and successfully like below, with the "--dbname" parameter name added:
pg_dump --dbname=database_name --host=127.0.0.1 --port=5432 --username=postgres --no-password --clean
I don't know how to override the command by creating a new method, so I changed the source code in the dbbackup package directly, and it works.
file "\Lib\site-packages\dbbackup\db\postgresql.py"
from:
cmd = '{} --dbname={}'.format(self.dump_cmd, self.settings['NAME'])
to:
cmd = '{} {}'.format(self.dump_cmd, self.settings['NAME'])
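For reference, here is a small standalone sketch of the manual workaround above (running pg_dump yourself with --dbname; the database name, host, and credentials are placeholders taken from the command shown earlier):

import subprocess

# Mirrors the manual command that works:
# pg_dump --dbname=database_name --host=127.0.0.1 --port=5432 --username=postgres --no-password --clean
cmd = [
    "pg_dump",
    "--dbname=database_name",   # placeholder database name
    "--host=127.0.0.1",
    "--port=5432",
    "--username=postgres",
    "--no-password",
    "--clean",
    "--file=backup.sql",        # write the dump to a file instead of stdout
]
subprocess.run(cmd, check=True)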

JupyterLab build is suggested and successfully installed, but will not work. Why?

I'm running JupyterLab from Anaconda, and installed a JupyterLab plotly extension using:
conda install -c conda-forge jupyterlab-plotly-extension
Apparently, the installation was successful, but something is still wrong.
When launching JupyterLab, I'm getting a prompt suggesting a build (screenshot omitted).
Clicking BUILD gives me another dialog, and clicking RELOAD reloads JupyterLab, BUT I'm getting the same build prompt again.
And on and on it spins. Does anyone know why?
Clicking CANCEL does not help either, because plotly won't produce any plots, only blank spaces.
Solution:
Deactivate the firewall and run the following command in a Windows command prompt:
jupyter lab build
The details:
This turned out to be a firewall problem, and I'm not sure why it is not reported as such in the JupyterLab interface. The following command in a Windows command prompt returned the error message below:
Command:
jupyter lab build
Output:
C:\>jupyter labextension list
JupyterLab v0.34.9
Known labextensions:
   app dir: C:\Users\*******\AppData\Local\Continuum\anaconda3\share\jupyter\lab
        @jupyterlab/plotly-extension v0.18.2 enabled ok
Build recommended, please run `jupyter lab build`:
    @jupyterlab/plotly-extension needs to be included in build

C:\>jupyter lab build
[LabBuildApp] JupyterLab 0.34.9
[LabBuildApp] Building in C:\Users\*******\AppData\Local\Continuum\anaconda3\share\jupyter\lab
[LabBuildApp] > node C:\Users\*******\AppData\Local\Continuum\anaconda3\lib\site-packages\jupyterlab\staging\yarn.js install
yarn install v1.9.4
info No lockfile found.
[1/4] Resolving packages...
error An unexpected error occurred: "https://registry.yarnpkg.com/@jupyterlab%2fapplication: self signed certificate in certificate chain".
info If you think this is a bug, please open a bug report with the information provided in "C:\Users\*******\AppData\Local\Continuum\anaconda3\share\jupyter\lab\staging\yarn-error.log".
What pointed me towards suspecting a firewall problem was this part:
self signed certificate in certificate chain
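If you want to confirm from Python that an intercepting firewall/proxy is rewriting the certificate chain (rather than something being wrong with JupyterLab itself), a small diagnostic, assuming outbound access to registry.yarnpkg.com on port 443, is:

import socket
import ssl

# Open a TLS connection to the yarn registry and check whether certificate
# verification succeeds; an SSL-inspecting firewall typically injects a
# self-signed certificate into the chain, which fails verification here too.
host = "registry.yarnpkg.com"
context = ssl.create_default_context()
try:
    with socket.create_connection((host, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print("Certificate verified OK for", host)
except ssl.SSLError as exc:
    print("Verification failed (likely an intercepting proxy/firewall):", exc)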
Running the same command with less rigid firewall settings triggers this output (shortened):
WARNING in d3-array Multiple versions of d3-array found:
1.2.4 ./~/d3-scale/~/d3-array from ./~/d3-scale/~/d3-array\src\index.js
2.2.0 ./~/d3-array from ./~/d3-array\src\index.js
Check how you can resolve duplicate packages:
https://github.com/darrenscerri/duplicate-package-checker-webpack-plugin#resolving-duplicate-packages-in-your-bundle
Child html-webpack-plugin for "index.html":
1 asset
Entrypoint undefined = index.html
[KTNU] ./node_modules/html-loader!./templates/partial.html 567 bytes {0} [built]
[YuTi] (webpack)/buildin/module.js 497 bytes {0} [built]
[aS2v] ./node_modules/html-webpack-plugin/lib/loader.js!./templates/template.html
1.22 KiB {0} [built]
[yLpj] (webpack)/buildin/global.js 489 bytes {0} [built]
+ 1 hidden module
And despite some warning messages, JupyterLab now produces plotly figures without any problems:

Provisioning CoreOS with Ansible pip error

I am trying to provision a CoreOS box using Ansible. First I bootstrapped the box using https://github.com/defunctzombie/ansible-coreos-bootstrap
This seems to work, but pip (located in /home/core/bin) is not added to the PATH. In the next step I am trying to run a task that installs docker-py:
- name: Install docker-py
  pip: name=docker-py
As pip's folder is not on the PATH, I set it using Ansible:
environment:
  PATH: /home/core/bin:$PATH
When I try to execute this task, I get the following error:
fatal: [192.168.0.160]: FAILED! => {"changed": false, "cmd": "/home/core/bin/pip install docker-py", "failed": true, "msg": "\n:stderr: /home/core/bin/pip: line 2: basename: command not found\n/home/core/bin/pip: line 2: /root/pypy/bin/: No such file or directory\n"}
What I'm asking is: where does /root/pypy/bin/ come from? It seems this is the problem. Any idea?
You can't use shell-style variable expansion when setting Ansible variables. In this statement...
environment:
  PATH: /home/core/bin:$PATH
...you are setting your PATH environment variable to the literal value /home/core/bin:$PATH. In other words, you are blowing away any existing value of $PATH, which is why you're getting "command not found" errors for basic things like basename.
Consider installing pip somewhere in your existing $PATH, modifying $PATH before calling ansible, or calling pip from a shell script:
- name: install something with pip
  shell: |
    PATH="/home/core/bin:$PATH"
    pip install some_module
The problem lies in the /home/core/bin/pip script, which is literally:
#!/bin/bash
LD_LIBRARY_PATH=$HOME/pypy/lib:$LD_LIBRARY_PATH $HOME/pypy/bin/$(basename $0) $@
When run as root by Ansible, the $HOME variable expands to /root and not to /home/core.
Replace $HOME with /home/core and it should work.
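A tiny illustration of why that bites (purely to show the expansion; the wrapper path is the one quoted above):

import os

# The wrapper hard-codes $HOME, so the path it builds depends on who runs it.
wrapper_target = "$HOME/pypy/bin/pip"
print(os.path.expandvars(wrapper_target))
# run as core -> /home/core/pypy/bin/pip
# run as root -> /root/pypy/bin/pip   (which does not exist, hence the error)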

Problems connecting with Teradata database using pyodbc

Currently, I'm running a simple python script to connect to a database:
import pyodbc
cnxn = pyodbc.connect('DRIVER={Teradata};DBCNAME=(MYDB);UID=(MYUSER); PWD=(MYPASS);QUIETMODE=YES')
With the server and credentials substituted in obviously. However, when running this script, I get the following error:
pyodbc.Error: ('200', '[200] [unixODBC][eaaa[DCTrdt rvr o nuhifraint o n (0) (SQLDriverConnectW)')
The only help I've been able to find is here. I installed the Teradata ODBC drivers, but I just don't understand why I can't connect. Does anyone have any ideas on this?
You have to use something like the following to connect:
TDCONN = pyodbc.connect('DSN=yourDSNname;',ansi=True, autocommit=True)
You can replace DSN=yourDSNname; with what you have already.
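For completeness, a minimal sketch of a DSN-based connection (the DSN name, user, and password are placeholders, and the DSN has to exist in your odbc.ini):

import pyodbc

# Connect through a preconfigured DSN rather than a raw DRIVER= string.
conn = pyodbc.connect("DSN=yourDSNname;UID=MYUSER;PWD=MYPASS;", ansi=True, autocommit=True)
cursor = conn.cursor()
cursor.execute("SELECT SESSION")   # trivial Teradata query to prove the session works
print(cursor.fetchone())
conn.close()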
I hit this "non-English" error message problem. I think it was due to the wrong versions of libodbc.so and libodbcinst.so being used. Changing the links in /usr/lib/... to point to the Teradata-installed versions worked for me. The commands, using the default install directories on Ubuntu 12.04 64-bit, were:
cd /usr/lib/x86_64-linux-gnu
ls -lha | grep odbc    (you should see the files below, which are to be replaced)
sudo mv libodbc.so.1.0.0 Xlibodbc.so.1.0.0
sudo ln -s /opt/teradata/client/14.10/odbc_64/lib/libodbc.so libodbc.so.1.0.0
sudo mv libodbcinst.so.1.0.0 Xlibodbcinst.so.1.0.0
sudo ln -s /opt/teradata/client/14.10/odbc_64/lib/libodbcinst.so libodbcinst.so.1.0.0
I had also previously installed odbcinst (through apt-get), so I redirected both files. But possibly only the first is needed.
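After relinking, you can also check from Python which ODBC drivers unixODBC exposes (this needs a reasonably recent pyodbc that provides drivers(), and it only lists what is registered in odbcinst.ini):

import pyodbc

# Expect the Teradata driver name to appear here once odbcinst.ini is set up.
print(pyodbc.drivers())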

Python executable not finding libpython shared library

I am installing Python 2.7 on CentOS 5. I built and installed Python as follows
./configure --enable-shared --prefix=/usr/local
make
make install
When I try to run /usr/local/bin/python, I get this error message
/usr/local/bin/python: error while loading shared libraries: libpython2.7.so.1.0: cannot open shared object file: No such file or directory
When I run ldd on /usr/local/bin/python, I get
ldd /usr/local/bin/python
libpython2.7.so.1.0 => not found
libpthread.so.0 => /lib64/libpthread.so.0 (0x00000030e9a00000)
libdl.so.2 => /lib64/libdl.so.2 (0x00000030e9200000)
libutil.so.1 => /lib64/libutil.so.1 (0x00000030fa200000)
libm.so.6 => /lib64/libm.so.6 (0x00000030e9600000)
libc.so.6 => /lib64/libc.so.6 (0x00000030e8e00000)
/lib64/ld-linux-x86-64.so.2 (0x00000030e8a00000)
How do I tell Python where to find libpython?
Try the following:
LD_LIBRARY_PATH=/usr/local/lib /usr/local/bin/python
Replace /usr/local/lib with the folder where you have installed libpython2.7.so.1.0 if it is not in /usr/local/lib.
If this works and you want to make the changes permanent, you have two options:
Add export LD_LIBRARY_PATH=/usr/local/lib to your .profile in your home directory (this works only if you are using a shell which loads this file when a new shell instance is started). This setting will affect your user only.
Add /usr/local/lib to /etc/ld.so.conf and run ldconfig. This is a system-wide setting of course.
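Once the loader is configured (via LD_LIBRARY_PATH or ldconfig), you can ask the loader what it resolves by running this with a Python that does start, for example the system Python:

from ctypes import util

# Prints the soname/path the loader would use (e.g. libpython2.7.so.1.0),
# or None if it still cannot be found.
print(util.find_library("python2.7"))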
Putting on my gravedigger hat...
The best way I've found to address this is at compile time. Since you're the one setting the prefix anyway, you might as well tell the executable explicitly where to find its shared libraries. Unlike OpenSSL and other software packages, Python doesn't give you nice configure directives to handle alternate library paths (not everyone is root, you know...). In the simplest case, all you need is the following:
./configure --enable-shared \
--prefix=/usr/local \
LDFLAGS="-Wl,--rpath=/usr/local/lib"
Or if you prefer the non-linux version:
./configure --enable-shared \
--prefix=/usr/local \
LDFLAGS="-R/usr/local/lib"
The "rpath" flag tells python it has runtime libraries it needs in that particular path. You can take this idea further to handle dependencies installed to a different location than the standard system locations. For example, on my systems since I don't have root access and need to make almost completely self-contained Python installs, my configure line looks like this:
./configure --enable-shared \
--with-system-ffi \
--with-system-expat \
--enable-unicode=ucs4 \
--prefix=/apps/python-${PYTHON_VERSION} \
LDFLAGS="-L/apps/python-${PYTHON_VERSION}/extlib/lib -Wl,--rpath=/apps/python-${PYTHON_VERSION}/lib -Wl,--rpath=/apps/python-${PYTHON_VERSION}/extlib/lib" \
CPPFLAGS="-I/apps/python-${PYTHON_VERSION}/extlib/include"
In this case I am compiling the libraries that python uses (like ffi, readline, etc) into an extlib directory within the python directory tree itself. This way I can tar the python-${PYTHON_VERSION} directory and land it anywhere and it will "work" (provided you don't run into libc or libm conflicts). This also helps when trying to run multiple versions of Python on the same box, as you don't need to keep changing your LD_LIBRARY_PATH or worry about picking up the wrong version of the Python library.
Edit: I forgot to mention that the compile will complain and fail to build some modules if you don't set the PYTHONPATH environment variable to what you use as your prefix. For example, to extend the above example, set PYTHONPATH to that prefix with export PYTHONPATH=/apps/python-${PYTHON_VERSION}...
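If you want to verify that the rpath really ended up in the binary, one way (assuming binutils is installed; the path below matches the simple /usr/local example) is:

import subprocess

# Equivalent to: readelf -d /usr/local/bin/python | grep -iE 'rpath|runpath'
binary = "/usr/local/bin/python"
output = subprocess.check_output(["readelf", "-d", binary]).decode()
for line in output.splitlines():
    if "RPATH" in line or "RUNPATH" in line:
        print(line.strip())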
I had the same problem and I solved it this way:
If you know where libpython resides (I suppose it would be /usr/local/lib/libpython2.7.so.1.0 in your case), you can just create a symbolic link to it:
sudo ln -s /usr/local/lib/libpython2.7.so.1.0 /usr/lib/libpython2.7.so.1.0
Then try running ldd again and see if it worked.
I installed Python 3.5 by Software Collections on CentOS 7 minimal. It all worked fine on its own, but I saw the shared library error mentioned in this question when I tried running a simple CGI script:
tail /var/log/httpd/error_log
AH01215: /opt/rh/rh-python35/root/usr/bin/python: error while loading shared libraries: libpython3.5m.so.rh-python35-1.0: cannot open shared object file: No such file or directory
I wanted a systemwide permanent solution that works for all users, so that excluded adding export statements to .profile or .bashrc files. There is a one-line solution, based on the Red Hat solutions page. Thanks for the comment that points it out:
echo 'source scl_source enable rh-python35' | sudo tee --append /etc/profile.d/python35.sh
After a restart, it's all good on the shell, but sometimes my web server still complains. There's another approach that always worked for both the shell and the server, and is more generic. I saw the solution here and then realized it's actually mentioned in one of the answers here as well! Anyway, on CentOS 7, these are the steps:
vim /etc/ld.so.conf
Which on my machine just had:
include ld.so.conf.d/*.conf
So I created a new file:
vim /etc/ld.so.conf.d/rh-python35.conf
And added:
/opt/rh/rh-python35/root/usr/lib64/
And to manually rebuild the cache:
sudo ldconfig
That's it, scripts work fine!
This was a temporary solution, which didn't work across reboots:
sudo ldconfig /opt/rh/rh-python35/root/usr/lib64/ -v
The -v (verbose) option was just to see what was going on. I saw that it did:
/opt/rh/rh-python35/root/usr/lib64:
libpython3.so.rh-python35 -> libpython3.so.rh-python35
libpython3.5m.so.rh-python35-1.0 -> libpython3.5m.so.rh-python35-1.0
This particular error went away. Incidentally, I had to chown the user to apache to get rid of a permission error after that.
Note that I used find to locate the directory for the library. You could also do:
sudo yum install mlocate
sudo updatedb
locate libpython3.5m.so.rh-python35-1.0
Which on my VM returns:
/opt/rh/rh-python35/root/usr/lib64/libpython3.5m.so.rh-python35-1.0
Which is the path I need to give to ldconfig, as shown above.
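To double-check that the linker cache picked the library up after running ldconfig, you can grep the cache (use the full /sbin/ldconfig path, or run as root, if ldconfig is not on your PATH):

import subprocess

# Equivalent to: ldconfig -p | grep libpython3.5m
cache = subprocess.check_output(["ldconfig", "-p"]).decode()
for line in cache.splitlines():
    if "libpython3.5m" in line:
        print(line.strip())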
This worked for me...
$ sudo apt-get install python2.7-dev
On Solaris 11, use LD_LIBRARY_PATH_64 to point at the Python libs.
In my case, for Python 3.6, LD_LIBRARY_PATH didn't work but LD_LIBRARY_PATH_64 did.
Hope this helps.
This answer would be helpful to those who have limited auth access on the server.
I had a similar problem for python3.5 in HostGator's shared hosting. Python3.5 had to be enabled every single damn time after login. Here are my 10 steps for resolution:
Enable the python through scl script python_enable_3.5 or scl enable rh-python35 bash.
Verify that it's enabled by executing python3.5 --version. This should give you your python version.
Execute which python3.5 to get its path. In my case, it was /opt/rh/rh-python35/root/usr/bin/python3.5. You can use this path to get the version again (just to verify that this path is working for you).
Awesome, now exit out of the current scl shell.
Now, let's get the version again using the complete python3.5 path: /opt/rh/rh-python35/root/usr/bin/python3.5 --version.
It won't give you the version but an error. In my case, it was
/opt/rh/rh-python35/root/usr/bin/python3.5: error while loading shared libraries: libpython3.5m.so.rh-python35-1.0: cannot open shared object file: No such file or directory
As mentioned in Tamas' answer, we have to find that .so file. locate doesn't work on shared hosting, and you can't install it either.
Use the following command to find where that file is located:
find /opt/rh/rh-python35 -name "libpython3.5m.so.rh-python35-1.0"
The above command prints the complete path (the second line) of the file once located. In my case, the output was:
find: `/opt/rh/rh-python35/root/root': Permission denied
/opt/rh/rh-python35/root/usr/lib64/libpython3.5m.so.rh-python35-1.0
Here is the complete command for python3.5 to work on such shared hosting, which gives the version:
LD_LIBRARY_PATH=/opt/rh/rh-python35/root/usr/lib64 /opt/rh/rh-python35/root/usr/bin/python3.5 --version
Finally, as a shorthand, append the following alias to your ~/.bashrc:
alias python351='LD_LIBRARY_PATH=/opt/rh/rh-python35/root/usr/lib64 /opt/rh/rh-python35/root/usr/bin/python3.5'
For verification, reload .bashrc with source ~/.bashrc and execute python351 --version.
Well, there you go: now whenever you log in again, you have python351 to welcome you.
This is not just limited to python3.5; it can be helpful for other scl-installed software as well.
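As a quick sanity check from inside the aliased interpreter (the paths are the ones used in this answer):

import os
import sys

# Confirm which binary started and that the library path set by the alias is present.
print(sys.executable)                      # /opt/rh/rh-python35/root/usr/bin/python3.5
print(os.environ.get("LD_LIBRARY_PATH"))   # should contain .../rh-python35/root/usr/lib64
print(sys.version)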
I installed using the command:
./configure --prefix=/usr \
--enable-shared \
--with-system-expat \
--with-system-ffi \
--enable-unicode=ucs4 &&
make
Now, as the root user:
make install &&
chmod -v 755 /usr/lib/libpython2.7.so.1.0
Then I tried to execute python and got the error:
/usr/local/bin/python: error while loading shared libraries: libpython2.7.so.1.0: cannot open shared object file: No such file or directory
Then I logged out from the root user, tried to execute Python again, and it worked successfully.
All it needs is the installation of the libpython (2 or 3) dev files.
Just install the python lib package (python27-lib). It will install libpython2.7.so.1.0; you don't need to set anything manually.
