clGetPlatformIDs failed: <unknown error -1001> - python

When I run the following code:
import pyopencl as cl
cl.get_platforms()
I get the error:
clGetPlatformIDs failed: <unknown error -1001>
I am running Python 3.6 with PyOpenCL 2018.1.1 on an AWS EC2 instance, Ubuntu 16.04.4 LTS (GNU/Linux 4.4.0-116-generic x86_64).
I have tried the following things, but none of them work:
echo libnvidia-opencl.so.1 >> /etc/OpenCL/vendors/nvidia.icd
as root (via sudo -i) after SSHing into the Ubuntu EC2 instance. (Initially this command wouldn't work, so I removed the nvidia.icd file with rm nvidia.icd, after which it succeeded, but it did not resolve the -1001 error above.)
echo libnvidia-opencl.so.384.111 >> /etc/OpenCL/vendors/nvidia.icd
sudo ln -s /opt/intel/opencl-1.2-3.2.1.16712/etc/intel64.icd /etc/OpenCL/vendors/nvidia.icd
sudo usermod -aG video your-user-name
sudo ln -s /usr/share/nvidia-331/nvidia.icd /etc/OpenCL/vendors
sudo ln -s /usr/share/nvidia-384/nvidia.icd /etc/OpenCL/vendors
optirun myopenclprogram

The easiest way to use OpenCL on EC2 is the Deep Learning Base AMI, which comes with all the necessary drivers and is already configured to work with P2 and P3 instance types. The image can be found at https://aws.amazon.com/marketplace/pp/B077GCH38C.
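Error -1001 usually means the OpenCL ICD loader found no usable platform registrations. As a quick diagnostic, here is a stdlib-only sketch (the vendors directory is the standard ICD path; the helper name is ours) that parses each .icd file and tries to load the library it names, the same dlopen() the loader performs:

```python
import ctypes
from pathlib import Path

def check_icd_dir(vendors="/etc/OpenCL/vendors"):
    """Return (icd_file, library, loadable) for every ICD registration found."""
    results = []
    for icd in sorted(Path(vendors).glob("*.icd")):
        lib = icd.read_text().strip()
        try:
            ctypes.CDLL(lib)  # same dlopen() the ICD loader performs
            results.append((icd.name, lib, True))
        except OSError:
            results.append((icd.name, lib, False))
    return results

# for name, lib, ok in check_icd_dir():
#     print(f"{name}: {lib} -> {'loadable' if ok else 'NOT loadable'}")
```

If the list comes back empty, or no entry is loadable, no working OpenCL runtime is actually installed, which matches the -1001 failure above.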


centos 7, compiled python 3.10, undefined symbol GENERAL_NAME_free

I'm getting an error trying to start apache which uses mod_wsgi 4.9.1 in a virtual environment.
On my centos 7 system, I compiled python 3.10, which is altinstalled to /usr/local/bin/python3.10
I have a virtualenv, but may have initially built mod_wsgi with an earlier version of python 3.10 (I was having trouble getting python to build correctly). However, on my last attempt I used
pip install --ignore-installed --no-cache-dir mod_wsgi==4.9.1 --no-binary mod_wsgi
to try to get it to build with the latest python3.10 (I tried several pip installs, starting with just --no-cache-dir)
Note I am trying to upgrade several applications to python3.10, from centos7 yum installed python 3.6, mod_wsgi 4.7.0, which was working. So this likely has something to do with my python installation, or connecting mod_wsgi to the python installation.
I tried running mod_wsgi-express setup-server, and get these results
$ sudo LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/custom-openssl/lib /root/bin/init-mod_wsgi-express runningroutes sandbox.routes.loutilities.com routesmgr routesmgr 8002
Server URL : http://proxysvr.loutilities.com:8002/
Server Root : /etc/mod_wsgi-express/sandbox.routes.loutilities.com
Server Conf : /etc/mod_wsgi-express/sandbox.routes.loutilities.com/httpd.conf
Error Log File : /etc/mod_wsgi-express/sandbox.routes.loutilities.com/error_log (warn)
Rewrite Rules : /etc/mod_wsgi-express/sandbox.routes.loutilities.com/rewrite.conf
Environ Variables : /etc/mod_wsgi-express/sandbox.routes.loutilities.com/envvars
Control Script : /etc/mod_wsgi-express/sandbox.routes.loutilities.com/apachectl
Operating Mode : daemon
Request Capacity : 5 (1 process * 5 threads)
Request Timeout : 60 (seconds)
Startup Timeout : 15 (seconds)
Queue Backlog : 100 (connections)
Queue Timeout : 45 (seconds)
Server Capacity : 20 (event/worker), 20 (prefork)
Server Backlog : 500 (connections)
Locale Setting : en_US.UTF-8
where
$ sudo cat /root/bin/init-mod_wsgi-express
#!/bin/bash
if [[ $# -lt 5 ]] ; then
    echo "usage:"
    echo "  init-mod_wsgi-express project servername user group port"
    exit 0
fi
source /var/www/$2/venv/bin/activate
mod_wsgi-express setup-server --server-name proxysvr.loutilities.com --port $5 --user $3 --group $4 /var/www/$2/$1/$1/$1.wsgi --working-directory /var/www/$2/$1/$1/ --server-root /etc/mod_wsgi-express/$2
deactivate
but I see an error when trying to start the created apachectl file
$ sudo /etc/mod_wsgi-express/sandbox.routes.loutilities.com/apachectl start
httpd (mod_wsgi-express): Syntax error on line 163 of /etc/mod_wsgi-express/sandbox.routes.loutilities.com/httpd.conf: Cannot load /var/www/sandbox.routes.loutilities.com/venv/lib/python3.10/site-packages/mod_wsgi/server/mod_wsgi-py310.cpython-310-x86_64-linux-gnu.so into server: /var/www/sandbox.routes.loutilities.com/venv/lib/python3.10/site-packages/mod_wsgi/server/mod_wsgi-py310.cpython-310-x86_64-linux-gnu.so: undefined symbol: GENERAL_NAME_free
I notice the following from the python build directory
$ grep -R GENERAL_NAME_free .
./Modules/_ssl.c: sk_GENERAL_NAME_pop_free(names, GENERAL_NAME_free);
Binary file ./Modules/_ssl.o matches
Binary file ./python matches
Binary file ./libpython3.10.a matches
Binary file ./Programs/_testembed matches
and libpython3.10.a is in /usr/local/lib and /usr/local/lib/python3.10/config-3.10-x86_64-linux-gnu
$ find /usr/local/lib -name libpython3.10.a
/usr/local/lib/libpython3.10.a
/usr/local/lib/python3.10/config-3.10-x86_64-linux-gnu/libpython3.10.a
I'm not sure how mod_wsgi in the virtual environment should be set up to find the missing library file.
What other debugging steps should I take?
Note I also posted to https://groups.google.com/g/modwsgi about a week ago but didn't see any responses.
This was answered in https://groups.google.com/g/modwsgi/c/KZZQHpFclGA/m/9St6XuSRAAAJ
Essentially, the problem is that Python is linked against a different OpenSSL than Apache.
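To see the mismatch concretely, you can print which OpenSSL the Python interpreter's own _ssl module was linked against and compare it with the OpenSSL that Apache's mod_ssl loads (e.g. via ldd on mod_ssl.so); a minimal check:

```python
import ssl
import sys

# The _ssl extension module records the OpenSSL it was built/linked against.
print(sys.executable)
print(ssl.OPENSSL_VERSION)       # e.g. "OpenSSL 1.1.1k  25 Mar 2021"
print(ssl.OPENSSL_VERSION_INFO)  # numeric (major, minor, fix, patch, status) tuple
```

If this prints the custom OpenSSL under /usr/local/custom-openssl while Apache links the system one, the undefined-symbol error at mod_wsgi load time is expected; rebuilding so both sides agree on one OpenSSL resolves it.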

docker-compose throws error when not run with sudo [duplicate]

I installed Docker on my Ubuntu machine.
When I run:
sudo docker run hello-world
all is OK, but I would like to drop the sudo prefix to make the command shorter.
If I run the command without sudo:
docker run hello-world
it displays the following:
docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.35/containers/create: dial unix /var/run/docker.sock: connect: permission denied. See 'docker run --help'.
The same happens when I try to run:
docker-compose up
How can I resolve this?
If you want to run docker as a non-root user, you need to add your user to the docker group.
Create the docker group if it does not exist
$ sudo groupadd docker
Add your user to the docker group.
$ sudo usermod -aG docker $USER
Log in to the new docker group (to avoid having to log out and log in again; if that is not enough, try rebooting):
$ newgrp docker
Check if docker can be run without root
$ docker run hello-world
Reboot if still got error
$ reboot
Warning
The docker group grants privileges equivalent to the root user. For details on how this impacts security on your system, see Docker Daemon Attack Surface.
Taken from the docker official documentation:
manage-docker-as-a-non-root-user
After an upgrade I got the permission-denied error.
Following the post-install steps from mkb's answer changed nothing, because my user was already in the docker group; I retried them twice anyway, without success.
After an hour of searching, the following solution finally worked:
sudo chmod 666 /var/run/docker.sock
Solution came from Olshansk.
It looks like the upgrade recreated the socket without sufficient permissions for the docker group.
Problems
This blunt chmod opens a security hole, and after each reboot the error comes back, so you have to re-run the command every time. I wanted a once-and-for-all solution. For that, there are two problems:
1) Problem with systemd: the socket is created with owner root and group root.
You can check this first problem with this command:
ls -l /lib/systemd/system/docker.socket
If everything is good, you should see root/docker, not root/root.
2) Problem with graphical login: https://superuser.com/questions/1348196/why-my-linux-account-only-belongs-to-one-group
You can check this second problem with this command:
groups
If everything is correct, you should see the docker group in the list.
If not, try the command
sudo su $USER -c groups
If you do see the docker group there, it is because of the bug.
Solutions
If you manage to work around the graphical-login bug, this should do the job:
sudo chgrp docker /lib/systemd/system/docker.socket
sudo chmod g+w /lib/systemd/system/docker.socket
But if you can't fix that bug, a reasonable fallback is this:
sudo chgrp $USER /lib/systemd/system/docker.socket
sudo chmod g+w /lib/systemd/system/docker.socket
This works because you are in a graphical environment and are probably the only user on the machine.
In both cases you need a reboot (or a one-off sudo chmod 666 /var/run/docker.sock).
Add docker group
$ sudo groupadd docker
Add your current user to docker group
$ sudo usermod -aG docker $USER
Switch session to docker group
$ newgrp - docker
Run an example to test
$ docker run hello-world
Add current user to docker group
sudo usermod -aG docker $USER
Change the permissions of the docker socket /var/run/docker.sock to be able to connect to the docker daemon
sudo chmod 666 /var/run/docker.sock
I solved this error with the command:
$ sudo chmod 666 /var/run/docker.sock
It only requires changing the permissions of the sock file.
sudo chmod 666 /var/run/docker.sock
This definitely works.
If creating a docker group and adding your user to it doesn't work (the best solution, described in the previous answers), then this one is the second best alternative:
sudo chown $USER /var/run/docker.sock
What it does is change the ownership of the docker.sock file to your user.
Note: It's a really bad practice to use chmod 666, because it gives permissions to practically everyone to access and modify the docker.sock file.
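To make that warning concrete, here is a stdlib-only sketch (the function name is ours) that decodes a file's permission bits the way ls -l would, so you can see exactly what chmod 666 grants to "other":

```python
import os
import stat

def describe_perms(path):
    """Return e.g. 'owner=rw- group=rw- other=rw-' for the given path."""
    mode = os.stat(path).st_mode
    triads = []
    for who, (r, w, x) in (("owner", (stat.S_IRUSR, stat.S_IWUSR, stat.S_IXUSR)),
                           ("group", (stat.S_IRGRP, stat.S_IWGRP, stat.S_IXGRP)),
                           ("other", (stat.S_IROTH, stat.S_IWOTH, stat.S_IXOTH))):
        triads.append(who + "=" +
                      ("r" if mode & r else "-") +
                      ("w" if mode & w else "-") +
                      ("x" if mode & x else "-"))
    return " ".join(triads)

# describe_perms("/var/run/docker.sock") after chmod 666 shows other=rw-,
# i.e. every local account can drive the Docker daemon (effectively root).
```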
Fix Docker Issue: (Permission denied)
Create the docker group if it does not exist: sudo groupadd docker
List the users in the sudo group on the system: grep -Po '^sudo.+:\K.*$' /etc/group
Export the user in linux command shell: export USER=demoUser
Add user to the docker group: sudo usermod -aG docker $USER
Run the following command/ Login or logout: newgrp docker
Check if docker runs ok or not: docker run hello-world
Reboot if you still get an error: reboot
If it does not work, run this command:
sudo chmod 660 /var/run/docker.sock
You can always try the Manage Docker as a non-root user section of the https://docs.docker.com/install/linux/linux-postinstall/ docs.
After doing this also if the problem persists then you can run the following command to solve it:
sudo chmod 666 /var/run/docker.sock
We always forget about ACLs; see setfacl.
sudo setfacl -m user:$USER:rw /var/run/docker.sock
To fix the issue, I looked up where my docker and docker-compose binaries are installed. In my case, docker was installed at /usr/bin/docker and docker-compose at /usr/local/bin/docker-compose. Then I ran this in my terminal:
To docker:
sudo chmod +x /usr/bin/docker
To docker-compose:
sudo chmod +x /usr/local/bin/docker-compose
Now I don't need to prefix my docker commands with sudo.
/***********************************************************************/
ERRATA:
The best solution to this issue was commented by @mkasberg. I quote the comment:
That might work, but you might run into issues down the road. Also, it's a security vulnerability. You'd be better off just adding yourself to the docker group, as the docs say. sudo groupadd docker, sudo usermod -aG docker $USER.
Docs: https://docs.docker.com/install/linux/linux-postinstall/
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.40/images/json: dial unix /var/run/docker.sock: connect: permission denied
sudo chmod 666 /var/run/docker.sock
This fixed my problem.
ubuntu 21.04 systemd socket ownership
Let me preface: this was a perfectly suitable solution for me during local development, and I got here searching for the Ubuntu docker permission error, so I'll just leave this here.
I didn't own the unix socket, so I chowned it.
sudo chown $(whoami):$(whoami) /var/run/docker.sock
Another, more permanent solution for your dev environment, is to modify the user ownership of the unix socket creation. This will give your user the ownership, so it'll stick between restarts:
sudo nano /etc/systemd/system/sockets.target.wants/docker.socket
docker.socket:
[Unit]
Description=Docker Socket for the API
[Socket]
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=YOUR_USERNAME_HERE
SocketGroup=docker
[Install]
WantedBy=sockets.target
Seriously: do not add your user to the docker group or loosen the socket's POSIX permissions (without SELinux hardening); it is an easy route to root privilege escalation. Just add an alias in your .bashrc; it's simpler and safer: alias dc='sudo docker'.
lightdm and kwallet ship with a bug that seems not to pass supplementary groups at login. To solve this, besides sudo usermod -aG docker $USER, I also had to comment out
auth optional pam_kwallet.so
auth optional pam_kwallet5.so
to
#auth optional pam_kwallet.so
#auth optional pam_kwallet5.so
in /etc/pam.d/lightdm before rebooting, for the docker group to actually take effect.
bug: https://bugs.launchpad.net/lightdm/+bug/1781418 and here: https://bugzilla.redhat.com/show_bug.cgi?id=1581495
Rebooting the machine worked for me.
$ reboot
This worked for me:
add your user to the docker group and modify the socket file's ACL
sudo usermod -aG docker $USER
sudo setfacl --modify user:$USER:rw /var/run/docker.sock
It's a better solution than using chmod.
Use this command:
sudo usermod -aG docker $USER
then restart your computer. This worked for me.
you can follow these steps and this will work for you:
create a docker group sudo groupadd docker
add your user to this group sudo usermod -aG docker $USER
list the groups to make sure that docker group created successfully by running this command groups
run the following command also to change the session for docker group newgrp docker
change the group ownership of the docker.sock file: sudo chown root:docker /var/run/docker.sock
change the ownership for .docker directory sudo chown "$USER":"$USER" /home/"$USER"/.docker -R
finally sudo chmod g+rwx "$HOME/.docker" -R
After that, test by running docker ps -a
I ran into a similar problem as well, but where the container I wanted to create needed to mount /var/run/docker.sock as a volume (Portainer Agent), while running it all under a different namespace. Normally a container does not care about which namespace it is started in -- that is sort of the point -- but since access was made from a different namespace, this had to be circumvented.
Adding --userns=host to the container's run command enabled it to attain the correct permissions.
Quite a specific use case, but after more research hours than I want to admit I just thought I should share it with the world if someone else ends up in this situation :)
I tried the command with sudo and it was OK: sudo docker pull hello-world or sudo docker run hello-world
On Linux, after installing docker and docker-compose, a reboot is required for Docker to work properly and avoid this issue.
$ sudo systemctl restart docker
This is definitely not the case the question was about, but as it is the first search result when googling the error message, I'll leave it here.
First of all, check if docker service is running using the following command:
systemctl status docker.service
If it is not running, try starting it:
sudo systemctl start docker.service
... and check the status again:
systemctl status docker.service
If it has not started, investigate the reason. Probably, you have modified a config file and made an error (like I did while modifying /etc/docker/daemon.json)
The Docker daemon binds to a Unix socket instead of a TCP port.
By default that Unix socket is owned by the user root and other users can only access it using sudo. The Docker daemon always runs as the root user.
If you don’t want to preface the docker command with sudo, create a Unix group called docker and add users to it. When the Docker daemon starts, it creates a Unix socket accessible by members of the docker group.
To create the docker group and add your user:
Create the docker group
sudo groupadd docker
Add your user to the docker group
sudo usermod -aG docker $USER
Log out and log back in so that your group membership is re-evaluated.
If testing on a virtual machine, it may be necessary to restart the virtual machine for changes to take effect.
On a desktop Linux environment such as X Windows, log out of your session completely and then log back in.
On Linux, you can also run the following command to activate the changes to groups:
newgrp docker
Verify that you can run docker commands without sudo. The command below downloads a test image and runs it in a container. When the container runs, it prints an informational message and exits.
docker run hello-world
If you initially ran Docker CLI commands using sudo before adding your user to the docker group, you may see the following error, which indicates that your ~/.docker/ directory was created with incorrect permissions due to the sudo commands.
WARNING: Error loading config file: /home/user/.docker/config.json -
stat /home/user/.docker/config.json: permission denied
To fix this problem, either remove the ~/.docker/ directory (it is recreated automatically, but any custom settings are lost), or change its ownership and permissions using the following commands:
sudo chown "$USER":"$USER" /home/"$USER"/.docker -R
sudo chmod g+rwx "$HOME/.docker" -R
All other post installation steps for docker on linux can be found here https://docs.docker.com/engine/install/linux-postinstall/
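After logging back in, the membership can also be verified programmatically; a stdlib-only sketch (the function name is ours) that checks /etc/group and the user's primary GID:

```python
import getpass
import grp
import pwd

def in_group(group_name="docker", user=None):
    """True if `user` is in `group_name`, via the member list or primary GID."""
    user = user or getpass.getuser()
    try:
        group = grp.getgrnam(group_name)
    except KeyError:
        return False  # the group does not exist on this system
    if user in group.gr_mem:
        return True
    try:
        return pwd.getpwnam(user).pw_gid == group.gr_gid
    except KeyError:
        return False  # unknown user

# print(in_group("docker"))  # True once `usermod -aG docker` has taken effect
```

Note this reflects what a fresh login would see; the current shell still needs newgrp docker or a re-login to pick the group up.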
The most straightforward solution is to type
sudo chmod 666 /var/run/docker.sock
every time you boot your machine. However, this method defeats any system security that may be in place and opens up the Docker socket to everybody. If that is acceptable to you (e.g. you are the only user of the machine), then use it.
Since it would otherwise be required on every boot, you can make it run at boot time by adding
start on startup
task
exec chmod 666 /var/run/docker.sock
to the /etc/init/docker-chmod.conf file.
I tried all the described methods and nothing helped. The solution was to use the --use-drivers parameter when running selenoid and selenoid-ui. Below is the full listing of my Dockerfile.
FROM selenoid/chrome
USER root
RUN apt-get update
RUN apt-get -y install docker.io
RUN curl -s https://aerokube.com/cm/bash | bash
RUN ./cm selenoid start --vnc --use-drivers
RUN ./cm selenoid-ui start --use-drivers
EXPOSE 4444 8080
CMD ["-conf", "/etc/selenoid/browsers.json", "-video-output-dir", "/opt/selenoid/video/"]
In my case it was the process itself (a CI server agent) that couldn't run docker commands, while the same command worked when I ran it myself as the same user.
Restarting the daemon that runs the CI server agent solved the problem.
The command wasn't working from within the agent because the agent had been started before I installed docker and granted the docker group permissions, so the agent process was using cached, outdated permissions and failing. Restarting the process dropped the cache and made things work.
As the shortest answer for Linux users:
simply run the command as super user with sudo,
e.g. sudo docker-compose up
After installing Docker on CentOS, I got the error below while running the following command.
[centos#aiops-dev-cassandra3 ~]$ docker run hello-world
docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.soc k/v1.40/containers/create: dial unix /var/run/docker.sock: connect: permission denied.
See 'docker run --help'.
Change Group and Permission for docker.socket
[centos#aiops-dev-cassandra3 ~]$ ls -l /lib/systemd/system/docker.socket
-rw-r--r--. 1 root root 197 Nov 13 07:25 /lib/systemd/system/docker.socket
[centos#aiops-dev-cassandra3 ~]$ sudo chgrp docker /lib/systemd/system/docker.socket
[centos#aiops-dev-cassandra3 ~]$ sudo chmod 666 /var/run/docker.sock
[centos#aiops-dev-cassandra3 ~]$ ls -lrth /var/run/docker.sock
srw-rw-rw-. 1 root docker 0 Nov 20 11:59 /var/run/docker.sock
[centos#aiops-dev-cassandra3 ~]$
Verify by using below docker command
[centos#aiops-dev-cassandra3 ~]$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:c3b4ada4687bbaa170745b3e4dd8ac3f194ca95b2d0518b417fb47e5879d9b5f
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
[centos#aiops-dev-cassandra3 ~]$
After you have installed docker, created the docker group, and added your user to it, edit the docker service unit file:
sudo nano /usr/lib/systemd/system/docker.service
Add two lines into the section [Service]:
SupplementaryGroups=docker
ExecStartPost=/bin/chmod 666 /var/run/docker.sock
Save the file (Ctrl-X, y, Enter)
Run and enable the Docker service:
sudo systemctl daemon-reload
sudo systemctl start docker
sudo systemctl enable docker

How to avoid error "conda --version: conda not found" in az ml run --submit-script command?

I would like to run a test script on an existing compute instance of Azure using the Azure Machine Learning extension to the Azure CLI:
az ml run submit-script test.py --target compute-instance-test --experiment-name test_example --resource-group ex-test-rg
I get a Service Error with the following error message:
Unable to run conda package manager. AzureML uses conda to provision python environments from a dependency specification. To manage the python environment manually instead, set userManagedDependencies to True in the python environment configuration. To use system managed python environments, install conda from: https://conda.io/miniconda.html
But when I connect to the compute instance through the Azure portal and select the default Python kernel, conda --version prints 4.5.12, so conda is in fact already installed on the compute instance. This is why I do not understand the error message.
Further information on the azure versions:
"azure-cli": "2.12.1",
"azure-cli-core": "2.12.1",
"azure-cli-telemetry": "1.0.6",
"extensions": {
"azure-cli-ml": "1.15.0"
}
The image I use is:
mcr.microsoft.com/azure-cli:latest
Can somebody please explain why I am getting this error and help me resolve it? Thank you!
EDIT: I tried to update the environment in which the az ml run-command is run.
Essentially this is my GitLab job. The installation of miniconda is a bit complicated because the azure-cli image is based on Alpine Linux (reference: Installing miniconda on alpine linux fails). I replaced some names with ... and cut out some irrelevant pieces of code.
test:
image: 'mcr.microsoft.com/azure-cli:latest'
script:
- echo "Download conda"
- apk --update add bash curl wget ca-certificates libstdc++ glib
- wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://raw.githubusercontent.com/sgerrand/alpine-pkg-node-bower/master/sgerrand.rsa.pub
- curl -L "https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.23-r3/glibc-2.23-r3.apk" -o glibc.apk
- apk del libc6-compat
- apk add glibc.apk
- curl -L "https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.23-r3/glibc-bin-2.23-r3.apk" -o glibc-bin.apk
- apk add glibc-bin.apk
- curl -L "https://github.com/andyshinn/alpine-pkg-glibc/releases/download/2.25-r0/glibc-i18n-2.25-r0.apk" -o glibc-i18n.apk
- apk add --allow-untrusted glibc-i18n.apk
- /usr/glibc-compat/bin/localedef -i en_US -f UTF-8 en_US.UTF-8
- /usr/glibc-compat/sbin/ldconfig /lib /usr/glibc/usr/lib
- rm -rf glibc*apk /var/cache/apk/*
- echo "yes" | curl -sSL https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -o miniconda.sh
- echo "Install conda"
- (echo -e "\n"; echo "yes"; echo -e "\n"; echo "yes") | bash -bfp miniconda.sh
- echo "Installing Azure Machine Learning Extension"
- az extension add -n azure-cli-ml
- echo "Azure Login"
- az login
- az account set --subscription ...
- az configure --defaults group=...
- az ml folder attach -w ...
- az ml run submit-script test.py --target ... --experiment-name hello_world --resource-group ...
You need conda in your base image for a container-based environment. You can extend the base image by installing conda, using base_dockerfile instead of base_image:
https://learn.microsoft.com/en-us/python/api/azureml-core/azureml.core.environment.dockersection?view=azure-ml-py
Or, if that works for you, use one of the AzureML base docker images.
If you do not need any python dependencies on top of your base image, you can set user_managed_dependencies to True; the base image will then be used as-is and no additional dependencies will be installed:
https://learn.microsoft.com/en-us/python/api/azureml-core/azureml.core.environment.pythonsection?view=azure-ml-py
One needs to pass the --workspace-name argument to run on Azure's compute target rather than on the local compute target:
az ml run submit-script test.py --target compute-instance-test --experiment-name test_example --resource-group ex-test-rg --workspace-name test-ws
Use:
runconfig.environment.python.user_managed_dependencies = True
That should solve the issue
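Putting the two answers together, a minimal run-configuration sketch (guarded so it only executes where azureml-core is installed; the attribute path is the one used in the answer above):

```python
try:
    from azureml.core.runconfig import RunConfiguration
except ImportError:
    RunConfiguration = None  # azureml-core not installed in this environment

if RunConfiguration is not None:
    run_config = RunConfiguration()
    # Skip AzureML's conda provisioning and use the image's python as-is:
    run_config.environment.python.user_managed_dependencies = True
```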

Vagrant : ENOSPC completely block box

I'm following this tutorial : https://docs.pybossa.com/installation/vagrant/
But as I don't have admin rights on my Windows 7 machine, I used my VM to deploy Vagrant & co. Once I finally got it working, I started installing the requirements and the box got completely stuck.
Host (VM) : Linux ipf7028 3.10.0-957.1.3.el7.x86_64 #1 SMP Thu Nov 29 14:49:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Vagrant provider : default, VirtualBox
Once the box finally started, I had to run python run.py over SSH, but I got an error; as I'm a newbie in Python I just installed pip and ran pip install -r requirements.txt with the provided file.
After some downloads, everything crashed with the following error:
/opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/environment.rb:836:in `initialize': No space left on device # rb_sysopen - /root/.vagrant.d/perm_test_YCKSPNYMOHEIFYNPVJKQYEMPHUIXGQUN (Errno::ENOSPC)
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/environment.rb:836:in `open'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/environment.rb:836:in `open'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/environment.rb:836:in `setup_home_path'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/environment.rb:135:in `initialize'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/bin/vagrant:145:in `new'
from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/bin/vagrant:145:in `<main>'
It is obviously something to do with disk space, but I can't figure out where... Plus I no longer have any access to the vagrant box; no command works: destroy, halt, ssh, status all end with the same error output.
The provided Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
# PyBossa Vagrantfile
VAGRANTFILE_API_VERSION = "2"
# Ansible install script for Ubuntu
$ansible_install_script = <<SCRIPT
export DEBIAN_FRONTEND=noninteractive
echo Check if Ansible existing...
if ! which ansible >/dev/null; then
echo update package index files...
apt-get update -qq
echo install Ansible...
apt-get install -qq ansible
fi
SCRIPT
$ansible_local_provisioning_script = <<SCRIPT
export DEBIAN_FRONTEND=noninteractive
export PYTHONUNBUFFERED=1
echo PyBossa provisioning with Ansible...
ansible-playbook -u vagrant /vagrant/provisioning/playbook.yml -i /vagrant/provisioning/ansible_hosts -c local
SCRIPT
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.box = "bento/ubuntu-16.04"
config.vm.provider "virtualbox" do |v|
v.memory = 1024
end
config.vm.network :forwarded_port, host: 5000, guest: 5000
config.vm.network :forwarded_port, host: 5001, guest: 5001
# turn off warning message `stdin: is not a tty error`
config.ssh.shell = "bash -c 'BASH_ENV=/etc/profile exec bash'"
# be sure that there is Ansible for local provisioning
config.vm.provision "shell", inline: $ansible_install_script
# do the final Ansible local provisioning
config.vm.provision "shell", inline: $ansible_local_provisioning_script
end
The box was supposed to work without any problems; even virtualenv was supposed to handle some of the requirements issues... (I've never used it before).
Am I missing something?
Thanks to Where does Vagrant download its .box files to?, I just destroyed the previous vagrant environment and changed its location (by setting VAGRANT_HOME) to a place where I have more space.
I'm running into other issues now, but this thread is over.
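Before relocating, it helps to confirm which filesystem is actually full; a stdlib sketch (the helper name is ours; the traceback above points at /root/.vagrant.d, so that is the filesystem to check):

```python
import shutil

def free_gib(path):
    """Free space on the filesystem containing `path`, in GiB."""
    return shutil.disk_usage(path).free / 2**30

# The ENOSPC came from /root/.vagrant.d, so check that filesystem, e.g.:
# print(f"{free_gib('/root'):.1f} GiB free")
# If it is (near) zero, point VAGRANT_HOME at a roomier disk before retrying,
# e.g. export VAGRANT_HOME=/path/with/space  (hypothetical path).
```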

how to connect oracle database from python from unix server

How do I connect to an Oracle database server from Python inside a Unix server?
I can't install any packages like cx_Oracle, pyodbc, etc.
Please consider that even pip is not available to install.
It is my Unix PROD server, so I have a lot of restrictions.
I tried to run the SQL script from the sqlplus command and it works.
OK, so there is sqlplus and it works; this means the Oracle drivers are there.
Try to proceed as follows:
1) create a python virtualenv in your $HOME. In python3:
python -m venv $HOME/my_venv
2) activate it:
source $HOME/my_venv/bin/activate[.csh] # .csh is for csh; omit for bash
3) install pip using the python binary from your new virtualenv; it is well described here: https://pip.pypa.io/en/stable/installing/
TL;DR:
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python get-pip.py (this should install pip into your virtualenv as $HOME/my_venv/bin/pip[3])
4) install cx_Oracle:
pip install cx_Oracle
Now you should be able to import it in your python code and connect to an Oracle DB.
I connect to the Oracle database via sqlplus, calling the script this way:
os.environ['ORACLE_HOME'] = '<ORACLE PATH>'
os.chdir('<DIR NAME>')
VARIABLE = os.popen('./script_to_Call_sql_script.sh select.sql').read()
My shell script: script_to_Call_sql_script.sh
#!/bin/bash
envFile=ENV_FILE_NAME
envFilePath=<LOACTION_OF_ENV>${envFile}
ORACLE_HOME=<ORACLE PATH>
if [[ $# -eq 0 ]]
then
    echo "USAGE: please provide the positional parameter"
    echo "`basename $0` <SQL SCRIPT NAME>"
    exit 1
fi
ECR=`$ORACLE_HOME/bin/sqlplus -s /@<server_name><<EOF
set pages 0
set head off
set feed off
@$1
exit
EOF`
echo $ECR
The above got my work done on the production server.
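As a small modernization of the os.popen call above: subprocess.run captures output and surfaces failures instead of silently returning an empty string. A sketch (the helper name is ours; the wrapper-script invocation is the one from the answer):

```python
import os
import subprocess

def capture(cmd, **env_overrides):
    """Run `cmd` (a list) and return its stdout, raising on a non-zero exit."""
    env = {**os.environ, **env_overrides}
    result = subprocess.run(cmd, capture_output=True, text=True,
                            check=True, env=env)
    return result.stdout

# Equivalent of the os.popen call in the answer (placeholders as in the original):
# output = capture(["./script_to_Call_sql_script.sh", "select.sql"],
#                  ORACLE_HOME="<ORACLE PATH>")
```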
