I'm aware there are many similar questions but I have been through them all to no avail.
On Ubuntu 18.04, I have Python 2 and Python 3.6. I create a venv using the command below and attempt to install a package using pip. However, it attempts to install on the global system and not in the venv.
python3 -m venv v1
When I run 'which python' it correctly picks the Python within the venv. I have checked the v1/bin folder and pip is installed there. The shebang path in the pip script correctly points to the Python in the venv.
I have tried reinstalling python3 and venv, destroying and recreating the virtual environment, and many other things. Is there some rational way to understand and solve this?
The problem in my case was that the mounted drive I was working on was not mounted as executable. So pip couldn't be executed from within the venv on the mount.
This was confirmed because I was able to install with 'python -m pip install numpy', but when importing the library with 'import numpy' I was then faced with a further error:
multiarray_umath.cpython-36m-x86_64-linux-gnu.so: failed to map segment from shared object
which led back to the permissions issue, as per the GitHub issue below. The fix suggested by dvdabelle in the comments then resolves both this dependent error and the original issue.
https://github.com/numpy/numpy/issues/15102
In his case, he could simply switch drives. I have to use this drive, so the fix was to unmount the /data disk where I was working and remount it with the exec option:
sudo umount /data
sudo mount -o exec /dev/sda4 /data
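If you want to confirm the remount took effect, you can check the active mount options (findmnt is part of util-linux and should be available on Ubuntu):
findmnt -no OPTIONS /data
The noexec flag should no longer appear in the output.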
'which pip' now points to the pip in the venv correctly
Note: to make it permanent, add the exec switch to the line for the drive in fstab, as per https://download.tuxfamily.org/linuxvillage/Informatique/Fstab/fstab.html (make exec the last parameter in the options, or user will override it). E.g.
UUID=1332d6c6-da31-4b0a-ac48-a87a39af7fec /data auto rw,user,auto,exec 0 0
I'm new to python, and I was wondering if you could help me run a python script. I'm trying to run a script called PunchBox from Github: https://github.com/psav/punchbox. So far, I have Python 3.9.5 and Git Bash.
On the GitHub page, it says:
To install, clone the repo, cd into it and then execute the following:
virtualenv -p python2 .pb2
source .pb2/bin/activate
pip install -U pip
pip install .
What does this mean exactly? Where do I run this code?
So far, I tried downloading the zip file from GitHub, installing Python 3.5.9, using cmd, finding the directory with cd, and running that code, but got an error:
Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. It's also possible that there is a mismatch between the package name in setup.cfg and the argument given to pbr.version.VersionInfo. Project name punchbox was given, but was not able to be found.
error in punchbox setup command: Error parsing C:\Users\Mi\Downloads\punchbox-master\punchbox-master\setup.cfg: Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. It's also possible that there is a mismatch between the package name in setup.cfg and the argument given to pbr.version.VersionInfo. Project name punchbox was given, but was not able to be found.
There's also a requirements.txt that lists additional scripts needed:
pre-commit
click
mido
pbr
PyYAML
svgwrite
Do these install automatically upon running the script for the first time?
I'm a little confused why I'm getting an error. Do you know what I'm doing wrong?
Thank you so much!
Giovanni
I assume you are new to programming. You have to write these lines in a terminal.
On Windows, it is Command Prompt or PowerShell (the latter is preferred). On macOS, it is Terminal.
Copy all these lines at once and paste them into your preferred terminal. The terminal will automatically run them one after another.
FYI: venv is a Python package for creating virtual environments. The preceding commands set up the environment. Now install the required dependencies using the following command instead of the last command (pip install .):
pip install -r requirements.txt
Based on your comment, it looks like you don't have virtualenv installed in your system. You may install it using the command pip install virtualenv.
Now, as you are using a Windows machine, you may open a Command Prompt or Windows PowerShell window and navigate to the directory where your cloned project resides.
Now, execute the following commands.
virtualenv -p python2 .pb2
.pb2\Scripts\activate.bat
pip install -U pip
pip install -r requirements.txt
Once you are done working in your virtual environment (which is named .pb2), you may close it by executing the deactivate command.
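If you want to double-check that the environment is actually active before installing, you can ask Windows which python it will run first (a quick sanity check; the .pb2 name matches the environment created above):
where python
The first path listed should be something like ...\.pb2\Scripts\python.exe; if it points at the system Python instead, the activate step did not take effect.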
@Giovanni T.
As far as I can tell, you have already installed Python and downloaded the GitHub repository as a zip file.
pip install -r requirements.txt
Just run this command.
Please make sure that your current directory is the folder where this requirements.txt file is stored.
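For example, if the zip was extracted to the folder shown in your error message, the steps would look roughly like this (adjust the path if yours differs):
cd C:\Users\Mi\Downloads\punchbox-master\punchbox-master
pip install -r requirements.txt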
I'm on macOS Catalina and I'm trying to install and run Mephisto (see https://github.com/facebookresearch/mephisto/blob/master/docs/quickstart.md). I created a python3 virtual environment and then went to the directory and ran
sudo pip3 install -e .
This seems to have run fine, as I can now run mephisto and see the list of commands and options. However, when I run mephisto register mturk, it throws No module named 'mephisto.core.argparse_parser' because of an import statement in the Python file. This seems like a general issue of a module installing but not importing properly, and I would appreciate help in how to fix it. Is it because my $PYTHONPATH is currently empty?
Mephisto Lead here! This seems to have been a case of unfortunate timing, as we were in the midst of a refactor there and some code got pushed to master that should've received more scrutiny. We'll be moving to stable releases via PyPI in the near future to prevent things like this!
I created a python3 virtual environment and then went to the directory and ran
sudo pip3 install -e .
You should not have used sudo to install this library, if you meant to install it in a virtual environment. By using sudo the library probably got installed in the global environment (not in the virtual environment).
Typically:
create a virtual environment:
python3 -m venv path/to/venv
install tools and libraries in this environment with:
path/to/venv/bin/python -m pip install Mephisto
use python in the virtual environment:
path/to/venv/bin/python -c 'import mephisto'
use a tool in the virtual environment:
path/to/venv/bin/mephisto
Is it because my $PYTHONPATH is currently empty?
Forget PYTHONPATH. One should basically never have to modify this environment variable (advice to get PYTHONPATH involved is almost always ill-informed).
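If you want to confirm where the package actually landed, you can ask each interpreter's pip directly (a quick check; path/to/venv is a placeholder for your actual environment path):
path/to/venv/bin/python -m pip show mephisto
python3 -m pip show mephisto
If only the second command finds it, the sudo install went into the global environment as described above.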
Check that an __init__.py file is in the module's directory. If not, try creating an empty one.
If I install a virtualenv on my local machine, activate it and try to run python3 then it works fine (with the imported modules). However, after I send it to the live server (using scp and filezilla) it gives the error:
-bash: /<path>/venv4/bin/python3: cannot execute binary file: Exec format error
This also happens with the python and python3.8 binaries in the same venv.
I have tried reinstalling virtualenv and pipx, recreating the virtualenv and reuploading a few times.
It seems that the venv's python3 can't be found: when I activate the virtualenv on the live server and type "which python3", it shows me the system python3:
/usr/bin/python3
It also does not work if I try to execute the venv's python3 directly, using the full path.
The reason I'm doing this is because the old virtualenv I was using has stopped working because it can't seem to find the installed modules anymore. I'm not sure why.
Any help would be much appreciated.
I believe some pip packages contain more than just Python code and must be compiled. If your local machine's OS is different from your server's OS, or you have different libraries installed, the locally compiled code will not be compatible with your server.
Common practice is to create a file with a list of required packages, using something like
pip freeze > requirements.txt
and rebuild the environment on the server, using something like
pip install -r requirements.txt
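In other words, rather than copying the venv4 directory itself, a rough sketch of rebuilding it on the server would be (assuming python3 is available there and requirements.txt was generated as above):
python3 -m venv venv4
source venv4/bin/activate
pip install -r requirements.txt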
I was hoping someone might be able to provide a resource that will help me install Python 3.6.0 on a shared hosting account at Bluehost. I've tried using the documentation for Python 2.7 but have been unsuccessful to date. The current state of the machine is that if I run python -V it says 2.6.6. If, however, I place:
export PATH=$HOME/python/Python-3.6.0/:$PATH
in the .bashrc file in my home directory and then run python -V, it says 3.6.0. However, I am unable to get pip to work. I also noticed that during the Python setup procedure, permission was denied on a number of files.
I am really at a loss, as there seems to be very little documentation on how to do this in a shared hosting environment. Your help would be greatly appreciated.
Here's a link to the instructions I followed: python
I thought pip would be installed, as it said pip 9.0.2 was installed, but when I try to run it, it says command not found. When I tried easy_install pip, I got back the following error message:
[Errno 30] Read-only file system: '/usr/lib/python2.6/site-packages/test-easy-install-13141.write-test'
The installation directory you specified (via --install-dir, --prefix, or
the distutils default setting) was:
/usr/lib/python2.6/site-packages/
You cannot install the package because it is trying to install into the system directory, and you do not have write access there.
If you can, use a virtualenv. Of course this requires virtualenv be installed.
Put the virtualenv somewhere you have write access to. For example, use these instructions.
Enter the following commands to download and extract Python 3.6 to your hosting account.
mkdir ~/python
cd ~/python
wget http://www.python.org/ftp/python/3.6.0/Python-3.6.0.tgz
tar zxfv Python-3.6.0.tgz
find ~/python -type d | xargs chmod 0755
cd Python-3.6.0
Install Python
Once extracted you can use the following commands to configure and install Python.
./configure --prefix=$HOME/python
make
make install
Modify the .bashrc
For your local version of Python to load you will need to add it to the .bashrc file.
vim ~/.bashrc
Press i
Enter:
export PATH=$HOME/python/Python-3.6.0/:$PATH
export PYTHONPATH=$PYTHONPATH:$HOME/python/python3.6/site-packages/
Write the changes (press ESC) and close vim:
:wq
Press Enter
source ~/.bashrc
Now to use pip:
python -m pip install package-of-interest
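If pip turns out not to be available in this local build, it can usually be bootstrapped with the standard library's ensurepip module first (this assumes the Python 3.6 build above completed successfully):
python -m ensurepip
python -m pip install --upgrade pip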
You could also ask the system administrator to install the package for you. This might be the only real option if virtualenv hasn't been installed. Ask the administrator to install virtualenv.
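Once virtualenv is available, using it from a shared account would look roughly like this (a sketch; ~/myenv is just an arbitrary location you have write access to):
virtualenv ~/myenv
source ~/myenv/bin/activate
pip install package-of-interest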
I'm still getting familiar with python and python eggs so sorry if this is a stupid question. I want to know why easy_install appears to install the egg for the whole server to use rather than just locally for the account that tried to install it.
I created a simple helloworld module/egg and tried to install it on a server I have an account on. However, the account doesn't have root access (it's a tester's account). I get a "Permission denied" error message when installing it. When installing the module, it is trying to install to /usr/local/lib/python2.7/site_packages/blah/blah/blah. It's pretty clear it's because I don't have root access to write to this location.
easy_install hello-1.0-py2.7.egg
On my laptop (my account has root access), I can run the cmd above and see the module is installed by running 'pip freeze'. The slight difference is that Anaconda is running/installed on my laptop and seemed to be doing the package management for me.
So back to my original question: how does easy_install install eggs that we create ourselves? I was hoping/assuming it would install the module in my tester's account and not to /usr/local/lib/blah/blah/blah for all users to use/access. Is this an incorrect assumption? If so, how would someone install a module/egg when the account doesn't have root access? Thanks.
As per easy_install or pip as a limited user?, you'll want to use the --prefix option to easy_install and/or the -d or -s options.
I believe you could do something as simple as:
easy_install --prefix=$HOME hello-1.0-py2.7.egg
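Alternatively, recent versions of setuptools also support per-user installs, which put the package under your home directory without needing --prefix (assuming your easy_install is new enough to have the flag):
easy_install --user hello-1.0-py2.7.egg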
An option is to use virtualenv, which allows you to create multiple virtual environments for Python, each with its own set of libraries.
Just create a virtualenv and you can then install your module within it without requiring write access to the system Python installation.
There is a tutorial here: http://simononsoftware.com/virtualenv-tutorial/, but simply install virtualenv then:
$ cd $HOME
$ virtualenv test
$ cd test
$ source bin/activate
$ easy_install /path/to/hello-1.0-py2.7.egg
The package should be installed into ~/test/lib/python2.7/site-packages
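To confirm the egg went into the virtualenv rather than the system Python, you can check while the environment is still active (a quick sanity check):
which python
pip freeze
The first command should point at ~/test/bin/python, and the second should list the newly installed package.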