Overall, pip is working fine on my server. I have just installed the package waitress and the installation seems successful. I checked it with pip freeze:
$ pip freeze | grep waitress
waitress==2.1.0
Waitress can also be imported via python3:
>>> import waitress
>>>
However, waitress-serve cannot be executed:
$ waitress-serve
Command 'waitress-serve' not found, but can be installed with:
apt install python3-waitress
Please ask your administrator.
I am not a root user on this server. Could that be a reason why the package was only partially installed, or am I just speculating?
Since I am not authorized to run apt install, and since the plain pip install worked in my virtualenv, I would like to get this working without the suggested apt install python3-waitress command.
The conclusion is that, while the package is installed both inside and outside the virtualenv, it is only actually executable inside it.
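You can see this directly by asking the shell where waitress-serve resolves from, with and without the environment active (a quick check, assuming a Linux shell and a virtualenv in ./venv):
$ source venv/bin/activate
$ which waitress-serve   # found: it lives in the venv's bin folder
$ deactivate
$ which waitress-serve   # prints nothing: the venv's bin folder is off the path again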
When an installed Python package provides an actual entry point (an executable command), that command will generally not be on your path.
When you use a virtual environment, activating the environment puts various parts of that environment onto the path temporarily. This is intended to ensure that commands like python or python3 run the environment's Python, but it also allows those entry points to be found on the path.
A system Python installation (here I mean, not just a Python that comes with your operating system, but also one that you install manually after the fact - but not a virtual environment) will generally not have its library folders on the path by default - only enough to make python and pip work. (On Windows, often even these are not added to the path; instead, a program py is placed in the Windows installation folder, and it does the work of looking for Python executables.) Even if you are allowed to install things directly into a system Python (and you should normally not do this if you can avoid it, even if you're allowed to), they won't be findable there.
Of course, you could execute these things just fine by explicitly specifying their paths. However, the normally correct approach is to just ensure that, when you want to run the program, the same virtual environment is activated into which you installed the package.
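For example, assuming a virtualenv in ./venv and a WSGI application importable as myapp:app (both placeholder names), either of these works:
$ ./venv/bin/waitress-serve --port=8080 myapp:app   # explicit path, no activation needed
or, equivalently:
$ source venv/bin/activate
$ waitress-serve --port=8080 myapp:app   # found on the path while the venv is active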
(On my system, I have one main "sandbox" virtual environment that I use for all my projects - unless I am specifically testing the installation process, or testing how the code works on a different version of Python. Then I use a wrapper script to open a terminal window, navigate to a folder that contains all my projects, and activate the environment.)
If waitress is installed inside a virtual environment, I think you may have accidentally (or not) stepped out of said virtual environment.
If you are running a virtual environment, you can try the following commands, one after the other:
source venv/bin/activate  # venv is assumed to be the name of the virtual environment you are using
pip install waitress
waitress-serve
Any time you need to use waitress, you will need to activate the virtual environment again:
source venv/bin/activate
waitress-serve
Please note that I am assuming you are running in a Linux environment.
If this is not the issue you are facing, then feel free to expand a bit more on your question.
Edit: Installing with pip and running it worked perfectly in my virtualenv, running on Python 3.8.10.
If I create a virtualenv on my local machine, activate it, and run python3, it works fine (with the imported modules). However, after I send it to the live server (using scp and FileZilla) it gives the error:
-bash: /<path>/venv4/bin/python3: cannot execute binary file: Exec format error
This also happens with python and python3.8 in the same package.
I have tried reinstalling virtualenv and pipx, recreating the virtualenv and reuploading a few times.
It also seems that the venv's interpreter isn't being picked up: when I activate the virtualenv on the live server and type which python3, it shows me the system python3:
/usr/bin/python3
It also does not work if I try to execute the venv's python3 directly, using the full path.
The reason I'm doing this is that the old virtualenv I was using stopped working; it can't seem to find the installed modules anymore, and I'm not sure why.
Any help would be much appreciated.
I believe some pip packages contain more than just Python code, and must be compiled. If your host OS or architecture is different from your server's, or you have different libraries installed, the host-compiled code will not be compatible with your server. (Note that the venv's bin/python3 is itself a copy of, or link to, a native interpreter binary, which is exactly what the "Exec format error" is complaining about.)
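One way to confirm such a mismatch (a sketch, assuming the standard file utility is available on both machines) is to inspect the venv's interpreter on each side and compare the reported formats:
file venv/bin/python3   # run on both your local machine and the server
# differing architectures or executable formats in the two outputs mean the
# copied venv cannot run on the server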
Common practice is to create a file with a list of required packages, using something like
pip freeze > requirements.txt
and rebuild the environment on the server, using something like
pip install -r requirements.txt
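Putting the two steps together, a minimal sketch of the rebuild on the server (assuming requirements.txt has been copied over and the venv will live in ./venv):
python3 -m venv venv               # create a fresh venv with the server's own interpreter
source venv/bin/activate
pip install -r requirements.txt    # reinstall the dependencies, built natively for the server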
I have a Python installation on my PC (Windows 10) which comes from Anaconda. I am a data scientist, and using conda as a package manager is very convenient for me.
However, sometimes I want to develop a small app or script to share with my colleagues. In these cases I create a project folder and run python -m venv .venv inside it.
This way, I can install only the essential packages I need, and later share the requirements.txt file.
The issue I'm having is that the python interpreter being used is still the default one, namely the one that came with Anaconda, even if I activate the virtual environment and deactivate the conda one.
Specifically, if I run python in the terminal, I get this warning message:
Warning:
This Python interpreter is in a conda environment, but the environment has
not been activated. Libraries may fail to load. To activate this environment
please see https://conda.io/activation
This is rather inconvenient. My base Python installation is 3.7, but if I wanted to use an earlier version, or 3.8, I don't seem to be able to choose.
I would expect that the python executable being used is the one in the current active environment, but this doesn't seem to be the case.
How can I achieve that?
First you have to install the version of Python you want to use in your venv; it must already be available somewhere on your system before you can create a venv from it.
Then, instead of just python -m venv .venv, you specify which Python to use with its full path: /path/to/pythonX.Y -m venv .venv
You can't have a venv that shares multiple versions of python, to my knowledge at least.
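For example, on Windows the full-path version could look like this (the interpreter path below is a placeholder for wherever your 3.8 installation lives):
C:\path\to\Python38\python.exe -m venv .venv   # create the venv from a specific interpreter
.venv\Scripts\activate
python --version   # should now report 3.8.x
If you have the py launcher installed, py -3.8 -m venv .venv achieves the same thing without typing the full path.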
I have Python installed through the Windows Store and I can install programs using pip, but when I try to run said programs, they fail to execute in PowerShell.
How can I make sure that the necessary "scripts" folder is in my path? I never faced these problems when installing from the standalone installer.
For example, "pip install ntfy" runs successfully in PowerShell.
The command "ntfy send test" fails, telling me the term is not recognized as a cmdlet, function, etc.
The 'ntfy' program is located here /mnt/c/Users/vlouvet/AppData/Local/Packages/PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0/LocalCache/local-packages/Python37/Scripts/ntfy.exe
What is the recommended way of editing my path so that programs installed via pip are available across windows store updates of the Python language?
In advance
I highly recommend that you not use the Python installed from the Windows Store, because you'll face errors like this one, and even nastier ones.
The easy solution
Create a virtual environment in a more accessible folder, for example C:\Users\<user>\python. To do so, do the following:
Using PowerShell, go to your user folder using cd (note that PowerShell usually already starts inside your user folder; this is an important setting to have, and if not, you should change your PowerShell starting location to this folder for the future);
Now that you're in your user folder, type mkdir python; cd python into PowerShell;
Now, to create a virtual environment, type python -m venv venv;
(You can verify that your virtual environment has been created by listing the folders, with the command ls);
You have created a virtual environment. Now, you need to activate it. To activate, run the following: ./venv/Scripts/activate;
Now, you have fully created and activated a virtual environment for your current PowerShell session. You can now install any packages / programs using pip.
After that, the only thing that you need to do is to add C:\Users\<user>\python\venv\Scripts to your Path, and you're good to go; the whole session is sketched below.
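A minimal sketch of that whole session in PowerShell (folder and package names are just the ones suggested above, or placeholders):
cd ~                       # go to your user folder
mkdir python; cd python    # create and enter the new folder
python -m venv venv        # create the virtual environment
./venv/Scripts/activate    # activate it for this session
pip install <some-package> # install whatever you need with pip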
Caveats
By adding this folder to your Path, you may end up using an outdated Python version in the future, since the Scripts folder inside your virtual environment also contains a python executable that will now be picked up from the Path.
The recommended solution
As I stated before, I do not recommend having the Microsoft Store version of Python installed on your machine. That said, you're probably using it for the convenience of having the latest Python version installed as soon as it's released. To satisfy this need while also getting rid of your MS Store Python, I recommend using Chocolatey to install Python (and pretty much any other programs for development).
What is Chocolatey?
Chocolatey is a package manager for Windows, pretty much like apt-get for Ubuntu Linux or Homebrew for macOS. By using a package manager, you get rid of the hassle of always having to execute the (mostly annoying) install wizards on Windows.
To install Chocolatey:
Go to chocolatey.org/install and follow the install instructions;
(Recommended: Take a look on their documentation later to see what Chocolatey is capable of);
With Chocolatey installed, take a test drive and see if it is working properly by running choco -v in PowerShell;
By having Chocolatey installed, you can now run choco install python -y. Let's break down this command:
choco install -> The package installer of chocolatey
python -> the name of the package you want to install
-y -> this tells the installer to skip install confirmation by saying "Yes to All" to the scripts that will be executed in order to install the package.
With Python installed from Chocolatey, you will also see that it is already added to your Path. This means that any Python package or executable installed globally will now be available on your machine!
Hope I could help you!
The above answer is good but I managed to get it to work by doing the following.
1. Find your installation under C:\Users\"your user"\AppData\Local\Packages; it will be named something like PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0
2. Open your Windows settings from the Start menu
3. In the search box, type Environment variables. "Edit environment variables for your account" should pop up; click it
4. In the top box, find Path and click it
5. On the right, click New and enter C:\Users\"your user"\AppData\Local\Packages\"python install directory name from step 1"\LocalCache\local-packages\Python37\Scripts in the box under the last item in the list
6. Open a new cmd prompt and type the script you wanted; it should work.
On Windows you can find the user base binary directory by running
python -m site --user-site
and replacing site-packages with Scripts.
For example, this could return
C:\Users\Username\AppData\Roaming\Python36\site-packages
so you would need to set your PATH to include
C:\Users\Username\AppData\Roaming\Python36\Scripts
You can set your user PATH permanently in the Control Panel. You may need to log out for the PATH changes to take effect.
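If you would rather have Python compute the Scripts directory for you, a one-liner along these lines should work (a sketch, assuming the standard user-site layout shown above):
python -c "import site, os; print(os.path.join(os.path.dirname(site.getusersitepackages()), 'Scripts'))"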
While working on a new python project and trying to learn my way through virtual environments, I've stumbled twice with the following problem:
I create my virtual environment called venv. Running pip freeze shows nothing.
I install my dependencies using pip install dependency. The venv library starts to populate, as confirmed by pip freeze.
After a couple of days, I go back to my project, and after activating the virtual environment via source venv/bin/activate, running pip freeze shows me the whole list of libraries installed in the system Python distribution (I'm using Mac OS 10.9.5), instead of the small subset I wanted to keep inside my virtual environment.
I'm sure I must be doing something wrong in between, but I have no idea how could this happen. Any ideas?
Update:
After looking at this answer, I realized that when running pip freeze, the pip command being invoked is the one at /usr/local/bin/pip instead of the one inside my virtual environment. So the virtual environment is fine, but I wonder what changes in the path might be causing this, and how to prevent them from happening again (my PYTHONPATH variable is not set).
I realized that my problem arose when moving my virtual environment folder around the system. The fix was to modify the activate and pip scripts located inside the venv/bin folder to point to the new venv location, as suggested by this answer. Now my pip freeze is showing the right files.
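When this kind of confusion strikes, a few plain shell commands (nothing project-specific) will show which pip and which interpreter are actually being used:
which pip            # should point into venv/bin while the venv is active
pip --version        # also prints which Python installation this pip belongs to
python -m pip freeze # sidesteps PATH lookup for pip by going through the venv's python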
So I was googling an event where pip required sudo privileges, and I came across the following two threads:
What are the risks of running 'sudo pip'?
and
Is it acceptable & safe to run pip install under sudo?
The first thread talks about the security risk of running an unknown .py file with pip (makes sense), but from the second one I almost got the impression that there exist a global and a local python installation that you should not mix up. I guess it makes sense that you can have a global installation for all users and then maybe an appended path to local packages for each user, but is this true? (It would also make sense, since Ubuntu (which I'm using) depends on certain python packages, so having a global root-protected python directory would protect these.) However, if this is true, I can't find the two separate directories. I tried
import sys
print(sys.path)
with both sudo and no sudo, and I got the exact same directories.
In any case, I think I'll move to virtualenv, but in that case I was wondering: what would happen if I accidentally forgot to activate the environment and ran an exotic requirements.txt outside it? Wouldn't that corrupt the standard user directory I'm trying so hard to keep clean? (And if so, is that reversible? I'm just thinking, it's only forgetting to type one command, and then your Python installation is messed up.)
I would indeed advise always using virtualenv for requirements specific to a certain application. Tools you use as a developer across multiple projects (something like ipdb) are fine to install globally on the system.
Note that all pip packages are open source, so you have some assurance that popular pip packages are unlikely to contain malicious code, but they could of course contain security vulnerabilities.
To prevent accidentally installing a pip package outside a virtualenv, you can add this to your .bashrc:
export PIP_REQUIRE_VIRTUALENV=true
When you then run pip install something outside a virtualenv, it will show an error message:
Could not find an activated virtualenv (required).
If you still want to be able to install pip packages outside a virtualenv, you can add a function in your .bashrc like this:
syspip() {
    PIP_REQUIRE_VIRTUALENV="" pip "$@"
}
Then you can run syspip install something to install something globally on your system.
As for the script you are running:
import sys
print(sys.path)
It doesn't matter whether you run that with sudo or not: sudo only changes the user privileges you are executing the command with, which makes no difference to this script's output.
Running sudo pip install <package> will install the package to the system wide set of packages, typically stored somewhere like /usr/lib/python2.7/site-packages.
Running pip install package without a virtualenv activated will attempt to install the package to the same place, but because (if your system is configured sanely/correctly) you won't have write access to that folder, the install command will fail.
If you absolutely have to install globally, it's generally better to use distribution packages where you can, as you then get the benefit of automatic updates. As you've worked out, however, it's far better not to install packages globally at all, and to use virtualenvs.