I have a systemd service that regularly reads the first line of a root-owned file, transforms it and then uses png_util:

import png_util

with open('root-owned-file', 'r') as f:
    first_line = f.readline()
    # ...rest of logic...
Now, when the systemd service starts, it doesn't have access to the png_util library I installed with pip (pip install png_util), because pip installs it only for the installing user. The same thing happens when I start the script with sudo:
ModuleNotFoundError: No module named 'png_util'
If I read a file owned by me and execute the script normally as my user, everything works fine.
The systemd service:
[Unit]
Description=PNG
[Service]
ExecStart=/tmp/pngreader
[Install]
WantedBy=multi-user.target
Is the trick simply running pip install --user as the root user and then somehow setting PYTHONPATH for root?
I think you can get what you need with a virtual environment.
You need to create a virtual environment specifically for that script and install into it all the packages it needs, at the right versions. As long as you run your script with that virtual environment active, everything will be available. See the venv documentation here.
To create a virtual environment, run python3 -m venv <your_venv_path>, where the path is wherever you want to store it, e.g. ~/.venvs/my_project/
To install packages, you first have to activate the environment and then run pip:
source <your_venv_path>/bin/activate
pip install png_util
To here you would have your virtual environment ready and your package installed. If you run your script with your virtual environment active the package will be available.
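If you want the script itself to confirm that it is running inside a virtual environment, here is a quick standard-library check (a sketch; the function name is my own, not part of any API):

```python
import sys

def in_virtualenv():
    # Inside a venv, sys.prefix points at the environment, while
    # sys.base_prefix still points at the base interpreter it was made from.
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)

print(in_virtualenv())
```

This can be useful as a sanity check at the top of a daemon script before it tries any third-party imports.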
Now, because your script runs as a daemon, this is how you make sure it runs within your virtual environment. The virtual environment contains its own copy of the Python interpreter, and you just tell your script to use that copy by adding #!<your_venv_path>/bin/python as its first line (and making the script executable).
That way when your script runs it does run within that virtual environment where all the packages are installed.
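Applied to the unit file from the question, you can also point ExecStart directly at the venv's interpreter instead of relying on the shebang; a sketch, where the venv path below is a placeholder for wherever your environment actually lives:

```ini
[Unit]
Description=PNG

[Service]
ExecStart=/home/user/.venvs/my_project/bin/python /tmp/pngreader

[Install]
WantedBy=multi-user.target
```

Either approach (shebang or explicit interpreter in ExecStart) ensures systemd runs the script with the venv's Python, where png_util is installed.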
PS: Everything could potentially work by simply running pip with sudo, because that installs the package system-wide, making it available to all users. But that option is strongly discouraged because of the security risks it creates; see this post on the security risks of running sudo pip.
Hope this helps!!
Related
I created an application using py2app and have completed it. Running it on my machine has no problem, but I am concerned it might have problems such as missing module or other errors when run on someone else's with no programs installed. Is there a way to test this?
(sorry, I'm sure this is on the Internet but I'm not sure how to search for it)
You really can address your concern about module-not-found and other errors on your current system by testing the app in an isolated virtual environment.
You can create a virtual environment, install the necessary packages in it and see if the app works.
To do so, I think the fastest, easiest and most comfortable way is to use miniconda to create a backup of the current (or a fresh, working) virtual environment and re-create it on another (or the same) machine.
miniconda is an environment-management tool, a super-lightweight version of conda (only ~60 MB).
Just install miniconda from the instructions here.
Then create a new environment as below:
conda create --name newtestenvironment python=3.9
The command will create an environment containing only Python. You can then test your app, see which no-module errors you get, and install the missing packages.
Whenever you reach a point where everything is working, you can export the necessary packages with:
conda env export > environment.yml
You can also create a requirements.txt file and tell the user to run pip install -r requirements.txt before running the app:
conda list -e > requirements.txt
(Note that conda list -e emits conda-style pins that pip cannot always parse; inside the environment, pip freeze > requirements.txt produces a pip-compatible file.)
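If pip-compatible pins are what you need, you can also generate them from the running environment with only the standard library; a sketch, roughly equivalent to pip freeze for normally installed packages:

```python
from importlib import metadata  # standard library since Python 3.8

# Build pip-style "name==version" pins for every installed distribution.
pins = sorted(
    f"{dist.metadata['Name']}=={dist.version}"
    for dist in metadata.distributions()
)
print("\n".join(pins))
```

Redirecting that output to requirements.txt gives users a file they can feed straight to pip install -r.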
Overall, pip is working fine on my server. I have just installed the package waitress and the installation seems successful. I checked it with pip freeze:
$ pip freeze | grep waitress
waitress==2.1.0
Waitress can also be imported via python3:
>>> import waitress
>>>
However, waitress-serve cannot be executed:
$ waitress-serve
Command 'waitress-serve' not found, but can be installed with:
apt install python3-waitress
Please ask your administrator.
I am not a root user on this server. Could this be a reason why the package was installed partially, or am I speculating here?
Since I am not authorized to run apt install and since the simple pip install worked in my virtualenv, I would like to be able to get this to work without using the suggested apt install python3-waitress command.
The conclusion is that, while it is installed both inside and outside virtualenv, it is only actually executable inside it.
When installed Python packages give you an actual entry point (an executable command), it generally will not be on your PATH.
When you use a virtual environment, activating the environment puts various parts of that environment onto the path temporarily. This is intended to ensure that commands like python or python3 run the environment's Python, but it also allows those entry points to be found on the path.
A system Python installation (here I mean, not just a Python that comes with your operating system, but also one that you install manually after the fact - but not a virtual environment) will generally not have its library folders on the path by default - only enough to make python and pip work. (On Windows, often even these are not added to the path; instead, a program py is placed in the Windows installation folder, and it does the work of looking for Python executables.) Even if you are allowed to install things directly into a system Python (and you should normally not do this if you can avoid it, even if you're allowed to), they won't be findable there.
Of course, you could execute these things just fine by explicitly specifying their paths. However, the normally correct approach is to just ensure that, when you want to run the program, the same virtual environment is activated into which you installed the package.
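You can ask Python where those entry-point scripts land for the current interpreter; for a venv this is its bin/ (or Scripts\ on Windows) directory, which is exactly what activation adds to PATH:

```python
import sysconfig

# The directory where pip places console scripts such as waitress-serve
# for the interpreter currently running.
scripts_dir = sysconfig.get_path("scripts")
print(scripts_dir)
```

Running waitress-serve via that full path works even without activating the environment.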
(On my system, I have one main "sandbox" virtual environment that I use for all my projects - unless I am specifically testing the installation process, or testing how the code works on a different version of Python. Then I use a wrapper script to open a terminal window, navigate to a folder that contains all my projects, and activate the environment.)
if waitress is installed inside a virtual environment, I think you may have accidentally (or not) gotten out of said virtual environment.
If you are running a virtual environment, you can try the following commands, one after the other:
source venv/bin/activate #venv is assumed to be the name of the virtual environment you are using.
pip install waitress
waitress-serve
anytime you need to use waitress you will need to activate the virtual environment yet again:
source venv/bin/activate
waitress-serve
Please note that I am assuming you are running in a Linux environment.
If this is not the issue you are facing, feel free to expand a bit more on your question.
Edit: Installing with pip and then running waitress-serve worked perfectly in my virtualenv, on Python 3.8.10.
I'm on macOS Catalina and I'm trying to install and run Mephisto (see https://github.com/facebookresearch/mephisto/blob/master/docs/quickstart.md). I created a python3 virtual environment, then went to the directory and ran
sudo pip3 install -e .
This seems to have run fine as I can now run mephisto and see the list of commands and options. However when I run mephisto register mturk it throws No module named 'mephisto.core.argparse_parser' because of an import statement in the python file. This seems like a general issue of a module installing but not importing the module properly, but would appreciate help in how to fix it. Is it because my $PYTHONPATH is currently empty?
Mephisto Lead here! This seems to have been a case of unfortunate timing, as we were in the midst of a refactor there and some code got pushed to master that should've received more scrutiny. We'll be moving to stable releases via PyPI in the near future to prevent things like this!
I created a python3 virtual environment and then went to the directory and ran
sudo pip3 install -e .
You should not have used sudo to install this library, if you meant to install it in a virtual environment. By using sudo the library probably got installed in the global environment (not in the virtual environment).
Typically:
create a virtual environment:
python3 -m venv path/to/venv
install tools and libraries in this environment with:
path/to/venv/bin/python -m pip install Mephisto
use python in the virtual environment:
path/to/venv/bin/python -c 'import mephisto'
use a tool in the virtual environment:
path/to/venv/bin/mephisto
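To double-check which interpreter a given command is actually using, sys.executable is handy; run it through the venv's python and it should point into the venv:

```python
import sys

# Full path of the interpreter currently running; inside a venv this
# points into the venv's bin/ (or Scripts\) directory.
print(sys.executable)
```

If this prints a system path rather than your venv path, the package was installed into the wrong environment.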
Is it because my $PYTHONPATH is currently empty?
Forget PYTHONPATH. Basically one should never have to modify this environment variable (this is almost always ill-informed advice to get PYTHONPATH involved).
Check that an __init__.py file exists in the module's directory. If not, try creating an empty one.
I am using Virtualenv to learn Python. The author of the book I am reading wants no system-wide Python packages available during learning, so we created a virtual environment via virtualenv (the pip-installed virtualenv package, not the built-in Python 3 venv functionality). The issue is that I cannot figure out how to run a script while inside the virtualenv. Virtualenv's documentation says that activation (or naming the full path) isn't required when running from within the virtual environment's directory, and although I have moved my file both there and into the Scripts directory, I cannot run it while the virtualenv is active. Any help? I am using Python 3.6.1. The code I'm trying to run is:
def local():
    m = 7
    print(m)

m = 5
print(m)
local()  # call the function so the local scope actually runs
I realize it's not even training wheel code, but what I'm trying to ultimately do is be able to run code from within the virtual environment to follow as the book suggests. I'm also using a fully updated Windows 10 OS.
What happens when I run the script is this:
(.virtualenv) c:\users\aiii> cd c:\users\aiii\desktop\learning.python\.virtualenv
(.virtualenv) c:\users\aiii\desktop\learning.python\.lpvenv>scopes1.py
'scopes1.py' is not recognized as an internal or external command, operable program or batch file.
(.virtualenv) c:\users\aiii\desktop\learning.python\.lpvenv>python scopes1.py
python: can't open file 'scopes1.py': [Errno 2] No such file or directory.
(.virtualenv) c:\users\aiii\desktop\learning.python\.lpvenv>
I have placed the script both directly in the learning.python folder where the environments are contained (c:\users\aiii\desktop\learning.python\.lpvenv) and inside the .lpvenv Scripts folder (c:\users\aiii\Desktop\learning.python\.lpvenv\Scripts\), since that is where the virtualenv's other scripts run from.
First, install Virtualenv:
sudo apt-get install python-virtualenv
Then Create Virtualenv:
virtualenv venv #venv is name
To activate the virtualenv, first move to the folder in which you want to enable it, and run this command:
source venv/bin/activate
Once your work is done, deactivate the virtualenv:
deactivate
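Note that the question above is about Windows, where the activation script lives in a different place than the bin/ directory shown here. This snippet computes the right path for either platform (the venv path is the one from the question):

```python
import os

venv = r"c:\users\aiii\desktop\learning.python\.lpvenv"  # path from the question

# virtualenv/venv layouts differ per platform:
#   Windows:      <venv>\Scripts\activate.bat (cmd.exe) or activate.ps1 (PowerShell)
#   Linux/macOS:  <venv>/bin/activate         (sourced into the shell)
if os.name == "nt":
    activate = os.path.join(venv, "Scripts", "activate.bat")
else:
    activate = os.path.join(venv, "bin", "activate")
print(activate)
```

On Windows you run the activate script directly; on Linux/macOS you source it, as shown above.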
Is there any easy way to export the libs my script needs so that I can put all of the files into a git repo and run the script from Jenkins without the need of installing anything?
context:
remote Jenkins without some python libs (RO - no access to terminal)
need to run my script that needs external libs such as paramiko, requests, etc
I have tried freeze.py but it fails at the make stage.
I have found some articles here regarding freeze.py, py2exe and py2app, but none of those helped me.
You can use a virtual environment to install your required python dependencies in the workspace. In short, this sets up a local version of python and pip for which you can install packages without affecting the system installation. Using virtual environments is also a great way to ensure dependencies from one job do not impact other jobs. This solution does require pip and virtualenv to be installed on the build machine.
Your build step should do something like:
virtualenv venv
. venv/bin/activate
pip install -r requirements.txt
# ... perform build, tests ...
If you separate your build into several steps, the environment variables set in the activate script will not be available in subsequent steps. You will need to either source the activate script in each step, or adjust the PATH (e.g. via EnvInject) so that the virtualenv python is run.
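For command lookup, the PATH adjustment mentioned above is essentially all that activation does. A sketch of applying it from Python when launching build commands as subprocesses (the venv path is a placeholder for your workspace layout):

```python
import os
import subprocess  # used when actually launching the build commands

venv_bin = "/path/to/workspace/venv/bin"  # hypothetical venv location

# 'activate' mainly prepends the venv's bin dir to PATH so that
# 'python', 'pip' and console scripts resolve from the venv first.
env = dict(os.environ)
env["PATH"] = venv_bin + os.pathsep + env.get("PATH", "")

# e.g. subprocess.run(["python", "run_tests.py"], env=env, check=True)
print(env["PATH"].split(os.pathsep)[0])
```

The same idea applies to EnvInject: prepend the venv's bin directory to PATH once and every subsequent step resolves the venv's tools.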