How to use a Python library without installing it

I'm using AWS at work, with Anaconda on it. I don't have permission to install any Python library (I requested it, but it's taking ages), so I'm trying to see if there's a way around that.
I'm trying to use xgboost; is it possible to do so without actually installing it? I thought about copying and pasting the base code, but that seems a bit messy.
Any better recommendations?
I tried using a virtual environment (venv) and it doesn't seem to work: it won't let me create a virtual env, I guess because of the permissions I mentioned above.
Edit: I basically cannot install anything (I can't even access Google, actually, just the company-approved websites). As for the first idea, pasting in the base code, I would have to send it from my personal email to my work email to get it there, so it's not very simple either.

Download the package you want to install
Unzip it and cd into the folder
Run python setup.py install with the --user flag
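For example, the steps might look like this (the archive name is a placeholder; a --user install goes into ~/.local, so no admin rights are needed):
tar -xzf some_package-1.0.tar.gz
cd some_package-1.0
python setup.py install --user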

If you have access to the site-packages directory, you can copy the package directly in there, as you suggested. I guess installing from a wheel also wouldn't work?
You could also just download Python itself into a directory you have access to and start from there.
Also, does Anaconda use venv? I've always used conda create to make an Anaconda virtual environment.
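For reference, that would look something like this (the environment name and package list are just examples, and it still needs access to a conda channel):
conda create -n myenv python=3.7 xgboost
conda activate myenv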

I have used some Python modules (pexpect, for example) without actually installing them, by keeping the module file in the same folder as the project and just importing the module in the code.
I'm not sure whether that will work for you, but you should give it a try.
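For instance, if the dependency is a single-file module, the layout could look like this (the names are hypothetical); Python searches the script's own directory first, so the plain import just works:
my_project/
    my_script.py
    pexpect.py        # the module file dropped next to the script

# inside my_script.py:
import pexpect        # resolved from the local copy, no installation needed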

Related

install pycurl in another virtualenv

Here is my situation:
I work on a Python project in which we use a particular, project-specific, customized virtual environment.
However, at some point I need to introduce third-party modules into this venv, specifically pycurl and openpyxl. At present I have to run the Python scripts that depend on these two modules from Anaconda's venv, which includes both of them.
I don't want to keep switching back and forth between these two venvs.
We have a corporate firewall blocking direct access to outside code repositories. I managed, however, to download pycurl from https://dl.bintray.com/pycurl/pycurl/. After unzipping it (BTW, I use Windows), I noticed these two entries:
site-packages\curl
site-packages\pycurl-7.43.0-py3.5.egg-info
In my project's directory tree, I found a bunch of modules following much the same naming conventions; they are all under:
my_project\PythonVenv\Lib\site-packages
Then I copied curl and pycurl-7.43.0-py3.5.egg-info there, re-activated the project's venv and tried running the script, but it still complains:
ModuleNotFoundError: No module named 'pycurl'
Maybe simply copying doesn't work? Do I need to use something like "python setup.py" or "pip install"?
First, I didn't see a setup.py in the pycurl package; secondly, "pip install" doesn't work because of the corporate firewall.
Can anyone help? Thanks.
On Windows you should be using the Windows binary packages. It sounds like you are trying to unpack the source distribution.
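If you can get the matching binary wheel file onto the machine (through the same route you used for the bintray download), pip can install it from the local file without any network access; the file name below is only an example for a 64-bit CPython 3.5:
pip install pycurl-7.43.0-cp35-cp35m-win_amd64.whl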

how to properly check if a module is installed, and if not attempt to install it

I am working in Python 3.6+ and want to check if a few different modules are installed from within my script. If not, I want to attempt to install them with a few caveats:
1) The proper way to do this, if I remember correctly, is to look into 'packaging and versioning', possibly with setuptools; I'm not really sure. There is a DigitalOcean page about it that is hard to follow. It discusses this, but I keep running into the same issue with documents on this topic: they all assume the project will be uploaded to PyPI for use with pip. I specifically do not want this. I want to distribute this directly to individuals, by hand. Maybe in the future it will live in a closed, not-open-to-everyone GitHub repo.
Currently in my script I'm using a try and except: try to import these modules, and if they don't exist, run this exception handler, which I don't know if it works:
try:
    import colorama
except ImportError:
    # note: the package name must be passed as a string
    from pip._internal import main as pip
    pip(['install', 'colorama'])
    import colorama
print('colorama imported successfully')
And for what it's worth, I have no idea what pip(['install', 'colorama']) is actually doing.
The packaging approach seems to bundle imported modules with your code. How does one perform this? Instead of checking whether colorama is installed and then attempting to install it from within the script, how do I just include the entire thing, assuming that is the 'right' way to do this?
One thing that's usually done to avoid this problem is to build your program in a virtual environment which you know contains the correct Python packages - and then either
package the entire virtual environment with your project as a unit, or
write a requirements.txt file that lists all the packages (and versions) that are expected to be installed before the user runs the program (you'd install everything on that list by doing pip install -r requirements.txt on the command line before running the program with python script_name.py)
Ideally, you'd then have your script fail if the required dependencies aren't there, and have the user install them manually to fix the issue.
Here's python 3's documentation on virtual environments
What you're doing now is unconventional - if it's working, it's working, but it's not great practice. The biggest reason is that your script, in its current state, installs software on the user's machine without their consent: the user did not tell the program to install this software and was not told that it was necessary, but the program installs it anyway. In your case this may be harmless, but in general it's something to stay away from, because it can get into really shady territory.
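As a rough sketch of the "fail and tell the user" approach (the module list is just an example):
import importlib.util
import sys

REQUIRED = ['colorama']   # whatever your script needs

missing = [name for name in REQUIRED if importlib.util.find_spec(name) is None]
if missing:
    sys.exit('Missing required packages: ' + ', '.join(missing)
             + '. Please run "pip install -r requirements.txt" and try again.')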

Changes to Python scripts in an installed module are not picked up

I'm new to Python, so I think my question is very basic and has been asked a few times before, but I cannot really find anything (maybe because I don't really know how to search for this problem).
I installed a Python module (reportlab). Now I wanted to modify a script inside that module, but the Python interpreter does not seem to notice the updates to the script. Ironically, the import still succeeds, even though Python actually should not find the package at all because I deleted it. Does Python use something like a cache or some other storage for modules? How can I edit modules and use the updated scripts?
From what you are saying, you downloaded a package and installed it using either a local pip or setup.py. When you do that, it copies all the files into your Python package directory (site-packages). So after an install you can delete the source folder, because Python is not looking there.
If you want to be able to modify or edit something and see the changes, you have to install the package in editable mode. Inside its main folder, do:
python setup.py develop
or
pip install -e .
This creates a link in your Python package directory pointing back to the source folder, so you will be able to modify the sources in place.
Careful: for the changes to take effect, you have to restart your Python interpreter. You cannot just import the module again.

How to import sklearn and pandas on a server with no pip access and limited memory

I'm building a Java project with Maven. This project has to be pushed to a Linux server with limited access (I cannot use pip). I've added all the dependencies for my Java component to the pom.xml, but part of my code uses a client/server approach to call a Python script, which requires pandas and sklearn. Unfortunately, I have memory constraints and cannot copy the entire directory of these libraries onto the server.
I wonder whether Maven could help me download the Python dependencies, or whether there is another efficient way of adding the Python dependencies to the repository. I've done some research but couldn't find a helpful way to address this. I'm a beginner in Python and I'd be happy if you could help me with this.
If limited access is your issue (assuming you can download but cannot install system-wide), you can download Anaconda; it doesn't need root access to be installed. It installs everything into your home directory and creates a virtual environment for you. That way you can use pip too. Just make sure you call your Python script with your Anaconda virtual environment's python:
/home/USER/anaconda2/envs/ml/bin/python script_name.py
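For completeness, an environment like the "ml" one in that path could be created with something like the following (the package list is an assumption based on the question):
conda create -n ml pandas scikit-learn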

Is there a way to "version" my python distribution?

I'm working by myself right now, but am looking at ways to scale my operation.
I'd like to find an easy way to version my Python distribution, so that I can recreate it very easily. Is there a tool to do this? Or can I add /usr/local/lib/python2.7/site-packages/ (or whatever) to an svn repo? This doesn't solve the problems with PATHs, but I can always write a script to alter the path. Ideally, the solution would be to build my Python env in a VM, and then hand copies of the VM out.
How have other people solved this?
virtualenv + requirements.txt are your friends.
You can create several virtual Python installs for your projects, each containing exactly the library versions you need (tip: pip freeze spits out a requirements.txt with the exact library versions).
Find a good reference to virtualenv here: http://simononsoftware.com/virtualenv-tutorial/ (it's from this question Comprehensive beginner's virtualenv tutorial?).
Alternatively, if you just want to distribute your code together with libraries, PyInstaller is worth a try. You can package everything together in a static executable - you don't even have to install the software afterwards.
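A minimal sketch of that workflow (the environment name is arbitrary):
# capture the current set of installed packages
pip freeze > requirements.txt

# recreate the same environment elsewhere
virtualenv myenv
source myenv/bin/activate
pip install -r requirements.txt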
You want to use virtualenv. It lets you create an application-specific directory for installed packages. You can also use pip to generate and install from a requirements.txt.
For the same goal, i.e. having the exact same Python distribution as my colleagues, I tried to create a virtual environment on a network drive, so that everybody could use it without anyone making a local copy.
The idea was to share the same packages installed in a shared folder.
Outcome: Python ran so unbearably slowly that it could not be used, and installing a package was also very sluggish.
So it looks like there is no way around using virtualenv and a requirements file. (Even if, unfortunately, it does not always work smoothly on Windows and requires manual installation of some packages and dependencies, at least at the time of writing.)
