I am developing a Python library, and I need to make it available from a GCP Notebook.
Is it possible? How?
Details:
I use Pipenv to manage my library's dependencies. Currently my library's source code exists locally and in a private Git repository, so it is not on PyPI.
My code has multiple module files in nested directories.
My library's dependencies are available on PyPI.
Using Pipenv, those dependencies are described in a Pipfile.
This is the type of my Jupyter VM instance : https://cloud.google.com/deep-learning-vm
And this is some interesting structure I found using SSH from the Google console:
$ ls /opt/deeplearning/
bin binaries deps jupyter metadata proxy-agent-config.json restriction src workspace
I intend to install my library (using pip or something else) so that I can import its modules from the notebooks.
I need all of my library's dependencies to be installed when the library itself is installed.
Since the Python Package Index is public, I don't want to publish my proprietary library there.
Thank you.
What I understand from your question is: you are writing your own Python module, which depends on many third-party Python packages (all installable with pip).
In this situation, I would probably run pip freeze in the environment where the module already loads everything perfectly.
pip freeze > requirements.txt (this creates a requirements.txt file listing all the dependency modules/libraries)
Now, once in the Jupyter notebook, you can use the following command to first install all the requirements.
(Run the following in a notebook code cell.)
# Install a pip package in the current Jupyter kernel
import sys
!{sys.executable} -m pip install -r requirements.txt
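Since the library itself lives in a private Git repository rather than on PyPI, one way to make it pip-installable together with its dependencies is to give it a standard packaging file and install it straight from Git. This is only a sketch: the package name, dependency list, and repository URL below are placeholders, and the dependencies should mirror whatever the Pipfile declares.

# setup.py at the root of the library (hypothetical names and versions)
from setuptools import setup, find_packages

setup(
    name="mylib",                      # placeholder package name
    version="0.1.0",
    packages=find_packages(),          # picks up the nested module directories
    install_requires=[                 # pip installs these automatically with the library
        "requests>=2.0",
        "numpy",
    ],
)

Then, in a notebook cell on the Deep Learning VM, the library could be installed directly from the private repository (the URL is a placeholder, and the VM needs credentials or an SSH key for the repo):

import sys
!{sys.executable} -m pip install git+https://example.com/your-org/mylib.git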
Related
I would like to easily move a Python project from one PC to another. When I created the project, I used a virtual environment in order to avoid problems with different package versions.
What I did was simply copy the project folder and paste it onto the destination PC. Once I opened the project with PyCharm, I activated the virtual environment with project_path/venv/Scripts/activate, but when I tried to execute any script, it said it didn't find the modules.
What workflow should I follow in order to create projects and be able to run them from multiple PCs without needing to install all the dependencies by hand?
Since you did not specify your Python version, I will provide a solution that works for both Python 2.x and 3.x.
My suggestion is to create a requirements.txt file containing all your requirements.
This file can be easily prepared using the output from the command:
pip freeze
Then you can paste the output into your requirements.txt file, and when you want to install your Python code on another PC you can simply run:
pip install -r requirements.txt
to install your requirements again.
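For illustration, the requirements.txt produced by pip freeze is just a list of pinned packages, one per line (the names and versions below are only examples):
numpy==1.21.6
pandas==1.3.5
requests==2.28.1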
Depending on your project it could also be possible, for example, to create a single EXE file (if you are using Windows machines), but more detail is needed if that is the case.
In case you are using Python 3, the method that is at the moment arguably the most popular in the Python community is Pipenv.
Here's its relevant documentation.
And here you can read a simple example of a workflow.
If you are using Python 3, then use Pipenv. It will automatically create a Pipfile and a Pipfile.lock, which ensure that reinstalling the dependencies on a different machine gives you the same packages.
Basic and helpful commands:
pipenv shell            # activate the virtualenv
pipenv install          # install the dependencies listed in the Pipfile
pipenv install requests # install the requests lib and auto-update Pipfile and Pipfile.lock
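For reference, after pipenv install requests the generated Pipfile looks roughly like this (the source block is the default, and the Python version shown is only an example):

[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"

[packages]
requests = "*"

[dev-packages]

[requires]
python_version = "3.8"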
I'm writing a plugin for an app, and found that packages I install with pip are apparently being ignored, so I'm trying to install them within my plugin's directory using
pip install --ignore-installed --install-option="--prefix=[plugins-folder]" [package-name]
This creates a folder structure lib/python2.7/site-packages/[dependencies], which is fine by me, except that I can't figure out how to then import those dependencies. Even if I manage to import the main package, it breaks because it can't find its own dependencies, which are in that same directory structure.
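For context, one common way to make packages installed under such a prefix importable is to add the generated site-packages directory to sys.path before importing them; a minimal sketch, where the plugins-folder path and the dependency name are placeholders:

import os
import sys

PLUGINS_FOLDER = "/path/to/plugins-folder"  # placeholder for [plugins-folder]

# Put the prefix-installed site-packages on the import path so the main
# package and its dependencies can all be resolved from there.
sys.path.insert(0, os.path.join(PLUGINS_FOLDER, "lib", "python2.7", "site-packages"))

import some_dependency  # hypothetical package installed into that prefix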
I suggest that you use a virtual Python environment: virtualenv for Python 2/Python 3, or python3 -m venv for Python 3.
Your Python environment seems disorganized, and you should keep an isolated environment for each Python app.
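A rough sketch of that setup, assuming the plugin's dependencies are listed in a requirements.txt (the environment name is a placeholder):

python3 -m venv plugin-env            # or: virtualenv plugin-env (works on Python 2 as well)
source plugin-env/bin/activate        # activate the isolated environment
pip install -r requirements.txt       # install the plugin's dependencies into it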
I'm working on packaging up a suite of tools that can be installed in different environments, and I've run into many problems with dependencies, which are an issue since this package will be installed in air-gapped environments.
The package will be installed via Anaconda, and I have provided the installation script. In order to create the package, I ran the following command:
conda metapackage toolkit_bundle 0.0.1 --dependencies r-essentials tensorflow gensim spacy r-ggplot2 r-plotly r-dplyr r-rjson r-tm r-reshape2 r-shiny r-sparklyr r-slam r-nlp r-cluster r-ggvis r-plyr r-tidyr r-zoo r-magrittr r-xtable r-htmlwidgets r-formattable r-highcharter --summary "Toolkit Bundle"
This produced a .tar.bz2 file that I held on to and tried to install via the conda command:
conda install toolkit_bundle.tar.bz2
The command seemed to run successfully, but I was unsuccessful in importing the modules in Python. I also tried creating a virtual conda environment and importing the package.
conda create -n myenv toolkit_bundle-0.0.1.tar.bz2
There was no error, but none of the modules could be imported either.
Am I missing a step in this process, or is my thought process flawed?
Update:
It looks like my thinking was pretty flawed. A quick skim of the conda metapackage command documentation revealed the following:
Tool for building conda metapackages. A metapackage is a package with no files, only metadata. They are typically used to collect several packages together into a single package via dependencies.
So my initial understanding was incorrect, and the package only contains metadata. Are there any other ideas for creating packages with their dependencies resolved that can be installed in an air-gapped environment?
I think you want to look at the command conda build for making packages, which just requires writing an appropriate meta.yaml file containing the dependencies, along with some other build parameters. There is good documentation for doing so on the conda website: https://conda.io/docs/user-guide/tasks/build-packages and there is a repo of examples.
If you have a working pip package, you can also auto-generate a conda package recipe using conda skeleton.
Once you have built a set of packages locally, you can use the --use-local option to conda install to install from your local repo, with no need for an internet connection (as long as the packages for all the dependencies are in your local repo).
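As a rough sketch of what such a recipe might look like (the package name, version, and dependency list here are placeholders taken from the question, not a tested recipe):

# meta.yaml
package:
  name: toolkit_bundle
  version: "0.0.1"

build:
  number: 0

requirements:
  run:
    - python
    - tensorflow
    - gensim
    - spacy

With the recipe in place, something along these lines builds the package and installs it from the local channel without an internet connection:

conda build .                              # build the package from the recipe directory
conda install --use-local toolkit_bundle   # install from the local build cache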
I was able to download the packages I needed via the PyPI website, and after determining the dependencies, I manually downloaded them and wrote a script to install them in the required order.
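For anyone taking the same route, pip can automate most of that manual step; a minimal sketch, assuming the dependencies are listed in a requirements.txt:

# on a machine with internet access: fetch the packages and all their dependencies
pip download -r requirements.txt -d ./offline_packages

# copy ./offline_packages to the air-gapped machine, then install without contacting PyPI
pip install --no-index --find-links ./offline_packages -r requirements.txt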
I am going to format my PC, and I would like to somehow collect all the Python modules I currently have and package them (zip, rar, etc.), or create an index file of them, so that when I'm done formatting the PC I can reinstall them all in one go, either from the package or by using the index to pip install them all in a batch.
Is there any Python module that allows me to do that?
Use pip
pip freeze > requirements.txt
This will save the names and versions of all your installed Python modules to a file called requirements.txt.
Then, when you want to install them again, run the following command.
pip install -r requirements.txt
Using a package manager like this is good practice to get into, especially if you use a code repository, so that you don't upload all the dependencies to the repo.
If you are not already doing so, it's a good idea to use a virtual environment for your Python projects.
This will create a unique Python environment for each of your projects, keeping each project self-contained.
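A typical per-project setup looks roughly like this (the environment directory name is just a convention):

python3 -m venv .venv               # create an isolated environment inside the project
source .venv/bin/activate           # activate it (use .venv\Scripts\activate on Windows)
pip install -r requirements.txt     # install only this project's dependencies
pip freeze > requirements.txt       # refresh the pin list after adding new packages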
Yes - pip. Running pip freeze will give you the list of all the installed modules. Then all you need to do is install those modules by running pip install -r your_file_with_modules_list.
I am a Python programmer, new to IPython Notebook. I have started a notebook that uses numpy.
If I was publishing the same code as a standalone Python script, I'd include a requirements.txt file with numpy and its version, so that other users can install the same version into a virtualenv when they run the script.
What is the equivalent of this for IPython Notebook? I can't find anything about managing requirements in the documentation, only about the dependencies required to install IPython itself.
My requirement is that I'd like to make sure the notebook uses a particular version of numpy, and that I can publish the notebook with that version specified.
If you have Jupyter in your requirements.txt, and you activate that environment (I recommend virtualenv), install the requirements, and run Jupyter from it, you'll have all the specific versions you want. So:
python3 -m venv venv
source venv/bin/activate            # (different command on Windows)
pip install -r requirements.txt
jupyter lab                         # (or jupyter notebook)
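For example, such a requirements.txt could pin the notebook tooling alongside numpy (the versions below are purely illustrative):

jupyterlab==3.6.3
numpy==1.24.4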
You can do this by importing numpy and then checking numpy.__version__. Related question here.
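In a notebook cell, that check is simply:

import numpy
print(numpy.__version__)  # prints the installed numpy version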