Checking folder for discord.py requirements

The startup command for my discord.py bot installs requirements from a requirements.txt file into a specified folder (the requirements folder), but the bot doesn't look in that folder for dependencies; it only looks at the root folder.
Is there a way to make the bot look in a specific folder for dependencies instead of the root one? Keeping everything in the root folder makes it messy and harder to navigate.
Here is the command I use:
pip install -U --target /home/container/requirements -r requirements.txt; /usr/local/bin/python /home/container/bot.py
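From what I understand, Python only imports from directories on sys.path, so one workaround might be to point PYTHONPATH at that folder, e.g.:
PYTHONPATH=/home/container/requirements /usr/local/bin/python /home/container/bot.py
but I'm not sure that's the cleanest approach.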

If I understand correctly, you need to use a virtual environment.
You can do that in many ways: venv, Poetry, or Pipenv.
I recommend Pipenv: all you have to do is install your requirements.txt into the environment and then run your code inside it.
pipenv install -r requirements.txt
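Then run the bot inside the environment, for example (using the path from your question):
pipenv run python /home/container/bot.py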

Related

ERROR: Directory is not installable. Neither 'setup.py' nor 'pyproject.toml'

I've got the following error:
ERROR: Directory is not installable. Neither 'setup.py' nor 'pyproject.toml'
Background is that I'm following a guide online to expose an ML model via API Gateway on AWS that can be found here:
Hosting your ML model on AWS Lambdas + API Gateway
I'm trying to pull some python packages into a local folder using the following command:
pip install -r requirements.txt --no-deps --target python/lib/python3.6/site-packages/
I have also tried this:
pip install -r requirements.txt --no-deps -t python/lib/python3.6/site-packages/
and all I get is the above error.
Google is pretty bare when it comes to help with this issue. Any ideas?
Thanks.
Does this work?
You can create a new folder, e.g. lib, and run this command:
pip3 install <your_python_module_name> -t lib/
I'd suggest making the path to requirements.txt explicit, e.g. ./requirements.txt, if you're running the command from the same directory.
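For example, mirroring your original command (assuming requirements.txt sits in the current directory):
pip install -r ./requirements.txt --no-deps -t python/lib/python3.6/site-packages/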
You may also need to add a basic setup.py to the folder where you're trying to install. The pip docs mention that this error occurs when there's no setup.py file:
When looking at the items to be installed, pip checks what type of item each is, in the following order:
1. Project or archive URL.
2. Local directory (which must contain a setup.py, or pip will report an error).
3. Local file (a sdist or wheel format archive, following the naming conventions for those formats).
4. A requirement, as specified in PEP 440.
https://pip.pypa.io/en/stable/cli/pip_install/#argument-handling
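If you do add one, a minimal setup.py sketch might look like this (the name here is just a placeholder):
from setuptools import setup

setup(
    name="placeholder",  # hypothetical package name
    version="0.0.1",
)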
Please try this:
ADD requirements.txt ./
RUN pip install -r requirements.txt --no-deps -t python/lib/python3.6/site-packages/
The syntax is ADD <source> <destination>.
ADD requirements.txt ./ adds requirements.txt (assumed to be in the current working directory) to the Docker image's ./ folder, creating a layer that gives the daemon the context for the location of requirements.txt inside the image.
There is more about this in dockerfile-best-practices.
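Put together, a minimal Dockerfile sketch might look like this (the base image and WORKDIR are assumptions):
FROM python:3.6
WORKDIR /app
ADD requirements.txt ./
RUN pip install -r requirements.txt --no-deps -t python/lib/python3.6/site-packages/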
You can change your working directory from within Python (e.g. in a notebook, given the ! syntax below) as follows:
import os
os.chdir(path)
instead of:
cd path
Also try using:
!pip freeze > requirements.txt
instead of:
pip install -r requirements.txt
then execute your code:
!pip install .
or
!pip install -e .
In conclusion, try this:
import os
os.chdir(path)
!pip freeze > requirements.txt
!pip install .

Run an additional uninstall script with setuptools

So, I'm writing a simple command-line Python app that maintains some user data. I thought I'd use setuptools to make the build and distribution process easier, but I can't find a good solution for managing the user files.
It is easy enough to create a user directory in /etc/appname, but when a user wants to uninstall my application, presumably with:
sudo pip uninstall appname
It won't remove the /etc/appname directory holding the user data. It'll only remove the .egg file.
Is there a way to specify in setup.py a script that will run when uninstall is called, so I can manually remove the directories I've created?
Or is there an option to have setuptools create a data directory for reading and writing files from the application, not just read-only config files?
I didn't find a proper way to manage dynamic files with setuptools, so I used a Makefile. It creates the directories the program references, then runs setuptools to gather the pip packages and install the script to /usr/bin.
install:
	mkdir -p ~/.appname/users
	sudo python setup.py install
	cp -u appname/users/* ~/.appname/users

uninstall:
	sudo rm -rf ~/.appname
	sudo pip uninstall appname
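Usage is then simply:
$ make install
$ make uninstall
(the recipes call sudo themselves, so they will prompt for a password).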

Install Python requirements.txt with Makefile only if requirements.txt has changed

How can I run the make install target only if requirements.txt has changed?
I don't want to upgrade the packages every time I run make install.
I found a workaround by creating a fake file, _requirements.txt.pyc, but it is ugly and dirty. It refuses to install the pip requirements a second time because requirements.txt has no changes:
$ make install-pip-requirements
make: Nothing to be done for 'install-pip-requirements'.
But my goal is to do:
# first time,
$ make install # create virtual environment, install requirements
# second time
$ make install # detected and skipping creating virtual env,
# detect that requirements.txt have no changes
# and skipping installing again all python packages
make: Nothing to be done for 'install'.
The Python package looks like:
.
├── Makefile
├── README.rst
├── lambda_handler.py
└── requirements.txt
I am using a Makefile for some automation in Python:
/opt/virtual_env:
	# create the virtual env if the folder does not exist
	python -m venv /opt/virtual_env

virtual: /opt/virtual_env

# if requirements.txt is modified, then execute pip install
_requirements.txt.pyc: requirements.txt
	/opt/virtual_env/bin/pip install --upgrade -r requirements.txt
	echo > _requirements.txt.pyc

requirements: SOME MAGIC OR SOME make flags
	pip install -r requirements.txt

install-pip-requirements: _requirements.txt.pyc

install: virtual requirements
I am sure there must be a better way to do this ;)
Not sure this will answer your question at this point, but the better way is to use a fully fledged Python pip project template.
We use cookiecutter to create a particular pip package with this cookiecutter template.
It has a Makefile which does not constantly re-install all the dependencies, and it makes use of Python tox, which allows running a project's tests in different Python envs automatically. You can still develop in a dev virtualenv, but we update it only when a new package is added; everything else is handled by tox.
But what you show so far is an attempt to write a Python build from scratch, which has been done in numerous project templates. If you really want to understand what is going on there, you can analyze these templates.
As a follow-up: because you expect this to work with a Makefile, I'd suggest removing the --upgrade flag from the pip command. I suspect your requirements do not pin the versions needed for the project to work. In our experience, not putting versions there can badly break things. Thus our requirements.txt looks like:
configure==0.5
falcon==0.3.0
futures==3.0.5
gevent==1.1.1
greenlet==0.4.9
gunicorn==19.4.5
hiredis==0.2.0
python-mimeparse==1.5.2
PyYAML==3.11
redis==2.10.5
six==1.10.0
eventlet==0.18.4
Using the requirements without --upgrade makes pip simply verify what is in the virtualenv and what is not. Everything that satisfies the required version will be skipped (no download). You can also reference git versions in requirements like this:
-e git+http://some-url-here/path-to/repository.git@branch-name-OR-commit-id#egg=package-name-how-to-appear-in-pip-freeze
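For instance (repository URL, tag, and package name are made up):
-e git+https://github.com/example/mylib.git@v1.2.0#egg=mylib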
@Andrei.Danciuc, make just needs two files to compare; you can use any of the output files from running pip install.
For example, I usually use a "vendored" folder, so I can alias the path to the "vendored" folder instead of using a dummy file.
# Only run install if requirements.txt is newer than the vendored folder
vendored-folder := vendored

.PHONY: install
install: $(vendored-folder)

$(vendored-folder): requirements.txt
	rm -rf $(vendored-folder)
	pip install -r requirements.txt -t $(vendored-folder)
If you don't use a vendored folder, this code below should work for both virtualenv and global setups.
# Only run install if requirements.txt is newer than the SITE_PACKAGES location
.PHONY: install
SITE_PACKAGES := $(shell pip show pip | grep '^Location' | cut -f2 -d':')

install: $(SITE_PACKAGES)

$(SITE_PACKAGES): requirements.txt
	pip install -r requirements.txt

Pip install from git repository without local files

I am trying to install a Python module from a git repository, but it seems the only way to do this successfully is to add -e to the command, which leaves local files behind. When I remove the -e, it says it installs, but I am left without any files and the import fails. Here is my command:
pip install -e git+http://gitlab.valk.net/operations/cilib.git#egg=cilib
A further note: when the -e is left off, there is only an egg-info directory in site-packages but no files. Does anyone know how to do this?
Thanks

How to install portia, a python application from Github (Mac)

I am attempting to install Portia, a python app from Github: https://github.com/scrapinghub/portia
I use the following steps at the command line:
set up new virtualenv 'portia' in Mac terminal
git clone https://github.com/scrapinghub/portia.git
follow readme instructions:
cd slyd
pip install -r requirements.txt
run Portia
cd slyd
twistd -n slyd
But every time I attempt the last step to run the program, I get the following error:
ImportError: No module named scrapy
Any idea why this error is occurring? All previous steps seem to install correctly. Is it an error earlier in my install process?
Thanks!
I don't have the rep to upvote Alagappan's answer, but he's correct. Also, if you're as inexperienced as I am, you may need further clarity on this.
You have to create, activate, and navigate into the virtualenv before installing anything (including cloning Portia from GitHub). Here's the whole thing working from start to finish:
1: cd to wherever you'd like to store your project, and install virtualenv:
$ pip install virtualenv
2: Create the virtual environment. (I called mine “portia” but this can be anything.):
$ virtualenv portia
3: Activate the virtual environment you created (change the path to reflect the name you used here if not “portia”.):
$ source portia/bin/activate
At this point your terminal should display the virtualenv name in parentheses before the standard directory path prompt:
 (name-of-virtualenv) [your-machine]:[current-directory]: [user]$
...and if you list the files in your pwd you'll see the name of your virtualenv there.
4: cd into your virtualenv (“portia” for me):
$ cd portia
5: Now you can clone portia from github into your virtualenv...
$ git clone https://github.com/scrapinghub/portia
6: cd into the cloned portia/slyd...
$ cd portia/slyd
7/8: pip install twisted and Scrapy...
$ pip install twisted
$ pip install Scrapy
Your virtualenv should still be activated and you should still be in [virtualenv-name]/portia/slyd.
9: Install the requirements.txt:
$ pip install -r requirements.txt
10: Run slyd:
$ twistd -n slyd
--- No more scrapy error! ---
Another Installation Method For Portia: Using Vagrant
Here is the method that let me install Portia with ease. It works on Mac, Windows, and Linux. With a few commands and clicks, you'll get a fully functional web scraper.
Things Needed:
VirtualBox
Vagrant
Clone the repo for Portia or download the zip file.
Additional Steps To Take:
Install VirtualBox.
Install Vagrant.
Open your terminal and navigate to where you cloned the Portia repo or where you extracted it (in the case of a zip file).
Then run vagrant up (summarized in the short sketch after these steps). This will download and set up a VirtualBox guest VM for you, install all the necessary requirements for Portia, and install Portia from start to finish.
After the above process, you may now open your browser and navigate to
http://the-virtualbox-ip:8000/static/main.html
And you're all set up.
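In short, the whole flow after installing VirtualBox and Vagrant is:
$ cd portia
$ vagrant up
then browse to the URL above.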
It's quite simple: you just need to install the Python module scrapy, in the same way that the Twitter API requires setuptools:
pip install scrapy
I suppose the issue you are facing is because of the virtualenv. Once you set up a new virtual environment, you need to run the activate script in order to start using it. In your case you'll have to run the following command:
$ source portia/bin/activate
On successful activation, your prompt will look like:
(portia) $
Can you check if you activated your virtual environment before you installed the packages using pip? I believe doing so will fix your issue.
