I need some help connecting my Python code, which runs in a virtual environment on an AWS EC2 instance, to S3 on AWS.
I already connected the instance via an IAM role, and that works. The code also runs fine in my PyCharm environment, but when I run it on the EC2 instance I get the error: No module named boto3. I did install the module via requirements.txt, and I run the code from a shell script.
awscli==1.18.222
fsspec==0.8.5
s3fs==0.5.2
boto3==1.16.51
boto3-stubs==1.16.59.0
botocore==1.17.44
s3ts==0.1.0
I think that's more than necessary.
#!/bin/sh
cd ~/code/namexy
git pull
pip3 install virtualenv
virtualenv -p python3 venv
(
source venv/bin/activate
pip3 install -r requirements.txt
python main.py
)
git add *
git commit -m "AWS ec2: data_main"
git push origin main
OK, the problem may have been that I installed a package which removed the boto3 package (botocore). Now my script looks like this and runs:
#!/bin/sh
cd ~/code/namexy
git pull
rm -rf venv
mkdir venv
pip3 install --user virtualenv
virtualenv -p /usr/bin/python3 venv/python3
source venv/python3/bin/activate
pip3 install -r requirements.txt
pip3 freeze
python3 main.py
deactivate
git add *
git commit -m "AWS ec2: main"
git push origin main
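If the error ever shows up again, a quick sanity check inside the activated venv (assuming the layout from the script above) is to confirm that the interpreter and boto3 live in the same environment:
which python3
python3 -c "import boto3, botocore; print(boto3.__version__, botocore.__version__)"
pip3 show boto3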
I have an app that I would like to deploy to AWS Lambda and for this reason it has to have Python 3.9.
I have the following in the pyproject.toml:
name = "app"
readme = "README.md"
requires-python = "<=3.9"
version = "0.5.4"
If I try to pip install all the dependencies I get the following error:
ERROR: Package 'app' requires a different Python: 3.11.1 not in '<=3.9'
Is there a way to specify the Python version for this module?
I see there is a lot of confusion about this. I simply want to specify 3.9 "globally" for my build, so that when I build the layer for the Lambda with the following command, it works:
pip install . -t python/
Right now it has only Python 3.11 packaged. For example:
❯ ls -larth python/ | grep sip
siphash24.cpython-311-darwin.so
When I try to use the layer created this way it fails to load the required library.
There are multiple ways of solving this.
Option 1 (using pip's built-in facilities to restrict the Python version)
pip install . \
--python-version "3.9" \
--platform "manylinux2010" \
--only-binary=:all: -t python/
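Once that finishes, the Lambda layer archive is simply the python/ directory zipped from its parent folder (a top-level python/ directory is the layout Lambda expects for Python layers; layer.zip is just an example name):
zip -r layer.zip python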
Option 2 (using Docker)
FROM python:3.9.16-bullseye
RUN useradd -m -u 5000 app || :
RUN mkdir -p /opt/app
RUN chown app /opt/app
USER app
WORKDIR /opt/app
RUN python -m venv venv
ENV PATH="/opt/app/venv/bin:$PATH"
RUN pip install pip --upgrade
RUN mkdir app
RUN touch app/__init__.py
COPY pyproject.toml README.md ./
RUN pip install . -t python/
This way there is no chance of creating a layer for AWS Lambda that targets a Python version newer than 3.9.
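To get that python/ directory out of the image and onto the host, one approach (the image and container names here are only examples) is to build the image and copy the directory from a temporary container, then zip it as above:
docker build -t app-py39-layer .
docker create --name app-py39-tmp app-py39-layer
docker cp app-py39-tmp:/opt/app/python ./python
docker rm app-py39-tmp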
I'm learning Python and have Anaconda installed, and I'm trying to familiarize myself with the process of getting an Eye-Color Detection project working.
I'm running into the following error after going through readme:
Eye-Color-Detection git:(master) ✗ sudo pip install -r requirements.txt
WARNING: The directory '/Users/{user}/Library/Caches/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
When trying to update:
(tweet) ➜ Eye-Color-Detection git:(master) ✗ conda update --all
[Errno 13] Permission denied: '/Users/{user}/opt/anaconda3/envs/tweet/lib/python3.8/site-packages/wrapt/__init__.py' -> '/Users/{user}/opt/anaconda3/envs/tweet/lib/python3.8/site-packages/wrapt/__init__.py.c~'
Q: How might I go about doing this correctly within the same Conda environment?
Most of the time, sudo pip install is almost never what you really want. In some cases it may appear to work and solve your immediate problem, but more often than not you've just broken your system Python without knowing it.
In the context of that repo, I'd ignore the repo's README and do this.
$ git clone https://github.com/ghimiredhikura/Eye-Color-Detection
$ cd Eye-Color-Detection
Create a conda environment; change yourenvname as you like.
$ conda create -n yourenvname python=3.x
$ conda activate yourenvname
Install the dependencies and run the code
$ pip install -r requirements.txt
$ python3 eye-color.py --input_path=sample/2.jpg --input_type=image
Fixing your conda environment may be difficult to debug, depending on what else you've sudo'd while attempting to resolve the issue. If you happen to be familiar with "regular" virtualenvs created using Python's built-in virtual environment tooling, you could also try this to get you going.
$ python3 -m venv .venv --copies
$ source .venv/bin/activate
$ pip install -r requirements.txt
$ python3 eye-color.py --input_path=sample/2.jpg --input_type=image
What you need to do is change the directory permission so it is writable.
You can do it using this command:
$ sudo chmod 7777 /Users/{user}/Library/Caches/
To change permissions recursively:
$ sudo chmod -R 7777 /Users/{user}/Library/Caches/
Or you can take ownership of that directory with this command:
$ sudo chown OWNER:GROUP /Users/{user}/Library/Caches/
where OWNER is the username on your computer, which you can find in the terminal with this command:
$ whoami
GROUP is optional.
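For the specific pip warning above, a narrower fix than chmod 7777 is to give your user back ownership of just the cache directory that the warning names:
$ sudo chown -R "$(whoami)" ~/Library/Caches/pip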
I'm deploying an app to parse PDFs and return their highlighted content. After submitting my build and deploying it on Cloud Run, I ran into this error:
ModuleNotFoundError: No module named 'popplerqt5'
I previously ran into this error when running the app in a python3 virtualenv on my local machine. However, I resolved it by running
/usr/bin/python3 main.py
instead of
python3 main.py
Currently I am running the app from my Dockerfile and am hence unable to pull off the same method. This is my Dockerfile configuration:
FROM gcr.io/google-appengine/python
# Create a virtualenv for dependencies. This isolates these packages from
# system-level packages.
# Use -p python3 or -p python3.7 to select python version. Default is version 2.
RUN apt-get update
RUN apt-get install poppler-utils -y
RUN virtualenv -p python3 /env
# Setting these environment variables is the same as running
# source /env/bin/activate.
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
# Copy the application's requirements.txt and run pip to install all
# dependencies into the virtualenv.
RUN apt-get install -y python3-poppler-qt5
ADD requirements.txt /app/requirements.txt
RUN pip install Flask gunicorn
RUN pip install -r /app/requirements.txt
# Add the application source code.
ADD . /app
# Run a WSGI server to serve the application. gunicorn must be declared as
# a dependency in requirements.txt.
CMD gunicorn -b :$PORT main:app
How do I get around this error?
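If the missing popplerqt5 is the one installed by apt-get (python3-poppler-qt5), it ends up in the system site-packages, which the isolated virtualenv created in the Dockerfile cannot see by default. A hedged sketch of the relevant change, using virtualenv's --system-site-packages flag, would be:
RUN virtualenv -p python3 --system-site-packages /env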
I'm fairly new to SageMaker and Docker. I am trying to train my own custom object detection algorithm in SageMaker using an ECS container. I'm using this repo's files:
https://github.com/svpino/tensorflow-object-detection-sagemaker
I've followed the instructions exactly, and I'm able to run the image in a container perfectly fine on my local machine. But when I push the image to ECS to run in SageMaker, I get the following message in CloudWatch:
I understand that for some reason, when deployed to ECS, the image suddenly can't find python. At the top of my training script is the line #!/usr/bin/env python. I've tried running the which python command and changed the shebang to point to #!/usr/local/bin python, but I just get additional errors. I don't understand why this image works on my local machine (tested with both Docker on Windows and Docker CE for WSL). Here's a snippet of the Dockerfile:
ARG ARCHITECTURE=1.15.0-gpu
FROM tensorflow/tensorflow:${ARCHITECTURE}-py3
RUN apt-get update && apt-get install -y --no-install-recommends \
wget zip unzip git ca-certificates curl nginx python
# We need to install Protocol Buffers (Protobuf). Protobuf is Google's language and platform-neutral,
# extensible mechanism for serializing structured data. To make sure you are using the most updated code,
# replace the linked release below with the latest version available on the Git repository.
RUN curl -OL https://github.com/protocolbuffers/protobuf/releases/download/v3.10.1/protoc-3.10.1-linux-x86_64.zip
RUN unzip protoc-3.10.1-linux-x86_64.zip -d protoc3
RUN mv protoc3/bin/* /usr/local/bin/
RUN mv protoc3/include/* /usr/local/include/
# Let's add the folder that we are going to be using to install all of our machine learning-related code
# to the PATH. This is the folder used by SageMaker to find and run our code.
ENV PATH="/opt/ml/code:${PATH}"
RUN mkdir -p /opt/ml/code
WORKDIR /opt/ml/code
RUN pip install --upgrade pip
RUN pip install cython
RUN pip install contextlib2
RUN pip install pillow
RUN pip install lxml
RUN pip install matplotlib
RUN pip install flask
RUN pip install gevent
RUN pip install gunicorn
RUN pip install pycocotools
# Let's now download Tensorflow from the official Git repository and install Tensorflow Slim from
# its folder.
RUN git clone https://github.com/tensorflow/models/ tensorflow-models
RUN pip install -e tensorflow-models/research/slim
# We can now install the Object Detection API, also part of the Tensorflow repository. We are going to change
# the working directory for a minute so we can do this easily.
WORKDIR /opt/ml/code/tensorflow-models/research
RUN protoc object_detection/protos/*.proto --python_out=.
RUN python setup.py build
RUN python setup.py install
# If you are interested in using COCO evaluation metrics, you can run the following commands to add the
# necessary resources to your Tensorflow installation.
RUN git clone https://github.com/cocodataset/cocoapi.git
WORKDIR /opt/ml/code/tensorflow-models/research/cocoapi/PythonAPI
RUN make
RUN cp -r pycocotools /opt/ml/code/tensorflow-models/research/
# Let's put the working directory back to where it needs to be, copy all of our code, and update the PYTHONPATH
# to include the newly installed Tensorflow libraries.
WORKDIR /opt/ml/code
COPY /code /opt/ml/code
ENV PYTHONPATH=${PYTHONPATH}:tensorflow-models/research:tensorflow-models/research/slim:tensorflow-models/research/object_detection
RUN chmod +x /opt/ml/code/train
CMD ["/bin/bash","-c","chmod +x /opt/ml/code/train && /opt/ml/code/train"]
I pushed my virtualenv along with my Django project to a GitHub repository. I have since found out that this isn't the best practice: the community suggests committing a requirements.txt file generated with pip freeze instead of the virtualenv itself. So I would like to delete the virtualenv and add such a file. How can I do this?
Use
$ git rm -r <your-virtualenv>
$ git status
$ git commit -m "commit-message"
$ git push
Now, to stop these files from showing up in git status, create a .gitignore file,
or simply run the command below from your project root:
$ echo "<your-virtualenv>" >> .gitignore
If your code is used on multiple devices, then push the .gitignore as well.
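For reference, a minimal .gitignore for this case (assuming the virtualenv directory is named venv; adjust the name to match yours) could look like:
venv/
.venv/
__pycache__/
*.pyc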
(env) $ git rm -r env
(env) $ pip freeze > requirements.txt
(env) $ git add requirements.txt
(env) $ git commit -m "Removed env dir. Added requirements.txt"
(env) $ git push origin master
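Anyone cloning the repository afterwards can rebuild an equivalent environment from the committed requirements.txt (a typical sequence, not part of the original answer):
$ python3 -m venv env
$ source env/bin/activate
$ pip install -r requirements.txt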