module not found after cloning git repository - python

I am trying to use this: https://github.com/google-research/google-research/tree/master/rouge
I've tried cloning the git repository in the Ubuntu for Windows terminal by doing:
git clone https://github.com/google-research/google-research.git
And the cloning seems to go fine.
But when I try to use this script:
python3 -m rouge.rouge \
--target_filepattern="./en_test_CN.txt" \
--prediction_filepattern="./en_pred_CN.txt" \
--output_filename="./en_scores.csv" \
--use_stemmer=true \
--split_summaries=true
It says that there is no module named rouge.rouge:
/usr/bin/python3: No module named rouge.rouge
I'm really new to this and I don't know why it's not working.

As Tim Roberts suggested, the README specifies running that command from the google-research root.
Additionally, you should install the package's dependencies.
I've managed to get it running with the following steps:
git clone https://github.com/google-research/google-research.git --depth=1
cd google-research
pip install -r rouge/requirements.txt
python -m rouge.rouge --help
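The reason running from the repository root matters: `python -m pkg.module` resolves `pkg` against `sys.path`, and `-m` puts the current working directory at the front of that path. A small sketch of this behavior (the package name `demo_pkg` is made up for illustration):

```python
# Sketch: `python -m pkg.mod` only works when `pkg/` is importable from
# the current working directory (or somewhere else on sys.path).
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as root:
    pkg = os.path.join(root, "demo_pkg")  # hypothetical package name
    os.mkdir(pkg)
    open(os.path.join(pkg, "__init__.py"), "w").close()
    with open(os.path.join(pkg, "cli.py"), "w") as f:
        f.write('print("hello from demo_pkg.cli")\n')

    # Run from the directory that *contains* the package: works.
    ok = subprocess.run([sys.executable, "-m", "demo_pkg.cli"],
                        cwd=root, capture_output=True, text=True)
    print(ok.stdout.strip())   # hello from demo_pkg.cli

    # Run from inside the package directory: module not found, just like
    # rouge.rouge when you are not in the google-research root.
    bad = subprocess.run([sys.executable, "-m", "demo_pkg.cli"],
                         cwd=pkg, capture_output=True, text=True)
    print(bad.returncode != 0)  # True
```

The same logic explains why cloning succeeded but the command failed: the clone was fine, the working directory was not.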

How to build a Python project for a specific version of Python?

I have an app that I would like to deploy to AWS Lambda, and for this reason it has to run on Python 3.9.
I have the following in the pyproject.toml:
name = "app"
readme = "README.md"
requires-python = "<=3.9"
version = "0.5.4"
If I try to pip install all the dependencies I get the following error:
ERROR: Package 'app' requires a different Python: 3.11.1 not in '<=3.9'
Is there a way to specify the Python version for this module?
I see there is a lot of confusion about this. I simply want to specify Python 3.9 "globally" for my build, so that when I build the layer for the Lambda with the following command, it runs:
pip install . -t python/
Right now it has only Python 3.11 packaged. For example:
❯ ls -larth python/ | grep sip
siphash24.cpython-311-darwin.so
When I try to use the layer created this way it fails to load the required library.
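The load failure is expected: compiled extension modules are tagged with the interpreter that built them, so a `cpython-311` shared object cannot be imported by Python 3.9. You can inspect the tag your own interpreter expects:

```python
# Each CPython build only loads extension modules matching its own ABI tag,
# which is embedded in the extension filename suffix.
import sys
import sysconfig

suffix = sysconfig.get_config_var("EXT_SUFFIX")
print(suffix)  # e.g. ".cpython-311-darwin.so" on CPython 3.11 / macOS

# On CPython the suffix embeds the running major/minor version:
tag = f"{sys.version_info.major}{sys.version_info.minor}"
print(tag in suffix)
```

So `siphash24.cpython-311-darwin.so` in the layer is a file Python 3.9 will refuse to import, which matches the symptom described.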
There are multiple ways of solving this.
Option 1 (using pip's built-in facilities to restrict the Python version)
pip install . \
--python-version "3.9" \
--platform "manylinux2010" \
--only-binary=:all: -t python/
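The original ERROR comes from pip comparing the running interpreter against the `requires-python` specifier; Option 1 works because pip then resolves packages for the requested version and platform instead of the host's. The comparison idea can be sketched like this (a simplified, hypothetical re-implementation, not pip's actual code):

```python
# Simplified sketch of how a `requires-python = "<=3.9"` gate behaves.
# NOT pip's real implementation, just the comparison idea.
def satisfies(version, spec):
    """version: tuple like (3, 11); spec: string like '<=3.9'."""
    op = spec[:2] if spec[1] in "=<" else spec[:1]
    bound = tuple(int(p) for p in spec[len(op):].split("."))
    ops = {
        "<=": lambda v, b: v <= b,
        ">=": lambda v, b: v >= b,
        "<":  lambda v, b: v < b,
        ">":  lambda v, b: v > b,
        "==": lambda v, b: v == b,
    }
    return ops[op](version, bound)

print(satisfies((3, 9), "<=3.9"))   # True
print(satisfies((3, 11), "<=3.9"))  # False -> pip refuses to install
```

Real PEP 440 specifiers are richer than this (pre-releases, `!=`, comma-separated clauses), but the tuple comparison shows why 3.11.1 is rejected by `<=3.9`.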
Option 2 (using Docker)
FROM python:3.9.16-bullseye
RUN useradd -m -u 5000 app || :
RUN mkdir -p /opt/app
RUN chown app /opt/app
USER app
WORKDIR /opt/app
RUN python -m venv venv
ENV PATH="/opt/app/venv/bin:$PATH"
RUN pip install pip --upgrade
RUN mkdir app
RUN touch app/__init__.py
COPY pyproject.toml README.md ./
RUN pip install . -t python/
This way there is no chance of creating a layer for AWS Lambda that is newer than Python 3.9.

How can I install the FeniCS dolfin module?

So I'm trying to install FEniCS from the instructions here. I did the
pip3 install fenics-ffc --upgrade
inside my virtualenv and it worked, but when I try to import dolfin I get a ModuleNotFoundError. I'm not sure how to get dolfin installed. I did
pip install pybind11
to get pybind11 installed then copied the code for dolfin installation into my cmd
FENICS_VERSION=$(python3 -c"import ffc; print(ffc.__version__)")
git clone --branch=$FENICS_VERSION https://bitbucket.org/fenics-project/dolfin
git clone --branch=$FENICS_VERSION https://bitbucket.org/fenics-project/mshr
mkdir dolfin/build && cd dolfin/build && cmake .. && make install && cd ../..
mkdir mshr/build && cd mshr/build && cmake .. && make install && cd ../..
cd dolfin/python && pip3 install . && cd ../..
cd mshr/python && pip3 install . && cd ../..
but it just spat out dozens of errors like:
FENICS_VERSION=$(python3 -c"import ffc; print(ffc.__version__)") 'FENICS_VERSION' is not recognized as an internal or external command, operable program or batch file.
git clone --branch=$FENICS_VERSION https://bitbucket.org/fenics-project/dolfin Cloning into 'dolfin'...
fatal: Remote branch $FENICS_VERSION not found in upstream origin
git clone --branch=$FENICS_VERSION https://bitbucket.org/fenics-project/mshr Cloning into 'mshr'...
fatal: Remote branch $FENICS_VERSION not found in upstream origin
There were lots more errors after too. Am I not supposed to paste the dolfin code into cmd? I don't know much about this stuff, so I'm unsure how to get the dolfin module. I've previously only used pip to get my packages, but that does not work for dolfin since it doesn't appear to be on PyPI.
Do you have CMake? It says in the docs you need it. Also, the docs say to do this to install pybind11, not pip install pybind11:
For building the optional Python interface of DOLFIN and mshr, pybind11 is needed since version 2018.1.0. To install it:
wget -nc --quiet https://github.com/pybind/pybind11/archive/v${PYBIND11_VERSION}.tar.gz
tar -xf v${PYBIND11_VERSION}.tar.gz && cd pybind11-${PYBIND11_VERSION}
mkdir build && cd build && cmake -DPYBIND11_TEST=off .. && make install
Also, what is your OS?
So here is how you can install FEniCS 2019.1 using conda (Miniconda):
Install Conda:
First go to https://docs.conda.io/projects/conda/en/latest/user-guide/install/linux.html
and follow the installation instructions.
Create a conda environment for fenics:
Open a terminal and type:
conda create -n fenics
To activate the created environment "fenics", type:
conda activate fenics
If you want the fenics environment to be activated automatically every time you open a new terminal, open your .bashrc file (it should be under /home/username/.bashrc) and add the line "source activate fenics" below the ">>> conda initialize >>>" block.
Install fenics:
Type all these commands:
conda install -c conda-forge h5py=*=*mpich*
conda install -c conda-forge fenics
pip install meshio
pip install matplotlib
pip install --upgrade gmsh
conda install -c conda-forge paraview
pip install scipy
The second command will take a while. I added a few nice-to-have programs like gmsh and ParaView, which will help you create meshes and view your solutions.
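After those commands finish, a quick way to confirm which of the listed packages actually resolved in the active environment is a small import check (this helper is illustrative, not part of FEniCS; the import names are assumptions):

```python
# Report which of the packages above are importable in the active env.
import importlib.util

def check(names):
    """Map each top-level module name to whether it can be imported."""
    return {n: importlib.util.find_spec(n) is not None for n in names}

status = check(["fenics", "meshio", "matplotlib", "gmsh", "scipy"])
for name, ok in status.items():
    print(f"{name}: {'ok' if ok else 'MISSING'}")
```

Running this inside the activated `fenics` environment should show everything as ok; a MISSING entry points at the command that needs re-running.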

colour methods not accessible after pip install of colour-science

Executed the following commands:
!pip install -q colour-science
!pip install -q matplotlib
# Uncomment the following lines for the latest develop branch content.
!pip uninstall -y colour-science
!if ! [ -d "colour" ]; then git clone https://github.com/colour-science/colour; fi
!if [ -d "colour" ]; then cd colour && git fetch && git checkout develop && git pull && cd ..; fi
import sys
sys.path.append('colour')
import colour works. However, when I call its methods, they can no longer be found. Examples of such errors:
ModuleNotFoundError: No module named 'colour.plotting'; 'colour' is not a package
ModuleNotFoundError: No module named 'colour.utilities'; 'colour' is not a package
This method of installing Colour is the one we use for our Google Colab environments and should not be used on personal machines. My guess is that you did a pip install colour instead of pip install colour-science.
You should instead create a virtual environment and install everything there. We use Poetry, but assuming you have only virtualenv:
virtualenv oklabtests
source oklabtests/bin/activate
pip3 install git+https://github.com/colour-science/colour#develop
Also worth noting that the main repo is available at https://github.com/colour-science/colour.
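The "'colour' is not a package" error is the classic symptom of name shadowing: a single-file module named colour.py earlier on sys.path wins over the real colour package. You can reproduce and diagnose this generically (the name `shadow` below is made up for the demonstration):

```python
# Diagnose name shadowing: a single-file module earlier on sys.path
# wins over a package of the same name later on the path.
import os
import sys
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    mod_dir = os.path.join(tmp, "first")   # holds shadow.py (plain module)
    pkg_dir = os.path.join(tmp, "second")  # holds shadow/ (real package)
    os.mkdir(mod_dir)
    os.makedirs(os.path.join(pkg_dir, "shadow"))
    open(os.path.join(mod_dir, "shadow.py"), "w").close()
    open(os.path.join(pkg_dir, "shadow", "__init__.py"), "w").close()

    sys.path[:0] = [mod_dir, pkg_dir]      # the module's dir comes first
    import shadow

    # Packages have __path__; plain modules do not. Checking __file__
    # shows exactly which file Python imported.
    print(hasattr(shadow, "__path__"))     # False: the .py file won
    print(shadow.__file__)                 # .../first/shadow.py
```

In the question's setup, printing `colour.__file__` right after the import would show whether Python picked up the cloned repository or a leftover colour.py from the other PyPI package.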
Try installing this and then check:
pip3 install git+https://github.com/crowsonkb/colour#JMh
If you still get the error, please let me know.

how to connect python code in venv with s3

I need some help connecting my Python code, which runs in a virtual environment (AWS EC2), with S3 on AWS.
I already connected the instance via IAM - that works. It is also possible to run the code in my PyCharm environment. But if I run the code on my EC2, the error is: No module named boto3! But I installed the module with requirements.txt. I run the code in a shell.
awscli==1.18.222
fsspec==0.8.5
s3fs==0.5.2
boto3==1.16.51
boto3-stubs==1.16.59.0
botocore==1.17.44
s3ts==0.1.0
I think that's more than necessary.
#!/bin/sh
cd ~/code/namexy
git pull
pip3 install virtualenv
virtualenv -p python3 venv
(
source venv/bin/activate
pip3 install -r requirements.txt
python main.py
)
git add *
git commit -m "AWS ec2: data_main"
git push origin main
OK, the problem may have been that I installed a package which removed the boto3 package (botocore). Now my script looks like this and runs!
#!/bin/sh
cd ~/code/namexy
git pull
rm -rf venv
mkdir venv
pip3 install --user virtualenv
virtualenv -p /usr/bin/python3 venv/python3
source venv/python3/bin/activate
pip3 install -r requirements.txt
pip3 freeze
python3 main.py
deactivate
git add *
git commit -m "AWS ec2: main"
git push origin main
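When "installed but not importable" errors like this appear, the usual cause is that pip and python resolved to different environments. A small diagnostic you could run before main.py makes that visible (illustrative only):

```python
# Show which interpreter is running and whether it lives inside a venv.
# If pip installed boto3 into a different environment than the one
# running this, the import will fail even though the install "worked".
import sys

print(sys.executable)  # path of the interpreter actually running

# In Python 3, sys.base_prefix differs from sys.prefix inside a venv.
in_venv = sys.prefix != getattr(sys, "base_prefix", sys.prefix)
print("inside a virtualenv:", in_venv)
```

If this prints the system interpreter while the script expected the venv one, the `source venv/.../activate` line did not take effect in the shell that ran `python3 main.py`.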

How to install sqlite3 for python3.7 in a separate directory on Linux without sudo commands?

I have the problem that when I run my code on a linux server I get:
ModuleNotFoundError: No module named '_sqlite3'
So after researching, I found out sqlite3 was supposed to have been installed when I installed Python; however, it wasn't.
I think the problem comes from the way I installed Python. Since I do not have sudo permissions, I installed python3.7 in a local directory using this guide.
All solutions to this sqlite3 problem that I can find require sudo commands.
Is there another way that I can install python3.7 together with sqlite3 in my local Linux directory without using any sudo commands?
I hope I have stated my question clearly and I would appreciate all the help I can get. Thank you!
When installing a Python package on a Linux system without sudo privileges, you can use:
For Python 3
pip3 install --user pysqlite3
You can install any third party packages with the same method
pip3 install --user PACKAGE_NAME
The --user flag tells pip to install packages into a per-user directory inside your home directory. For more information click here.
Hope it helps!
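The directory that --user targets can be inspected from Python itself, which helps when checking whether a user-site install is actually visible to your interpreter:

```python
# Show where `pip install --user` puts packages for this interpreter,
# and whether this interpreter honours the user site directory at all.
import site

print(site.getusersitepackages())  # e.g. ~/.local/lib/python3.X/site-packages
print(site.ENABLE_USER_SITE)       # False/None inside most virtualenvs
```

Note that inside a virtual environment the user site is usually disabled, so --user installs will not be importable there.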
The solution is to first build sqlite3 into a user directory and then build Python using that directory's libraries and include headers. In particular, @Ski has answered a similar question regarding Python 2, which can be adapted to Python 3:
$ mkdir -p ~/applications/src
$ cd ~/applications/src
$ # Download and build sqlite 3 (you might want to get a newer version)
$ wget http://www.sqlite.org/sqlite-autoconf-3070900.tar.gz
$ tar xvvf sqlite-autoconf-3070900.tar.gz
$ cd sqlite-autoconf-3070900
$ ./configure --prefix=~/applications
$ make
$ make install
$ # Now download and build python 2, same works for python 3
$ cd ~/applications/src
$ wget http://www.python.org/ftp/python/2.5.2/Python-2.5.2.tgz
$ tar xvvf Python-2.5.2.tgz
$ cd Python-2.5.2
$ ./configure --prefix=~/applications
$ make
$ make install
$ ~/applications/bin/python
Alternatively, if you already have to specify a different --prefix for some reason (this has happened to me with pyenv), use LDFLAGS and CPPFLAGS when configuring python build:
$ ./configure LDFLAGS=-L/home/user/applications/lib/ CPPFLAGS=-I/home/user/applications/include/
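After rebuilding Python this way, it is worth verifying that the _sqlite3 extension actually loads and works end to end:

```python
# Verify the sqlite3 module works with a throwaway in-memory database.
import sqlite3

print(sqlite3.sqlite_version)  # version of the linked SQLite library

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (41), (1)")
total = conn.execute("SELECT SUM(x) FROM t").fetchone()[0]
print(total)  # 42
conn.close()
```

If the original ModuleNotFoundError was fixed by the rebuild, the import succeeds and sqlite3.sqlite_version reports the library picked up from ~/applications.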
