I'm having issues importing certain modules into Google Colab after cloning them from a GitHub repo.
Does anyone have any idea what the problem is and how to solve it?
After connecting my Google Drive with
from google.colab import drive
drive.mount('/content/drive')
and cloning the GitHub repo
! git clone https://github.com/naver/dope.git
it appears in my Colab file structure.
Colab Data Structure
However, when running the actual code, I cannot import the GitHub modules.
import sys, os
import argparse
import os.path as osp
from PIL import Image
import cv2
import numpy as np
import matplotlib.pyplot as plt
import torch
from torchvision.transforms import ToTensor
#_thisdir = osp.realpath(osp.dirname(__file__))
from dope.model import dope_resnet50, num_joints
import dope.postprocess as postprocess
import dope.visu as visu
def dope_test(imagename, modelname, postprocessing='ppi'):
    if postprocessing == 'ppi':
        sys.path.append('/content/lcrnet-v2-improved-ppi')  # _thisdir+'/lcrnet-v2-improved-ppi/'
        try:
            from lcr_net_ppi_improved import LCRNet_PPI_improved
        except ModuleNotFoundError:
            raise Exception('To use the pose proposals integration (ppi) as postprocessing, please follow the readme instruction by cloning our modified version of LCRNet_v2.0 here. Alternatively, you can use --postprocess nms without any installation, with a slight decrease of performance.')
It says
Import "dope.model" could not be resolved (reportMissingImports)
Simply git cloning a library is not enough for Python to recognize it; you need to add its location to the module search path (sys.path, or the PYTHONPATH environment variable), which tells the interpreter where to search for modules.
Let's say you have cloned the dope module under the /content directory (as the attached picture suggests).
In this case, add /content to sys.path before importing anything dope-related.
import sys
sys.path.append('/content')
Of course, this also makes every other directory that resides in /content importable, such as lcrnet-v2-improved-ppi and models. To prevent this from happening, create a directory that specifically stores Python modules, and move dope inside it.
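As a sketch of that layout (using the current working directory in place of /content so it runs anywhere, and 'pymodules' as a hypothetical directory name):

```python
import os
import shutil
import sys

# keep cloned repos in one dedicated directory so appending it to
# sys.path does not expose unrelated sibling folders
modules_dir = os.path.join(os.getcwd(), 'pymodules')
os.makedirs(modules_dir, exist_ok=True)

# move the cloned repo inside, if it exists at this location
src = os.path.join(os.getcwd(), 'dope')
if os.path.isdir(src):
    shutil.move(src, modules_dir)

if modules_dir not in sys.path:
    sys.path.append(modules_dir)
```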
As a side note: there is a %env magic command in Colab (and the underlying Jupyter Notebook and IPython) that lets users modify environment variables. But there is a famous trap: editing PYTHONPATH with %env has no effect on the Python interpreter that is already running, so you will still get an ImportError!
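A minimal illustration of the trap: changing the PYTHONPATH environment variable inside a running interpreter does not change where imports are searched, while appending to sys.path does.

```python
import os
import sys

# PYTHONPATH is only read at interpreter startup; setting it now has
# no effect on module lookup in this already-running interpreter
os.environ['PYTHONPATH'] = '/content'

# sys.path is the list the running interpreter actually consults
if '/content' not in sys.path:
    sys.path.append('/content')

print('/content' in sys.path)  # True
```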
Related
I am wondering why Python does not find modules in the same folder when code is run in a cell. I use Spyder and when I run a cell
#%% import cell
from gym import Env
from gym.spaces import Discrete, Box, Tuple
import numpy as np
import random
import pandas as pd
import Run_Simulations
import SetUpScenarios
I get error messages saying that the modules Run_Simulations and SetUpScenarios are not found, although they are in the same folder. Strangely, when I run the whole file instead of the single cell (there are no other import statements or anything similar), the modules are found and imported correctly.
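A likely explanation, as a hedged sketch: when only a cell is executed, the interpreter's working directory may not be the script's folder, so sibling modules are not on the search path. Explicitly putting the script's folder on sys.path makes cell runs behave like full runs (Run_Simulations and SetUpScenarios are the module names from the question):

```python
import os
import sys

# __file__ is defined when the whole script runs; a bare cell may not
# define it, so fall back to the current working directory
folder = (os.path.dirname(os.path.abspath(__file__))
          if '__file__' in globals() else os.getcwd())
if folder not in sys.path:
    sys.path.insert(0, folder)

# now sibling modules next to the script are importable, e.g.
# import Run_Simulations
```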
I recently cloned a github repository (https://github.com/lpbsscientist/YeaZ-GUI.git) to begin understanding the code and making edits to it. I have set up a conda virtual environment with all the dependencies required and have been running GUI_main.py successfully via the command line.
In an attempt to gain a better understanding of what the script does, I opened it in Atom and began making edits (not saving them). I inserted several print statements and additional comments. With the path to the virtual environment saved as profile on the Atom script package, I was able to successfully run GUI_main.py from Atom.
So far, so good.
I then saved the current version of GUI_main.py. I tried to run it again, but this time, I got the following error:
File "GUI_main.py", line 55, in <module>
    import neural_network as nn
ModuleNotFoundError: No module named 'neural_network'
I got the same error message running the new saved version of GUI_main.py from the command line, atom, and the python 3.6.8 IDE.
I have made no edits to the file structure of the GitHub repo. GUI_main.py is located in the directory YeaZ-GUI. This script imports several modules from the subdirectory YeaZ-GUI/unet, and it appends their paths correctly (as far as I know - see the initialization commands below). GUI_main.py imports numerous modules in the following import statements:
#!/usr/bin/env python3
import sys
import numpy as np
import pandas as pd
import h5py
import skimage
# For writing excel files
#from openpyxl import load_workbook
#from openpyxl import Workbook
# Import everything for the Graphical User Interface from the PyQt5 library.
from PyQt5.QtWidgets import (QApplication, QMainWindow, QDialog, QSpinBox,
QMessageBox, QPushButton, QCheckBox, QAction, QStatusBar, QLabel)
from PyQt5 import QtGui
#Import from matplotlib to use it to display the pictures and masks.
from matplotlib.backends.qt_compat import QtWidgets
from matplotlib.backends.backend_qt5agg import NavigationToolbar2QT as NavigationToolbar
from sklearn.decomposition import PCA
import imageio
from PIL import Image, ImageDraw
#append all the paths where the modules are stored. Such that this script
#looks into all of these folders when importing modules.
sys.path.append("./unet")
sys.path.append("./disk")
sys.path.append("./icons")
sys.path.append("./init")
sys.path.append("./misc")
from PlotCanvas import PlotCanvas
import Extract as extr
from image_loader import load_image
from segment import segment
import neural_network as nn #this is the statement causing the error
Curious as to what was going on, I deleted the current version of GUI_main.py and redownloaded the GUI_main.py script from the Github repo mentioned above and placed it into the same location it was before.
This time, I made the same edits via the python IDE, and I was then able to successfully run the script from the command line with no errors.
However, after opening this edited version of the file in Atom and saving it (making no edits), I got the exact same error message as before, both when running from Atom and from the command line.
Interestingly, repeating the same procedure on the neural_network.py module, as well as other modules (i.e. editing it, saving it in Atom, and then running it), gives no error message.
It seems like the problem results from the act of saving GUI_main.py in Atom? Could this be possible? Please offer any insight.
So, I did some more experimenting. It turns out that I had installed an auto-formatting package (autopep8) through Atom that was changing the order of my import statements upon saving.
The package's auto-formatting moved import neural_network as nn to a position before the sys.path.append statements, so the subdirectory unet, where neural_network.py is located, was not being searched for importable modules.
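The failure mode can be reproduced without Atom: an import of a module that lives off the default path only succeeds if the path append executes first. A self-contained sketch (late_module is a throwaway stand-in for neural_network):

```python
import os
import sys
import tempfile

# create a throwaway directory containing a module, standing in for ./unet
tmp_dir = tempfile.mkdtemp()
with open(os.path.join(tmp_dir, 'late_module.py'), 'w') as f:
    f.write('VALUE = 42\n')

# this append must run before the import below; a formatter that hoists
# the import to the top of the file reintroduces ModuleNotFoundError
sys.path.append(tmp_dir)
import late_module  # noqa: E402  (the late import is intentional)

print(late_module.VALUE)  # 42
```

Tools like isort and autopep8 also honor per-line suppression comments, which is one way to keep a deliberately late import where it is.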
I am working on a python program in colab.
I need to import another file here. The file is saved under the name "base_positioner.ipynb" in Google Drive.
I have gone through multiple resources to see how to do this import and I have done the following:
from google.colab import drive
drive.mount('/content/gdrive')
%cd /content/gdrive/My Drive
On running !ls, I see 'base_positioner.ipynb' in the list, but
running import base_positioner still throws a ModuleNotFoundError.
I had also tried the following but with no success in importing the desired file:
sys.path.append('/content/gdrive/My Drive/Colab Notebooks')
What else should I try??
This could happen if your Drive hasn't been mounted properly on the Colab backend, or if the file layout in your Drive differs from the layout your code expects. Are you running the import command without first running the following code?
from google.colab import drive
drive.mount('/content/gdrive')
%cd /content/gdrive/My Drive
If so, it won't work: those cells are a prerequisite for the mount to take place, so they must be run in order before the import. You can also try restarting Google Colab, which often fixes strange errors.
Update:
As you mentioned, the import error likely happens because of how the main file is configured: a file must be in the .py format to be imported plainly as import base_positioner.
To import .ipynb extension file you will need to follow the following process:
If you want to import A.ipynb in B.ipynb write
import import_ipynb
import A
The import_ipynb module can be installed via pip or any other relevant ways.
pip install import_ipynb
I'm having a challenging time getting the Python azure-cosmos library to correctly load for the purposes of locally testing a function in VS Code.
The specific error I'm getting (with the file path shortened) is: Exception: ImportError: cannot import name 'exceptions' from 'azure.cosmos' ([shortened]/.venv/lib/python3.8/site-packages/azure/cosmos/__init__.py)
Things I've checked/tried so far:
Check that requirements.txt specifies azure-cosmos
Manually go into python for each of the interpreters available within VS code and ensure I can manually import azure.cosmos
As instructed here, attempted to reinstall the azure-cosmos library using pip3, ensuring the --pre flag is used.
[Updated] Verified I can successfully import azure.cosmos.cosmos_client as cosmos_client without any errors
Any ideas? Thanks! Below is the relevant section of my code.
import datetime
import logging
import tempfile
import requests
import os
import zipfile
import pandas as pd
import azure.functions as func
from azure.cosmos import exceptions, CosmosClient, PartitionKey
def main(mytimer: func.TimerRequest, calendars: func.Out[func.Document]) -> None:
logging.info("Timer function has initiated.")
This is what you face now:
This is the official doc:
https://github.com/Azure-Samples/azure-cosmos-db-python-getting-started
This doc tells you how to solve this problem.
So the solution is to install the pre-release version (George Chen's solution is right).
Not installing the pre-release version is the root cause, but note that you need to delete the existing package first; otherwise the pre-release will not be installed. (Running the install with --pre alone will not solve the problem: delete all of the related packages first, and then install the pre-release package.)
Whether you need azure.cosmos at all depends on whether the function binding meets your needs; if the binding can do what you want, you presumably don't need azure.cosmos.
About this import error: I could reproduce the exception, and the GitHub solution I checked says you have to add a --pre flag.
So my solution is to go to tasks.json under .vscode and add the flag to the command, like below.
If you want more details about the Cosmos DB binding, you could refer to this doc: Azure Cosmos DB trigger and bindings
I am getting "from datasets import dataset_utils ImportError: No module named datasets."
This happens when I write the following in a Python script:
import tensorflow as tf
from datasets import dataset_utils
slim = tf.contrib.slim
But I am getting this error:
from datasets import dataset_utils
ImportError: No module named datasets
I found this solution
How can jupyter access a new tensorflow module installed in the right path?
I did the same, and I have the datasets package at the path anaconda/lib/python2.7/site-packages/. Still, I am getting the same error.
pip install datasets
I solved it this way.
You can find the folder address on your device and append it to system path.
import sys
sys.path.append(r"D:\Python35\models\slim\datasets"); import dataset_utils
You'll need to do the same with 'nets' and 'preprocessing'
sys.path.append(r"D:\Python35\models\slim\nets"); import vgg
sys.path.append(r"D:\Python35\models\slim\preprocessing"); import vgg_preprocessing
The datasets package is present in https://github.com/tensorflow/models/tree/master/slim/datasets
Since 'models' is not installable from pip (at the time of writing), it is not available on Python's load path by default. So either copy it or manually add it to the path.
Here is how I setup env before running the code:
# git clone or wget
wget https://github.com/tensorflow/models/archive/master.zip -O models.zip
unzip models.zip
# add it to Python PATH
export PYTHONPATH=$PYTHONPATH:$PWD/models-master/slim
# now we are good to call `python mytensorflow.py`
It's using the datasets package in the TF-slim image models library, which is in:
git clone https://github.com/tensorflow/models/
Having done that, though, in order to import the module as shown in the example on the slim image page, empty __init__.py files have to be added to the models and models/slim directories.
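Creating those package markers might look like this (paths assume the clone above landed in ./models; the mkdir call is a no-op when the directories already exist):

```python
import pathlib

# empty __init__.py files mark 'models' and 'models/slim' as packages
for d in ('models', 'models/slim'):
    p = pathlib.Path(d)
    p.mkdir(parents=True, exist_ok=True)  # no-op after a real clone
    (p / '__init__.py').touch()
```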
Go to https://github.com/nschaetti/EchoTorch/releases and download the latest release.
Install the latest release from the downloaded file (202006291 is the latest version at the moment):
pip install ./EchoTorch-202006291.zip
Test it out using narma10_esn.py (other examples may have some issues).
You may still need to install some more Python packages not listed in the requirements file, but it works once you do this.