I am trying to run the following code, and I get an AttributeError: 'module' object has no attribute 'hcluster', raised in the last line.
I am running on Mountain Lion, I use pip and Homebrew, and hcluster is in PYTHONPATH=/usr/local/lib/python2.7/site-packages.
Any idea what could be going wrong? Thanks.
import os
import hcluster
from numpy import *
from PIL import Image
# create a list of images
path = 'data/flickr-sunsets-small'
imlist = [os.path.join(path,f) for f in os.listdir(path) if f.endswith('.jpg')]
# extract feature vector (8 bins per color channel)
features = zeros([len(imlist), 512])
for i,f in enumerate(imlist):
    im = array(Image.open(f))
    # multi-dimensional histogram
    h,edges = histogramdd(im.reshape(-1,3),8,normed=True,range=[(0,255),(0,255),(0,255)])
    features[i] = h.flatten()
tree = hcluster.hcluster(features)
This error means that Python cannot find the function/class hcluster in the
module hcluster, so when you do tree = hcluster.hcluster(features) it complains.
I'm not familiar with this module, but I had a quick look at it, and it lists a function called fcluster, but no hcluster.
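If the goal is hierarchical clustering of the feature matrix, a sketch along these lines using scipy.cluster.hierarchy (which provides linkage and fcluster) may be what was intended; the linkage method and the number of clusters below are just placeholder choices:
from scipy.cluster.hierarchy import linkage, fcluster

# build a hierarchical clustering tree from the (n_images x 512) feature matrix
Z = linkage(features, method='average')
# cut the tree into flat clusters; 5 clusters is an arbitrary placeholder
labels = fcluster(Z, t=5, criterion='maxclust')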
import os
import cv2
import pickle
from sklearn.cluster import KMeans
import numpy as np
train_path = './Train/'
class_list = os.listdir(train_path)
for i in range(len(class_list)):
    image_list = os.listdir(os.path.join(train_path, class_list[i]))
    for j in range(len(image_list)):
        image = cv2.imread(os.path.join(train_path, class_list[i], image_list[j]))
        sift = cv2.SIFT_create()
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        (kp, descs) = sift.detectAndCompute(gray, None)
        descs_samples = descs[np.random.randint(descs.shape[0], size=20)]
I am trying to get SIFT features for 43 different classes of images, but when I reach the line descs_samples = descs[np.random.randint(descs.shape[0], size=20)] I get this error: AttributeError: 'NoneType' object has no attribute 'shape'.
My friend was able to run this code correctly, but for some reason I can't.
I tried changing my file locations and printing the images to make sure they were actually being read. I was able to see my images with print(image).
My friend wasn't using the same dataset, so I thought there might be something wrong with mine. After a bit of debugging I found that I wasn't able to get descriptors from some of the training images, so I isolated them. I lost around 10% of the dataset, but now it works great. I am still trying to figure out why I couldn't get descriptors from that 10%.
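For reference, a minimal guard along these lines (just a sketch of the workaround described above) avoids indexing into a None result when SIFT finds no keypoints in an image:
(kp, descs) = sift.detectAndCompute(gray, None)
if descs is None:
    # SIFT found no keypoints in this image, so there are no descriptors to sample
    continue
descs_samples = descs[np.random.randint(descs.shape[0], size=20)]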
This is the code I'm trying to run in a virtual environment with Python 3.6. I use the newest Ubuntu version, 17.10, and I run the code as python3 gather_annotations.py
import numpy as np
import cv2
import argparse
from imutils.paths import list_images
from selectors import BoxSelector
#parse arguments
ap = argparse.ArgumentParser()
ap.add_argument("-d","--dataset",required=True,help="path to images dataset...")
ap.add_argument("-a","--annotations",required=True,help="path to save annotations...")
ap.add_argument("-i","--images",required=True,help="path to save images")
args = vars(ap.parse_args())
#annotations and image paths
annotations = []
imPaths = []
#loop through each image and collect annotations
for imagePath in list_images(args["dataset"]):
    #load image and create a BoxSelector instance
    image = cv2.imread(imagePath)
    bs = BoxSelector(image,"Image")
    cv2.imshow("Image",image)
    cv2.waitKey(0)
    #order the points suitable for the Object detector
    pt1,pt2 = bs.roiPts
    (x,y,xb,yb) = [pt1[0],pt1[1],pt2[0],pt2[1]]
    annotations.append([int(x),int(y),int(xb),int(yb)])
    imPaths.append(imagePath)
#save annotations and image paths to disk
annotations = np.array(annotations)
imPaths = np.array(imPaths,dtype="unicode")
np.save(args["annotations"],annotations)
np.save(args["images"],imPaths)
And I get the following errors
I have a folder named '2' where I keep all the scripts, and another folder named selectors where there are two scripts, __init__ and box_selector:
2(folder)
----selectors/
------------__init__.py
------------box_selector.py
----detector.py
----gather_annotations.py
----test.py
----train.py
How can I fix that? The post where I got the code from says something about 'relative imports', but I couldn't fix it. Thank you.
You need to use . notation to access a file inside a folder, so:
from folder.python_file import ClassOrMethod
in your case
from selectors.box_selector import BoxSelector
Having __init__.py in the selectors folder is crucial to making this work.
You can nest as many folders as you like and access them as follows, but each folder has to contain an __init__.py for this to work:
from folder.folder1.folder2.python_file import ClassOrMethod
One possible area of confusion is that there is a different Python standard-library module called "selectors", which is DIFFERENT from the selectors package in this code example.
https://docs.python.org/3/library/selectors.html
I ended up renaming "selectors" (including the directory) in this example to "boxselectors".
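As a quick sanity check (just a sketch), printing the __file__ attribute shows whether Python picked up the local package or the standard-library selectors module:
import selectors
print(selectors.__file__)  # path of the module that was actually imported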
This example is from http://www.hackevolve.com/create-your-own-object-detector/
I am following a book and I typed the example code, but when I run it, it gives these errors. I am using Enthought Canopy along with all the necessary packages. How can I solve this problem? I do not want to use another package, as there are some other steps where I need to use ogr. On Enthought Canopy, I updated ogr but it did not help.
ERROR 6: Unable to load PROJ.4 library (proj.dll), creation of
OGRCoordinateTransformation failed.
here is example code:
from __future__ import print_function
import ogr
import osr
def open_shape_file(file_path):
    #Open the shapefile, get the first layer and returns
    #the ogr datasource.
    datasource=ogr.Open(file_path)
    layer=datasource.GetLayerByIndex(0)
    print ("opening {}".format(file_path))
    print ("Number of feature:{}".format(layer.GetFeatureCount()))
    return datasource
def transform_geometries(datasource, src_epsg, dst_epsg):
    #Transform the coordinates of all geometries in the
    #first layer.
    # Part 1
    src_srs = osr.SpatialReference()
    src_srs.ImportFromEPSG(src_epsg)
    dst_srs = osr.SpatialReference()
    dst_srs.ImportFromEPSG(dst_epsg)
    transformation = osr.CoordinateTransformation(src_srs, dst_srs)
    layer = datasource.GetLayerByIndex(0)
    # Part 2
    geoms = []
    layer.ResetReading()
    for feature in layer:
        geom = feature.GetGeometryRef().Clone()
        geom.Transform(transformation)
        geoms.append(geom)
    return geoms
datasource=open_shape_file("D:/python/python_geospe/exampledata/TM_WORLD_BORDERS/TM_WORLD_BORDERS-0.3.shp")
layer = datasource.GetLayerByIndex(0)
feature = layer.GetFeature(0)
print("Before transformation:")
print(feature.GetGeometryRef())
transformed_geoms = transform_geometries(datasource, 4326, 3395)
print("After transformation:")
print(transformed_geoms[0])
open_shape_file("D:/python/python_geospe/exampledata/TM_WORLD_BORDERS/TM_WORLD_BORDERS-0.3.shp")
Did you set your environment variables correctly? proj.dll is typically located in C:\Program Files (x86)\GDAL. You need to set an environment variable with this path.
I suggest that you follow this installation guide which explains the process of correctly installing GDAL/OGR in a Windows OS.
Another guide: here.
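If changing the system settings is not an option, a sketch like the following, which prepends the assumed GDAL install directory to PATH before ogr/osr are imported, can also let Windows find proj.dll; the paths here are the typical defaults and may differ on your machine:
import os

# assumed default GDAL install location on Windows; adjust to your setup
gdal_dir = r"C:\Program Files (x86)\GDAL"
os.environ["PATH"] = gdal_dir + os.pathsep + os.environ["PATH"]
os.environ.setdefault("GDAL_DATA", os.path.join(gdal_dir, "gdal-data"))

import ogr
import osr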
I am new to Python, or more specifically IPython. I have been running through the steps to run what should be a very simple DICOM conversion of an MRI image file in a statistical package called SPM, as described by NiPype. I can't get it to run and was wondering what I was doing wrong. I am not getting an error message; instead, there is no file change or output, it just hangs. Does anyone have any idea what I might be doing wrong? It's likely that I am missing something very simple here (sorry).
import os
from pylab import *
from glob import glob
from nipype.interfaces.matlab import MatlabCommand as mlab
mlab.set_default_paths('/home/orkney_01/s1252042/matlab/spm8')
from nipype.interfaces.spm.utils import DicomImport as di
os.chdir('/sdata/images/projects/ASD_MM/1/datafiles/restingstate_files')
filename = "reststate_directories.txt"
restingstate_files_list = [line.strip() for line in open(filename)]
for x in restingstate_files_list:
    os.chdir(x)
    y = glob('*.dcm')
    conversion = di(in_files = y)
    print(res.outputs)
You are creating a DicomImport interface, but you are not actually running it. You should have res = di.run().
Also, it is best to tell the interface where to run using di.base_dir = '/some/path' before running it.
Finally, you may also want to print the contents of restingstate_files_list to check that you are finding the DICOM directories correctly.
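Putting that together, the loop might look roughly like this (a sketch only, following the points above; the working-directory path is a placeholder):
for x in restingstate_files_list:
    os.chdir(x)
    y = glob('*.dcm')
    conversion = di(in_files=y)
    conversion.base_dir = '/some/path'  # placeholder working directory
    res = conversion.run()              # actually run the interface
    print(res.outputs)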
So this is a really weird problem I've been getting. I'm basically trying to create a practice codebook which uses SIFT features of images that are clustered by the k-means algorithm in Python. However, whenever I run the code I get the following error:
Traceback (most recent call last):
File "C:\Users\Administrator\Desktop\Python\assignment2\SIFT_Dectection.py", line 34, in <module>
codebook, dis = cluster.vq.kmeans(codebook_construction(files[:20]),3)
File "C:\Python27\lib\site-packages\scipy\cluster\vq.py", line 513, in kmeans
No = obs.shape[0]
AttributeError: 'list' object has no attribute 'shape'
I assume that this is an error within the vq script of the SciPy library. However, I have other friends who are working on this as well, and I am using the exact same code as them with the same scipy library, but I'm still getting this problem. I've also tried completely uninstalling Python and reinstalling everything. I'm running this on Windows 7, by the way. The code I'm using looks something like this:
import cv2
import glob
import numpy as np
from scipy import cluster

files = glob.glob('101_ObjectCategories/*/*.jpg')

def codebook_construction(im):
    codebook = []
    for image in im:
        img = cv2.imread(image)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        sift = cv2.SIFT()
        kp, desc = sift.detectAndCompute(gray, None)
        if codebook == []:
            codebook = desc
        else:
            codebook = np.vstack((codebook, desc))
    return codebook

codebook, dis = cluster.vq.kmeans(codebook_construction(files[:20]), 3)
The glob function there pulls in a library of images I've downloaded from Caltech. I've searched high and low for an answer, but it seems that no one has been having similar problems. Hopefully I can get some guidance here.
The issue looks to be that kmeans is expecting an array, and you're feeding it a list. Try changing the last line of your codebook_construction() function to:
return scipy.array(codebook)
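Equivalently (a sketch, assuming numpy is imported as np in your script), you could convert the stacked descriptors to a float array just before clustering:
# kmeans expects a 2-D float array of observations, not a list
obs = np.asarray(codebook_construction(files[:20]), dtype=np.float64)
codebook, dis = cluster.vq.kmeans(obs, 3)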