AttributeError: 'str' object has no attribute 'name' when using Streamlit - python

I've been trying to replicate the demo website from this repo using Streamlit, but I'm stuck at the step where the image is processed by the model. The error message is AttributeError: 'str' object has no attribute 'name'. But in data.py, the code that reads the image, there is no 'name' attribute. Or am I missing something here?
This is the code snippet:
streamlitdemo.py
@st.cache()
def load_model():
    gpu_ids = []
    model = create_model(gpu_ids)
    model.eval()
    return model

a = 'wave.jpg'
b = 'building.jpg'
c = 'test_samples/madoka.jpg'

def anime2sketch(img_input, load_size=512):
    img, aus_resize = read_img_path(img_input.name, load_size)
    model = load_model()
    aus_tensor = model(img)
    aus_img = tensor_to_img(aus_tensor)
    image_pil = Image.fromarray(aus_img)
    image_pil = image_pil.resize(aus_resize, Image.BICUBIC)
    return image_pil
demo.py
.
.
.
def read_img_path(path, load_size):
    """read tensors from a given image path
    Parameters:
        path (str)      -- input image path
        load_size (int) -- the input size. If <= 0, don't resize
    """
    img = Image.open(path).convert('RGB')
    aus_resize = None
    if load_size > 0:
        aus_resize = img.size
    transform = get_transform(load_size=load_size)
    image = transform(img)
    return image.unsqueeze(0), aus_resize
model.py
.
.
.
def create_model(gpu_ids=[]):
    """Create a model for anime2sketch
    hardcoding the options for simplicity
    """
    norm_layer = functools.partial(nn.InstanceNorm2d, affine=False, track_running_stats=False)
    net = UnetGenerator(3, 1, 8, 64, norm_layer=norm_layer, use_dropout=False)
    ckpt = torch.load('weights/netG.pth')
    for key in list(ckpt.keys()):
        if 'module.' in key:
            ckpt[key.replace('module.', '')] = ckpt[key]
            del ckpt[key]
    net.load_state_dict(ckpt)
    if len(gpu_ids) > 0:
        assert(torch.cuda.is_available())
        net.to(gpu_ids[0])
        net = torch.nn.DataParallel(net, gpu_ids)  # multi-GPUs
    return net
But when I hardcode the path with the a/b/c variables, the model works properly. I've also tried changing read_img_path(img_input.name, load_size) to read_img_path(img_input, load_size), and then I get the error message FileNotFoundError: [Errno 2] No such file or directory: 'wave' instead.
This is the output when I hardcode the path:
In that repo, the author already provides a demo website, but it uses Gradio. When I run the demo code with Gradio, it works properly. I'm using the same code as the author, only tweaked a little bit.
Thank you.
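The Gradio demo likely works because Gradio hands the function a temp-file object whose .name is a real path on disk, while a plain string like a/b/c has no .name attribute, and a Streamlit UploadedFile's .name is only the original filename rather than a path. A minimal sketch of one way to accept both plain paths and Streamlit uploads (assuming the input comes from st.file_uploader; the helper to_image_path is hypothetical, not from the repo):

import os
import tempfile

def to_image_path(img_input):
    """Return a filesystem path for either a plain string path
    or a Streamlit UploadedFile (hypothetical helper)."""
    if isinstance(img_input, str):
        return img_input  # already a path, e.g. 'wave.jpg'
    # UploadedFile is file-like; persist its bytes so that
    # read_img_path, which calls Image.open(path), can open it.
    suffix = os.path.splitext(img_input.name)[1]
    with tempfile.NamedTemporaryFile(delete=False, suffix=suffix) as tmp:
        tmp.write(img_input.getbuffer())
        return tmp.name

def anime2sketch(img_input, load_size=512):
    img, aus_resize = read_img_path(to_image_path(img_input), load_size)
    ...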

Related

AttributeError: 'Dataset' object has no attribute 'image_captions'. Dataset contains image captions though

My dataset.py is as follows:
class Dataset:
    # something
    #########################################################################
    def __init__(self, annotation_file, transforms=None):
        '''
        Arguments:
            annotation_file: path to the annotation file
            transforms: list of transforms (class instances)
                        For instance, [<class 'RandomCrop'>, <class 'Rotate'>]
        '''
        self.annotation_file = annotation_file
        self.transforms = transforms
        self.image_captions = {}  # <-- marked line: the attribute is created here
        with jsonlines.open(self.annotation_file) as reader:
            for obj in reader:
                self.image_captions[obj['image_id']] = obj['caption']
    ##########################################################################
    # something
In dataset.py, I have marked the line of code at which I create the image_captions attribute.
and main.py is as follows:
def experiment(annotation_file, captioner, transforms, outputs):
    '''
    Function to perform the desired experiments
    Arguments:
        annotation_file: Path to annotation file
        captioner: The image captioner
        transforms: List of transformation classes
        outputs: Path of the output folder to store the images
    '''
    # Create the instances of the dataset, download
    ds = dataset.Dataset(annotation_file, transforms)
    dl = download.Download()
    # Print image names and their captions from annotation file using dataset object
    image_captions = ds.image_captions  # <-- marked line: this is where it fails
    print("Image Captions:")
    print(image_captions)
    # Download images to ./data/imgs/ folder using download object
    for name, url in image_captions.items():
        dl(os.path.join('./data/imgs/', name), url)

def main():
    captioner = ImageCaptioningModel()
    experiment('./data/annotations.jsonl', captioner, [flip.FlipImage(), blur.BlurImage(1)], None)  # Sample arguments to call experiment()

if __name__ == '__main__':
    main()
But in main.py it is giving me the error:
line 24, in experiment
    image_captions = ds.image_captions
AttributeError: 'Dataset' object has no attribute 'image_captions'
Clearly it has the attribute 'image_captions'. I have marked the lines to make them easier to see. I am doing a project on image captioning using Lavis.
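Since the __init__ shown clearly sets self.image_captions, one plausible explanation (an assumption, not something the question confirms) is that a different module or class named Dataset is being picked up at import time, for example a stale copy of dataset.py or another package named dataset on the path. A short check like this can confirm which class is actually being instantiated:

import inspect
import dataset  # the local module that should define Dataset

ds = dataset.Dataset('./data/annotations.jsonl', transforms=None)

# Which file does this class actually come from?  If this does not
# print the path of your local dataset.py, another module named
# `dataset` is shadowing it.
print(inspect.getfile(type(ds)))

# Which attributes does the instance really carry?
print(sorted(vars(ds)))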

Clicking on Flask button runs Python code and unpickles files with tokenizer

I am building a website with Flask, and clicking a button is supposed to run my machine learning code, which is in a different .py file. But when I click on that button I get this error:
AttributeError: Can't get attribute 'Tokenizer' on <module '__main__' from 'c:filepath'
I've been told it's because my Tokenizer class can't be found when unpickling the file. But I'm not sure why not, because when I run my machine learning code on its own it works fine; it's only when I click the button through Flask that I get that error. Any help would be much appreciated.
The function I'm trying to run is start("no") from a file called Music_Generator_2.py.
app.py
@app.route('/generated')
def generated():
    print("start")
    Music_Generator_2.start("no")  # from Music_Generator_2
    print("success")
    return render_template('index.html', tested_generator="generated")
The error occurs on the second line of this code
Music_Generator_2.py
model = tf.keras.models.load_model("model_25epochs.h5", custom_objects=SeqSelfAttention.get_custom_objects())
tokenizer = pickle.load(open("tokenizer25.p", "rb"))
#generate from random
max_generate = 200
unique_notes = tokenizer.unique_word
seq_len = 200
generate = generate_from_random(unique_notes, seq_len)
generate = generate_notes(generate, model, unique_notes, max_generate, seq_len)
write_midi_file(generate, tokenizer, "rand test.mid", start=seq_len - 1, fs=7, max_generate=max_generate)
#generate from a note
max_generate = 300
unique_notes = tokenizer.unique_word # same as above
seq_len = 300
generate = generate_from_one_note(tokenizer, "72")
generate = generate_notes(generate, model, unique_notes, max_generate, seq_len)
This is the code that I'm trying to run in my machine learning program:
Music_Generator_2.py
Tokenizer class
class Tokenizer:
    def __init__(self):
        self.notes_to_index = {}
        self.index_to_notes = {}
        self.num_word = 0
        self.unique_word = 0
        self.note_freq = {}

    '''transform a list of notes from strings to indexes
    list_array is a list of notes in string format'''
    def transform(self, list_array):
        transformed = []
        for i in list_array:
            transformed.append([self.notes_to_index[note] for note in i])
        return np.array(transformed, dtype=np.int32)

    '''partial fit on the dictionary of the tokenizer
    notes is a list of notes'''
    def partial_fit(self, notes):
        for note in notes:
            note_str = ",".join(str(n) for n in note)
            if note_str in self.note_freq:
                self.note_freq[note_str] += 1
                self.num_word += 1
            else:
                self.note_freq[note_str] = 1
                self.unique_word += 1
                self.num_word += 1
                self.notes_to_index[note_str] = self.unique_word
                self.index_to_notes[self.unique_word] = note_str

    '''add a new note to the dictionary
    note is the new note to be added as a string'''
    def add_new_note(self, note):
        assert note not in self.notes_to_index
        self.unique_word += 1
        self.notes_to_index[note] = self.unique_word
        self.index_to_notes[self.unique_word] = note
Solved: I moved my Tokenizer class into its own .py file and then imported that file in both app.py and Music_Generator_2.py. I found the solution here.
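For context, pickle stores classes by module path and re-imports them at load time, so a class defined in __main__ is only findable when the same script is the entry point; under Flask, __main__ is the Flask app instead. A minimal sketch of the working layout (the module name tokenizer_def.py is made up for illustration):

# tokenizer_def.py -- contains only the Tokenizer class
class Tokenizer:
    ...

# Music_Generator_2.py (and anything else that unpickles the tokenizer)
import pickle
from tokenizer_def import Tokenizer

# If an old pickle was written while Tokenizer still lived in __main__,
# the class can be aliased back onto __main__ before loading:
import __main__
__main__.Tokenizer = Tokenizer

with open("tokenizer25.p", "rb") as f:
    tokenizer = pickle.load(f)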
This could be an issue with how you are running Flask. Are you running it inside a virtualenv? If so, make sure the correct pip packages are installed. I would check that the environment in which Flask runs is identical to the one where the script works on its own.

AttributeError: 'decode' when reading TIFF images

Here is part of the code I am attempting to run:
import numpy as np
import os
import tensorflow as tf
import imageio
import sys

    # Create tensorflow dataset
    dataset = tf.data.Dataset.from_tensor_slices((image_paths, labels))
    if not is_test:
        dataset = dataset.shuffle(num_of_samples)
        dataset = dataset.repeat(None)
    dataset = dataset.map(self._parse_dataset)
    if not is_test:
        batched_dataset = dataset.batch(self.batch_size, drop_remainder=True).prefetch(20)
    else:
        batched_dataset = dataset.batch(self.test_batch_size)
    # Create the iterator
    return batched_dataset, num_of_samples, path_strings

def get_batch(self, subset="train"):
    batch_of_images = self.iterators[subset].get_next()
    return batch_of_images

def _read_tif(self, file_path):
    file_path = file_path.decode(sys.getdefaultencoding())
    try:
        im = imageio.imread(file_path)
    except:
        im = np.zeros((self.width, self.height, 3))
    if len(im.shape) != 3:
        im = np.repeat(im[:, :, np.newaxis], 3, axis=2)
    return im

def _read_image(self, file_path):
    return tf.py_function(func=self._read_tif, inp=[file_path], Tout=tf.uint8)
and I have the following error coming up:
File "C:\PROJECTS_RUNNING2\pipeline\data_loader\data_generator.py", line 131, in _read_tif
file_path = file_path.decode(sys.getdefaultencoding())
AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'decode'
The file_path is defined in run.py, which looks like this:
def main(config_file_path):
    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
    config = tf.ConfigProto(gpu_options=gpu_options)
    sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
    tf.reset_default_graph()
    config = parse_config_file(config_file_path)
    # Create the experiment output folders, this is where the outputs will be saved
    output_folder_path = config["output_path"]
    output_path = create_output_folder(output_folder_path, config["experiment_name"])
    copyfile(config_file_path, os.path.join(output_path, "%s_parameters.json" % config["experiment_name"]))
    data_generator = DataGenerator(config)
Input and Output dataset file paths are correctly defined in the config file.
I am very much a beginner in coding, but I have to use this script for the analysis of my images and I am struggling to get it up and running. I'm using Python 3.7 and TensorFlow 1.14. Any help resolving this error would be much appreciated!
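Inside a tf.py_function, arguments arrive as EagerTensor objects rather than Python bytes, which is why .decode fails. The usual fix (a sketch, not tested against this exact pipeline) is to pull the bytes out with .numpy() before decoding:

def _read_tif(self, file_path):
    # file_path is an EagerTensor here; .numpy() yields the raw bytes,
    # which can then be decoded into a normal Python string.
    if isinstance(file_path, tf.Tensor):
        file_path = file_path.numpy()
    if isinstance(file_path, bytes):
        file_path = file_path.decode("utf-8")
    try:
        im = imageio.imread(file_path)
    except Exception:
        im = np.zeros((self.width, self.height, 3))
    if len(im.shape) != 3:
        im = np.repeat(im[:, :, np.newaxis], 3, axis=2)
    return im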

Why does "tf.data.Dataset.from_tensor_slices" print all paths of images in output?

I'm writing code to read images in TensorFlow, following this tutorial. The problem is that when I use the command tf.data.Dataset.from_tensor_slices((all_image_paths, all_image_labels)), all of the image paths show up in the output console.
This is the code which I'm using:
def get_image_info(dir_path, file_url=None, file_name=None, untar=True):
    if file_url != None:
        dir_path = tf.keras.utils.get_file(fname=file_name, origin=file_url, untar=untar)
    data_root = pathlib.Path(dir_path)
    all_image_paths = list(data_root.glob('*/*'))
    label_names = sorted(item.name for item in data_root.glob('*/') if item.is_dir())
    label_dict = dict((name, index) for index, name in enumerate(label_names))
    all_image_labels = [label_dict[pathlib.Path(path).parent.name] for path in all_image_paths]
    return data_root, label_dict, all_image_paths, all_image_labels

def load_image_dataset(dir_path, file_url=None, file_name=None, untar=True):
    def load_and_preprocess_from_path_label(path, label):
        return load_and_preprocess_image(path), label

    data_root, label_dict, all_image_paths, all_image_labels = get_image_info(dir_path, file_url, file_name, untar)
    image_label_ds = tf.data.Dataset.from_tensor_slices((all_image_paths, all_image_labels))
    # image_label_ds = ds.map(load_and_preprocess_from_path_label)
    return image_label_ds, label_dict

image_label_ds, label_dict = load_image_dataset('', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', 'flower_photos')
and this is a section of my output:
, WindowsPath('C:/Users/hajba/.keras/datasets/flower_photos/tulips/8838914676_8ef4db7f50_n.jpg'), WindowsPath('C:/Users/hajba/.keras/datasets/flower_photos/tulips/8838975946_f54194894e_m.jpg'), ..., WindowsPath('C:/Users/hajba/.keras/datasets/flower_photos/tulips/9976515506_d496c5e72c.jpg')]. Consider casting elements to a supported type.
For those who face this problem: this error comes from TensorFlow trying to build a tensor out of the image paths and dumping the whole list in the error message. I'm on Windows, and to solve the error I converted the path objects (WindowsPath) to strings with this line of code:
all_image_paths_str = list(map(lambda x: str(x), all_image_paths))
and then used that to generate the output dataset tensor:
image_label_ds = tf.data.Dataset.from_tensor_slices((all_image_paths_str, all_image_labels))
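The same conversion can also be written without the lambda, since str works directly as the mapping function:

all_image_paths_str = [str(p) for p in all_image_paths]
# or equivalently: list(map(str, all_image_paths))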

Unable to convert os.path.split(imagePath)[-1].split('.')[1] to integer

I am trying to create face recognition software using OpenCV, but the code I found is written in Python 2. Is there a Python 3 version of it?
Here's the link: https://github.com/thecodacus/Face-Recognition
I already have a folder for dataset and trainer.
import cv2
import numpy as np
from PIL import Image
import os

# Path for face image database
path = 'dataset'
recognizer = cv2.face.LBPHFaceRecognizer_create()
detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

# function to get the images and label data
def getImagesAndLabels(path):
    imagePaths = [os.path.join(path, f) for f in os.listdir(path)]
    faceSamples = []
    ids = []
    for imagePath in imagePaths:
        PIL_img = Image.open(imagePath).convert('L')  # convert it to grayscale
        img_numpy = np.array(PIL_img, 'uint8')
        id = int(os.path.split(imagePath)[-1].split('.')[1])
        faces = detector.detectMultiScale(img_numpy)
        for (x, y, w, h) in faces:
            faceSamples.append(img_numpy[y:y+h, x:x+w])
            ids.append(id)
    return faceSamples, ids

print("\n [INFO] Training faces. It will take a few seconds. Wait ...")
faces, ids = getImagesAndLabels(path)
recognizer.train(faces, np.array(ids))

# Save the model into trainer/trainer.yml
recognizer.write('trainer/trainer.yml')  # recognizer.save() worked on Mac, but not on Pi

# Print the number of faces trained and end program
print("\n [INFO] {0} faces trained. Exiting Program".format(len(np.unique(ids))))
Error:
Traceback (most recent call last):
  File "/Users/user/Desktop/FacialRecognition/02_face_training.py", line 46, in <module>
    faces,ids = getImagesAndLabels(path)
  File "/Users/user/Desktop/FacialRecognition/02_face_training.py", line 36, in getImagesAndLabels
    id = int(os.path.split(imagePath)[-1].split('.')[1])
ValueError: invalid literal for int() with base 10: 'User'
On that repository there's a dataSet directory, with files named like:
In [665]: name='Face-Recognition/dataSet/face-1.1.jpg'
Applied to that name, your code sample does:
In [668]: os.path.split(name)
Out[668]: ('Face-Recognition/dataSet', 'face-1.1.jpg')
In [669]: os.path.split(name)[-1]
Out[669]: 'face-1.1.jpg'
In [670]: os.path.split(name)[-1].split('.')
Out[670]: ['face-1', '1', 'jpg']
In [671]: os.path.split(name)[-1].split('.')[1]
Out[671]: '1'
In [672]: int(os.path.split(name)[-1].split('.')[1])
Out[672]: 1
Apparently your file has a different name format, one that includes 'User' in a slot where this code expects a number.
You need to correct the file name, or change this parsing code.
The image names you have in the dataset are of the form User.somename, so remove 'User' from all the image names.
Try changing the image name format to 'face.1.1.jpg'; then you can split on the dots with this code:
faceID = int(os.path.split(imagePath)[-1].split(".")[2])
This did the job for me:
Id=int(os.path.split(imagePath)[-1].split(".")[0])
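If the dataset mixes naming schemes, a slightly more defensive variant (a sketch; it assumes the id is the first run of digits in the filename, as in 'User.1.2.jpg' or 'face-1.1.jpg') avoids hard-coding the split index:

import os
import re

def extract_id(image_path):
    # Take the filename and pull out the first run of digits,
    # e.g. 'User.1.2.jpg' -> 1, 'face-1.1.jpg' -> 1.
    fname = os.path.split(image_path)[-1]
    match = re.search(r'\d+', fname)
    if match is None:
        raise ValueError("no numeric id in %r" % fname)
    return int(match.group())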
