I'm trying to implement an object detection model using Turi Create.
For simplicity I'm using just one photo, located on my Desktop inside a folder called CondaMLProject.
Using Label Studio I placed the label on this photo and exported the annotations as a CSV file.
I noticed that the CSV file looks like this:
As you can see, the image column reports a wrong link to the photo; it is not the path on my Desktop.
When I run the Python script for Turi Create to join the images to the annotations, I get an error:
import turicreate as tc
path = "Spikosmall"
pathAnnotation = "Spikosmall/annotation.csv"
images = tc.load_images(path, with_path=True)
annotation = tc.SFrame(pathAnnotation)
data = images.join(annotation)
Error:
Columns image and image do not have the same type in both SFrames
How can I solve this issue?
I'm not very experienced with Python; I'm looking for some code to iterate over the image column and change each link so it matches the photo folder.
Is there any other solution?
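In case a concrete example helps, here is a minimal sketch (untested, and assuming the Label Studio CSV stores the image reference as a string in a column named image, and that the file name at the end of each link matches a file inside the Spikosmall folder) of rewriting those links and joining on the string path column instead of the Image-typed column:
import os
import turicreate as tc

folder = "Spikosmall"
annotation = tc.SFrame(os.path.join(folder, "annotation.csv"))

# load_images() produces a "path" column (string) and an "image" column (Image type);
# the CSV's "image" column is a plain string, which is why joining on "image" fails.
annotation = annotation.rename({"image": "path"})
annotation["path"] = annotation["path"].apply(
    lambda link: os.path.join(folder, os.path.basename(link))
)

images = tc.load_images(folder, with_path=True)
data = images.join(annotation, on="path")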
My goal is to detect tables in images and transform such tables into CSV files.
I am using CascadeTabNet to detect the location of tables.
Their example works great (https://colab.research.google.com/drive/1lzjbBQsF4X2C2WZhxBJz0wFEQor7F-fv?usp=sharing). The problem is that it does not show at the end how to save a detected table as a separate image (nor how to actually turn that table into a CSV file, but for that I can use other code).
This is what they had:
from mmdet.apis import init_detector, inference_detector, show_result_pyplot
import mmcv
# Load model
config_file = '/content/CascadeTabNet/Config/cascade_mask_rcnn_hrnetv2p_w32_20e.py'
checkpoint_file = '/content/epoch_36.pth'
model = init_detector(config_file, checkpoint_file, device='cuda:0')
# Test a single image
img = "/content/CascadeTabNet/Demo/demo.png"
# Run Inference
result = inference_detector(model, img)
# Visualization results
show_result_pyplot(img, result,('Bordered', 'cell', 'Borderless'), score_thr=0.85)
I tried to add the following to their code:
model.show_result(img, result, out_file='result.jpg')
And I get the following error:
TypeError: show_result() got an unexpected keyword argument 'out_file'
I also tried adding the "out_file" inside the "show_result_pyplot":
show_result_pyplot(img, result,('Bordered', 'cell', 'Borderless'), score_thr=0.85, out_file='result.jpg')
And I get the same error: show_result() got an unexpected keyword argument 'out_file'
Finally, I tried saving the image using pyplot from matplotlib, but it saved the whole page rather than just the cropped table area:
plt.savefig('foo.png')
I have been looking for solutions (How to crop an image in OpenCV using Python, How to detect symbols on a image and save it?, How to save cropped image using openCV?, How to save the detected object into a new image ...), and they all include cv2, but that did not work for me either.
In short, how can I crop the table area and save it as a new image after using "inference_detector()"?
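For what it's worth, here is a rough sketch of one way the cropping could be done, assuming the model is the Cascade Mask R-CNN loaded above, so that the result returned by inference_detector() is a (bbox_results, mask_results) tuple where bbox_results[i] is an N x 5 array of [x1, y1, x2, y2, score] boxes for class i, in the same order as ('Bordered', 'cell', 'Borderless'):
import cv2

image = cv2.imread(img)
bbox_results = result[0] if isinstance(result, tuple) else result

score_thr = 0.85
saved = 0
for class_id in (0, 2):  # the two table classes; index 1 is 'cell'
    for x1, y1, x2, y2, score in bbox_results[class_id]:
        if score < score_thr:
            continue
        crop = image[int(y1):int(y2), int(x1):int(x2)]
        cv2.imwrite("table_%d.png" % saved, crop)
        saved += 1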
I'm trying to create an interactive pyLDAvis visualization for an LDA model I have built, and although I am able to create a static visualization, I am struggling with the following:
Outputting a dynamic visual that interacts with the 'relevance metric' slider on the top right hand corner of the screen. The words per topic, the saliency measures, etc. do not change when I adjust the relevance metric.
After being unable to display an interactive visual through a saved html file or on a webpage, I tried to display the visual in Spyder, using the following commands:
In: pyLDAvis.display(vis_data)
Out: <IPython.core.display.HTML object>
and
In: pyLDAvis.show(vis_data)
Out: FileNotFoundError: [Errno 2] No such file or directory: 'https://cdn.jsdelivr.net/gh/bmabey/pyLDAvis#3.3.1/pyLDAvis/js/ldavis.v1.0.0.css'
Below is the code that I have used to create the model and the plot. I'd appreciate it if I could get help in creating an interactive html file or in displaying the result in an IDE (preferably Spyder). My python version is 3.8.3. Thanks!
from gensim.corpora import Dictionary
from gensim.models import LdaModel
import pyLDAvis
import pyLDAvis.gensim_models as gensimvis

# Calling File Cleanser on a file
fileTextTokenized = FileCleanser(mahabharatatext)
# Converting each word into a word-count dictionary format
dictionary = Dictionary(fileTextTokenized)
dictionary.filter_extremes(no_below = 100, no_above = 0.8)
gensim_corpus = [dictionary.doc2bow(word) for word in fileTextTokenized]
temp = dictionary[0]
id2word = dictionary.id2token
# Setting model parameters
chunksize = 2000
passes = 20
iterations = 400
num_topics = 6
lda_model = LdaModel(
corpus=gensim_corpus,
id2word=id2word,
chunksize=chunksize,
alpha='auto',
eta='auto',
iterations=iterations,
num_topics=num_topics,
passes=passes
)
vis_data = gensimvis.prepare(lda_model, gensim_corpus, dictionary)
vis_data
pyLDAvis.display(vis_data)
pyLDAvis.save_html(vis_data, './FileModel'+ str(num_topics) +'.html')
I am also creating an LDA visualization and it works for me with my data. I just used your last two pieces of code (adding sort_topics=False):
vis_data = pyLDAvis.gensim_models.prepare(model, corpus, dictionary, sort_topics=False)
pyLDAvis.save_html(vis_data, 'path/doc_name.html')
Once you have saved the HTML file you can click on it and it will open in your browser, where you can change the topic and the lambda value (I am using Chrome).
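If you would rather open the result from the script instead of clicking the file, a small sketch (using only the standard library on top of the save_html call above) could be:
import os
import webbrowser

out_path = os.path.abspath('./FileModel' + str(num_topics) + '.html')
pyLDAvis.save_html(vis_data, out_path)
webbrowser.open('file://' + out_path)  # the saved page keeps the interactive relevance slider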
I have written a function that loads every image from a given folder. I am now trying to write code that will either 1) load a specific image when indexed, and/or 2) load an image at random. I have attached two screenshots of my code and the error I am receiving.
https://i.stack.imgur.com/nQKrV.png
https://i.stack.imgur.com/toXkI.png
It looks like you need to concatenate the directory path with the file name:
with open(os.path.join(rat110_GF_path, random_filename)) as file:
    lines = file.readlines()
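Putting both parts of the question together, a minimal sketch (assuming rat110_GF_path is the folder from your screenshots and the files can be read as text; swap readlines() for your image loader otherwise) could look like this:
import os
import random

filenames = sorted(os.listdir(rat110_GF_path))

specific_filename = filenames[0]            # 1) pick a specific file by index
random_filename = random.choice(filenames)  # 2) pick a file at random

with open(os.path.join(rat110_GF_path, random_filename)) as file:
    lines = file.readlines()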
I'm a beginner in Python and I was wondering why my "app" won't display my image, even though it displays everything else fine. I'm using guizero. The image file is in the same folder as the code file.
Error:
Traceback GUIZERO ERROR
Image import error - 'couldn't recognize data in image file "lad.png"'
Check the file path and image type is GIF/PNG
Here's my code ↓
from guizero import App, Text, Picture
app = App("Wanted!")
app.bg = "#FFFBB0"
wanted_text = Text(app, "WANTED")
wanted_text.text_size = 50
wanted_text.font = "Times New Roman"
cat = Picture(app, image="lad.png")
app.display()
Thanks!
I was facing the same issue. I changed my image file: I downloaded a new PNG from the internet and put it in the same folder as my program, and then it ran fine.
The first image was actually a JPG that I had converted to PNG with an online converter; I think it didn't convert properly.
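If you would rather repair the existing file than download a new one, one possible approach (just a sketch, assuming the file is really a JPEG that was only renamed or badly converted to .png) is to re-save it with Pillow:
from PIL import Image  # pip install pillow

img = Image.open("lad.png")          # Pillow detects the real format from the file data
img.save("lad_fixed.png", format="PNG")
# then point guizero at the re-saved file:
# cat = Picture(app, image="lad_fixed.png")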
I am having a problem with the usdpython tool delivered by Apple.
I'm trying to convert a file.obj linked to a material.mtl file exported by Cinema 4D. Inside material.mtl, Cinema 4D links each material to a specific texture inside the /Faces folder.
This is my folder structure:
/Faces
/file.obj
/material.mtl
and this is the command I'm trying to launch:
usdzconvert file.obj -v
I have also tried to add material references with the -m flag, but what I get is a 3D object without textures.
Obj files don't have any materials or animation attached to them. Why don't you try converting an FBX file instead?
I tried the usdz tools on .obj files to change colours and add textures. I added a green colour to the material, copied the file, and ran the command usdzconvert file.obj -m Green. The material did not have any effect on the usdz file. I think it is better to apply the textures via command-line arguments.
Example: usdzconvert vase.obj -m bodyMaterial -diffuseColor body.png -opacity a body.png -metallic r metallicRoughness.png -roughness g metallicRoughness.png -normal normal.png -occlusion ao.png
I tried the material part but it did not work. If you have textures, give them through the command-line arguments. Refer to this link, a half-hour video.