KeyError: 'max_overlaps' on TensorFlow Faster R-CNN - python

I tried to run a Python Faster R-CNN based on TensorFlow, cloned from https://github.com/smallcorgi/Faster-RCNN_TF
I built a dataset myself and rewrote the data API to make the dataset fit. The images in the dataset are all composed of meaningless background and text.
I have a .txt file that records the text location in an image, such as
ID_card/train/3.jpg 1 209 39 261 89
My goal is to find text in a new image.
But when I run
python ./tools/train_net.py --device cpu --device_id 1 --solver VGG_CNN_M_1024 --weight ./data/pretrain_model/VGG_imagenet.npy --imdb ID_card_train --network IDcard_train
I got this KeyError: 'max_overlaps'. Here is the terminal record and error traceback.
Traceback (most recent call last):
File "./tools/train_net.py", line 97, in <module>
max_iters=args.max_iters)
File"/Users/jay_fu/tasks/catpatch/ClickCatpatch/tools/../lib/fast_rcnn/train.py", line 259, in train_net
roidb = filter_roidb(roidb)
File"/Users/jay_fu/tasks/catpatch/ClickCatpatch/tools/../lib/fast_rcnn/train.py", line 250, in filter_roidb
filtered_roidb = [entry for entry in roidb if is_valid(entry)]
File"/Users/jay_fu/tasks/catpatch/ClickCatpatch/tools/../lib/fast_rcnn/train.py", line 239, in is_valid
overlaps = entry['max_overlaps']
KeyError: 'max_overlaps'
I googled and tried deleting the /cache folder, but it didn't work: the next time I ran the code, the folder and the .pkl file were created again, and the same error came out.
Another answer said to delete the lib/datasets/VOCdevkit-matlab-wrapper folder; however, smallcorgi/Faster-RCNN_TF does not contain that folder, so that was a dead end.
I wonder what is wrong with my code and what could cause this error. Can anyone offer a solution, or even just an idea?
Edit: following #VegardKT's idea, I ran the demo, and it works fine:
the terminal reports success and figures 1-5 show up.
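For anyone hitting the same error: 'max_overlaps' is not part of the raw annotations; as far as I can tell, in the py-faster-rcnn lineage it is added when the roidb is enriched before training (prepare_roidb in the roi_data_layer code), so a custom data API that skips that step, or a stale cache .pkl written before it ran, produces exactly this KeyError. A minimal sanity check (the function name and message wording are my own, not the repo's):

```python
def check_roidb(roidb):
    """Report roidb entries that lack the key the training filter
    expects. If any are missing, the roidb most likely came from a
    stale cache .pkl, or the dataset code skipped the preparation
    step that computes overlap statistics."""
    missing = [i for i, entry in enumerate(roidb)
               if 'max_overlaps' not in entry]
    if missing:
        print("%d of %d entries lack 'max_overlaps' (first bad index: %d)"
              % (len(missing), len(roidb), missing[0]))
    return len(missing) == 0
```

Calling something like this right after the roidb is built (and after deleting data/cache) shows whether the dataset code ever attaches the key, which narrows the bug down to the custom data API rather than the training loop.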

Related

Why is np.load not able to load a file saved using np.save() without allow_pickle=True

I am trying to save a 2-D array as an .npy file using np.save(). It saves without any error and without using any pickling. But when I load the file, I get the following error traceback:
Traceback (most recent call last):
File "final_model.py", line 144, in <module>
a=create_bags('abnormal')
File "final_model.py", line 130, in create_bags
video_feature=np.load(DATASET_ROOT+'train/features/'+flag+'/'+file)
File "/home/aditya_vartak_quantiphi_com/anaconda3/envs/v/lib/python3.8/site-packages/numpy/lib/npyio.py", line 452, in load
return format.read_array(fid, allow_pickle=allow_pickle,
File "/home/aditya_vartak_quantiphi_com/anaconda3/envs/v/lib/python3.8/site-packages/numpy/lib/format.py", line 739, in read_array
raise ValueError("Object arrays cannot be loaded when "
ValueError: Object arrays cannot be loaded when allow_pickle=False
I researched it on the internet, only to find one statement saying something like:
"The array might not be loaded properly; that's why np.load is considering it an object array."
So what I did was write a test for that file (let's call it error_file):
f_b=feature_extractor(file_loc)
np.save(target_path,f_b)
feature=np.load(target_path)
print(feature, feature.shape)
This gives the expected results without throwing an error. But when I use it inside a function that takes all the feature .npy files one by one and loads each to print its contents, execution stops at exactly the point where it encounters error_file, with the traceback above.
The test code should have replaced the wrongly formed feature .npy with a right one, so the error doesn't seem to be with the file, but with np.save itself.
Addendum: the function works for all the .npy files before it, even though they all went through the same procedure during feature formation.
Any help would be great.
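For reference, this is reproducible whenever one of the saved arrays is ragged: np.save then stores it as an object array (pickling it silently), and np.load refuses object arrays unless told otherwise. A small sketch, with made-up shapes and a temporary path:

```python
import os
import tempfile
import numpy as np

# Two feature blocks with different shapes cannot form a regular
# 2-D array, so NumPy must store them as an object array of arrays.
ragged = np.empty(2, dtype=object)
ragged[:] = [np.zeros((3, 5)), np.zeros((2, 5))]

path = os.path.join(tempfile.mkdtemp(), "feat.npy")
np.save(path, ragged)          # np.save pickles object arrays by default

try:
    np.load(path)              # default allow_pickle=False -> ValueError
except ValueError as e:
    print("plain load failed:", e)

feats = np.load(path, allow_pickle=True)
print(feats[0].shape, feats[1].shape)
```

If every file was supposed to hold a regular 2-D array, the real fix is upstream: check feature.shape and feature.dtype right after feature_extractor returns, since one file coming out ragged (or as dtype=object) is what forces the pickle path in the first place.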

Are there any limits on saving results to S3 from SageMaker Processing?

※ I used Google Translate; if you have any questions, let me know!
I am trying to run a Python script on four large datasets using SageMaker Processing. My current situation is as follows:
the script runs with three of the datasets
it fails with only one dataset (the biggest, though it has the same structure as the others)
for all four datasets the script itself finishes, so I suspect the error occurs in S3, i.e., when copying the SageMaker result to S3
The error I got is this InternalServerError.
Traceback (most recent call last):
File "sagemaker_train_and_predict.py", line 56, in <module>
outputs=outputs
File "{xxx}/sagemaker_constructor.py", line 39, in run
outputs=outputs
File "{masked}/.pyenv/versions/3.6.8/lib/python3.6/site-packages/sagemaker/processing.py", line 408, in run
self.latest_job.wait(logs=logs)
File "{masked}/.pyenv/versions/3.6.8/lib/python3.6/site-packages/sagemaker/processing.py", line 723, in wait
self.sagemaker_session.logs_for_processing_job(self.job_name, wait=True)
File "{masked}/.pyenv/versions/3.6.8/lib/python3.6/site-packages/sagemaker/session.py", line 3111, in logs_for_processing_job
self._check_job_status(job_name, description, "ProcessingJobStatus")
File "{masked}/.pyenv/versions/3.6.8/lib/python3.6/site-packages/sagemaker/session.py", line 2615, in _check_job_status
actual_status=status,
sagemaker.exceptions.UnexpectedStatusException: Error for Processing job sagemaker-vm-train-and-predict-2020-04-12-04-15-40-655: Failed. Reason: InternalServerError: We encountered an internal error. Please try again.
There may be an issue transferring the output data to S3 if the output is generated at a high rate and its size is too large.
You can 1) try to slow down writing the output a bit, or 2) call S3 from your algorithm container to upload the output directly using the boto3 client (https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html).
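A minimal sketch of option 1, writing the processing output in chunks with a pause between them so it is not produced in one large burst. The function name, chunk size, and pause are all illustrative, not SageMaker API:

```python
import time

def write_output_slowly(rows, path, chunk_size=1000, pause=0.05):
    """Write `rows` (strings) to `path` in chunks, flushing and
    pausing between chunks so the output file grows at a steadier
    rate instead of appearing all at once at job end."""
    with open(path, "w") as f:
        for i in range(0, len(rows), chunk_size):
            f.writelines(line + "\n" for line in rows[i:i + chunk_size])
            f.flush()
            time.sleep(pause)
```

Option 2 (uploading directly with the boto3 S3 client from inside the container) bypasses the managed output transfer entirely, at the cost of handling credentials and retries yourself.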

cannot load mat file into python with scipy.io or hdf5storage

I have tried several ways to load my .mat file into Python; I eventually want the structure in the .mat file to be a NumPy array. I am not sure how best to post this question: I may need to upload my .mat file, because the steps I am trying seemed to work for everyone else, which suggests there is a problem with the file itself.
First, I tried:
import scipy.io as sio
mat_contents = sio.loadmat('filename.mat')
This gave the same error message (listed below) as when I installed hdf5storage and h5py. I have MATLAB version 9.3 and Python 3.5.3.
This also gave the same error message as below:
import hdf5storage
mat = hdf5storage.loadmat('filename.mat')
The error from both those tries is:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/u1/usr/.conda/envs/mypython/lib/python3.5/site-packages/hdf5storage/__init__.py", line 1801, in loadmat
**keywords)
File "/u1/usr/.conda/envs/mypython/lib/python3.5/site-packages/scipy/io/matlab/mio.py", line 135, in loadmat
MR = mat_reader_factory(file_name, appendmat, **kwargs)
File "/u1/usr/.conda/envs/mypython/lib/python3.5/site-packages/scipy/io/matlab/mio.py", line 59, in mat_reader_factory
mjv, mnv = get_matfile_version(byte_stream)
File "/u1/usr/.conda/envs/mypython/lib/python3.5/site-packages/scipy/io/matlab/miobase.py", line 235, in get_matfile_version
maj_ind = int(tst_str[2] == b'I'[0])
IndexError: index out of range
My .mat file contains a structure 1x1 which has several fields of different sizes. I am mostly a python person, and am only using matlab to output files which I intend to analyze in python.
#hpaulj thanks, your comment made me reload the file; I think it was somehow corrupted. I cannot trace what happened to it, but the solution to this question was to check the file. The steps listed above in the question are correct. (I'm new to Stack Overflow, and I'm pretty sure you cannot accept a comment as an answer, so hopefully you will get credit here because I tagged your name.)
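For future readers: a corrupted or truncated .mat file can often be spotted from its first bytes before loadmat ever sees it. v5/v7 MAT-files begin with a 116-byte ASCII text header starting with "MATLAB 5.0", and v7.3 files are HDF5 containers whose user block starts with "MATLAB 7.3". A best-effort sniffer (the function name and return strings are mine):

```python
def sniff_mat_version(path):
    """Best-effort check of a .mat file's text header.
    Returns a human-readable guess at the format."""
    with open(path, "rb") as f:
        head = f.read(116)
    text = head.decode("ascii", errors="replace")
    if text.startswith("MATLAB 5.0"):
        return "v5/v7 MAT-file (scipy.io.loadmat should work)"
    if text.startswith("MATLAB 7.3"):
        return "v7.3 MAT-file (HDF5; use h5py or hdf5storage)"
    return "unrecognized header - possibly truncated or corrupted"
```

The IndexError in the traceback above comes from exactly this kind of header inspection inside scipy (get_matfile_version reading the first bytes), so an unrecognized header here and that IndexError point at the same root cause: a damaged file.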

TypeError when using MoviePy

In trying to learn a little about MoviePy, I copied some sample code (which I modified slightly) that cuts a 10 second section out of a movie file, overlays text on it, and writes it as a different file. The code works perfectly...only for certain files. I have two video files that I wanted to use the code on (just for practice). Both are .mov files, both are on the same drive and both of the paths are correct (I have verified them multiple times). The problem is I'm getting a TypeError on one of the files while it works perfectly on the other. Here's the code:
from moviepy.editor import *

x = int(input("When do you want the cut to start? "))
y = int(input("When do you want the cut to end? "))
# Raw strings keep the Windows backslashes from being read as escapes:
video = VideoFileClip(r"D:\Videos\Gatlinburgh Drone River 2.MOV").subclip(x, y)
##video = VideoFileClip(r"D:\SF_ep\T_R_D.mov").subclip(x, y)  # Path is correct
txt_clip = (TextClip("The Red Dot episode", fontsize=70, color='white')
            .set_position('center')
            .set_duration(10))
result = CompositeVideoClip([video, txt_clip])
result.write_videofile("Text on Screen.webm", fps=25)
The above example works perfectly. However, when I comment it out and uncomment the video right below it, I get the following error:
Traceback (most recent call last):
File "C:\Users\Sam\Python Projects\MoviePy\Example3c.py", line 15, in <module>
video = VideoFileClip("D:\\Seinfeld_All_Episodes\\The_Red_Dot.mov").subclip(x,y)
File "C:\Python34\lib\site-packages\moviepy\video\io\VideoFileClip.py", line 82, in __init__
nbytes = audio_nbytes)
File "C:\Python34\lib\site-packages\moviepy\audio\io\AudioFileClip.py", line 63, in __init__
buffersize=buffersize)
File "C:\Python34\lib\site-packages\moviepy\audio\io\readers.py", line 70, in __init__
self.buffer_around(1)
File "C:\Python34\lib\site-packages\moviepy\audio\io\readers.py", line 234, in buffer_around
self.buffer = self.read_chunk(self.buffersize)
File "C:\Python34\lib\site-packages\moviepy\audio\io\readers.py", line 123, in read_chunk
self.nchannels))
TypeError: 'float' object cannot be interpreted as an integer
I'm not changing any code, I'm just pointing to a different file. I've tried the same with other files and gotten the same error. Why would it work on one and not the other? Any thoughts?
A similar question has been asked on Stack Overflow before, but there weren't any solid answers (at least none that applied to my particular situation).
Any help would be great. Thanks!
After searching around a bit more, I found a solution here. Line 122 of readers.py was returning a float instead of an integer because it used a single "/" instead of a double "//". I changed that line and it seems to have solved the problem. Details are at the link.
For the record, I still don't understand why it happened with certain files and not others. Nevertheless, the fix was simple.
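The underlying Python 3 behavior, for anyone wondering why the file mattered: "/" always produces a float, and a float is rejected wherever an integer count is required, so the bug only surfaced for files whose audio exercised the affected chunk-size computation. A stripped-down illustration (the variable names are mine, not MoviePy's):

```python
chunk_bytes = 1024
bytes_per_frame = 4

frames_float = chunk_bytes / bytes_per_frame   # true division -> 256.0, a float
frames_int = chunk_bytes // bytes_per_frame    # floor division -> 256, an int

# A float is not accepted where an integer is required,
# which is exactly the TypeError from the traceback:
try:
    list(range(frames_float))
except TypeError as e:
    print(e)  # 'float' object cannot be interpreted as an integer

print(list(range(frames_int))[:3])  # [0, 1, 2]
```

This is also why the same code ran fine under Python 2, where "/" on two ints still floored: libraries ported casually from Python 2 to 3 tend to hit it only on certain inputs.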

Sound files in PsychoPy won't load

I'm currently working on building an experiment in PsychoPy (v1.82.01 stand-alone). I started on the project several months ago with an older version of PsychoPy.
It worked great and I ran some pilot subjects. We have since adjusted the stimulus sounds, and now it won't run.
It looks like there is an issue with referencing the sound file, but I can’t figure out what’s going on.
I recreated the first part of the experiment with a single file rather than a loop so that it would be easier to debug. The sound file is referenced using:
study_sound = sound.Sound(u'2001-1.ogg', secs=-1)
When I run it, I get this output:
Running: /Users/dkbjornn/Desktop/Test/test.py
2016-04-29 14:05:43.164 python[65267:66229207] ApplePersistenceIgnoreState: Existing state will not be touched. New state will be written to /var/folders/9f/3kr6zwgd7rz95bcsfw41ynw40000gp/T/org.psychopy.PsychoPy2.savedState
0.3022 WARNING Movie2 stim could not be imported and won't be available
sndinfo: failed to open the file.
Traceback (most recent call last):
File "/Users/dkbjornn/Desktop/Test/test.py", line 84, in <module>
study_sound = sound.Sound(u'2001-1.ogg', secs=-1)
File "/Applications/PsychoPy2.app/Contents/Resources/lib/python2.7/psychopy/sound.py", line 380, in __init__
self.setSound(value=value, secs=secs, octave=octave, hamming=hamming)
File "/Applications/PsychoPy2.app/Contents/Resources/lib/python2.7/psychopy/sound.py", line 148, in setSound
self._setSndFromFile(value)
File "/Applications/PsychoPy2.app/Contents/Resources/lib/python2.7/psychopy/sound.py", line 472, in _setSndFromFile
start=self.startTime, stop=self.stopTime)
File "/Applications/PsychoPy2.app/Contents/Resources/lib/python2.7/pyolib/tables.py", line 1420, in setSound
saved data to u'/Users/dkbjornn/Desktop/Test/data/99_test_2016_Apr_29_1405_1.csv'
_size, _dur, _snd_sr, _snd_chnls, _format, _type = sndinfo(path)
TypeError: 'NoneType' object is not iterable
The important thing here is the sndinfo: failed to open the file. message. Most likely, PsychoPy cannot find your file on disk. Check the following:
Is the file 2001-1.ogg in the same folder as your experiment? Not in a subfolder? And have you accidentally changed your working directory, e.g. using os.chdir?
Is it actually called 2001-1.ogg? Differences in uppercase/lowercase, spaces, etc. all count.
Alternatively, there may be something in the particular way the .ogg was saved that causes the problem, even though the Sound class can read a large set of sound codecs. Try exporting the sound file in other formats, e.g. .wav or .mp3.
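The first two checks can be automated with a small pre-flight helper run just before sound.Sound is created. The helper name and messages here are my own, not part of PsychoPy:

```python
import os

def find_sound_file(filename, folder=None):
    """Check that `filename` exists in `folder` (default: the current
    working directory) and report near-misses such as case or
    extension differences before PsychoPy tries to open it."""
    folder = folder or os.getcwd()
    path = os.path.join(folder, filename)
    if os.path.isfile(path):
        return path
    stem = os.path.splitext(filename)[0].lower()
    similar = [f for f in os.listdir(folder)
               if os.path.splitext(f)[0].lower() == stem]
    raise IOError("%r not found in %r (similar names: %r)"
                  % (filename, folder, similar))
```

Raising your own error with the folder contents in the message is much easier to act on than the downstream TypeError from sndinfo, because it tells you which directory PsychoPy was actually looking in.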
