Sound files in PsychoPy won't load - python

I'm currently working on building an experiment in PsychoPy (v1.82.01 stand-alone). I started on the project several months ago with an older version of PsychoPy.
It worked great and I ran some pilot subjects. We have since adjusted the stimulus sounds, and now it won't run.
It looks like there is an issue with referencing the sound file, but I can’t figure out what’s going on.
I recreated the first part of the experiment with a single file rather than a loop so that it would be easier to debug. The sound file is referenced using:
study_sound = sound.Sound(u'2001-1.ogg', secs=-1)
When I run it, I get this output:
Running: /Users/dkbjornn/Desktop/Test/test.py
2016-04-29 14:05:43.164 python[65267:66229207] ApplePersistenceIgnoreState: Existing state will not be touched. New state will be written to /var/folders/9f/3kr6zwgd7rz95bcsfw41ynw40000gp/T/org.psychopy.PsychoPy2.savedState
0.3022 WARNING Movie2 stim could not be imported and won't be available
sndinfo: failed to open the file.
saved data to u'/Users/dkbjornn/Desktop/Test/data/99_test_2016_Apr_29_1405_1.csv'
Traceback (most recent call last):
  File "/Users/dkbjornn/Desktop/Test/test.py", line 84, in <module>
    study_sound = sound.Sound(u'2001-1.ogg', secs=-1)
  File "/Applications/PsychoPy2.app/Contents/Resources/lib/python2.7/psychopy/sound.py", line 380, in __init__
    self.setSound(value=value, secs=secs, octave=octave, hamming=hamming)
  File "/Applications/PsychoPy2.app/Contents/Resources/lib/python2.7/psychopy/sound.py", line 148, in setSound
    self._setSndFromFile(value)
  File "/Applications/PsychoPy2.app/Contents/Resources/lib/python2.7/psychopy/sound.py", line 472, in _setSndFromFile
    start=self.startTime, stop=self.stopTime)
  File "/Applications/PsychoPy2.app/Contents/Resources/lib/python2.7/pyolib/tables.py", line 1420, in setSound
    _size, _dur, _snd_sr, _snd_chnls, _format, _type = sndinfo(path)
TypeError: 'NoneType' object is not iterable

The important thing here is the sndinfo: failed to open the file. message. Most likely, the file cannot be found on disk. Check the following:
Is the file 2001-1.ogg in the same folder as your experiment, not in a subfolder? Or have you accidentally changed your path, e.g. using os.chdir?
Is it actually called 2001-1.ogg? Any differences in uppercase/lowercase, spaces, etc. all count.
Alternatively, there may be something about the particular way the .ogg was saved that causes the problem, even though the Sound class can read a wide range of sound codecs. Try exporting the sound file in other formats, e.g. .wav or .mp3.

Related

Keeping a numpy.load() result in memory regardless of rerunning the code

I'm loading a huge file to process it:
file = numpy.load('path.txt')
... every time I change a single line of code I need to reload the file, which takes time. Is there a way to keep the file loaded in memory regardless of re-running the code? And how might loading the file with a different library, such as pandas, differ?

KeyError: 'max_overlaps' on tensorflow ver Faster R-CNN

I tried to run a Faster R-CNN based on TensorFlow, cloned from https://github.com/smallcorgi/Faster-RCNN_TF
I built a dataset myself and rewrote the data API to make the dataset fit. The images in the dataset are all composed of meaningless background and text.
I have a .txt file that records the text location in each image, such as
ID_card/train/3.jpg 1 209 39 261 89
My goal is to find text in a new image.
But when I run
python ./tools/train_net.py --device cpu --device_id 1 --solver VGG_CNN_M_1024 --weight ./data/pretrain_model/VGG_imagenet.npy --imdb ID_card_train --network IDcard_train
I get KeyError: 'max_overlaps'. Here is the terminal record and error traceback.
Traceback (most recent call last):
  File "./tools/train_net.py", line 97, in <module>
    max_iters=args.max_iters)
  File "/Users/jay_fu/tasks/catpatch/ClickCatpatch/tools/../lib/fast_rcnn/train.py", line 259, in train_net
    roidb = filter_roidb(roidb)
  File "/Users/jay_fu/tasks/catpatch/ClickCatpatch/tools/../lib/fast_rcnn/train.py", line 250, in filter_roidb
    filtered_roidb = [entry for entry in roidb if is_valid(entry)]
  File "/Users/jay_fu/tasks/catpatch/ClickCatpatch/tools/../lib/fast_rcnn/train.py", line 239, in is_valid
    overlaps = entry['max_overlaps']
KeyError: 'max_overlaps'
I googled this and tried deleting the /cache folder, but it didn't work. The next time I ran the code, the folder and the .pkl file were created again, and the same error came out.
Some other answers said to delete another folder, lib/datasets/VOCdevkit-matlab-wrapper; however, smallcorgi/Faster-RCNN_TF does not contain this folder, so that leads nowhere.
I wonder what is wrong with my code and what could cause this error. I have no idea what to do.
Can anyone give me some help, a solution, or even a piece of an idea?
edit:
I ran the demo following #VegardKT's suggestion, and it works fine: the terminal reports success and figures 1-5 show up.
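For context, the failing check is easy to reproduce in isolation: filter_roidb keeps only entries for which is_valid passes, and is_valid reads entry['max_overlaps'], so any roidb entry built without that key (e.g. by a custom data API that skips the overlap computation) raises exactly this KeyError. A minimal sketch, with made-up entry dicts and a simplified is_valid that only mirrors the failing line:

```python
def is_valid(entry):
    # Mirrors the failing line in train.py: raises KeyError when the
    # dataset API never stored 'max_overlaps' in the entry.
    overlaps = entry['max_overlaps']
    return len(overlaps) > 0

# Hypothetical roidb: one well-formed entry, one missing the key,
# as a custom data API might accidentally produce.
roidb = [{'max_overlaps': [0.7, 0.9]}, {'boxes': [[1, 2, 3, 4]]}]

missing = [e for e in roidb if 'max_overlaps' not in e]
print('%d of %d entries lack max_overlaps' % (len(missing), len(roidb)))
```

A loop like the last two lines, run over the real roidb just before filter_roidb, would show whether the custom data API is producing entries without the key.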

cannot load mat file into python with scipy.io or hdf5storage

I have tried several ways to load my .mat file into Python; I eventually want the structure in the mat file to end up as a numpy array. I am not sure how best to post this question, because I think I might need to upload my .mat file: there seems to be a problem with the file itself, since the steps I am trying appear to work for everyone else.
First, I tried:
import scipy.io as sio
mat_contents = sio.loadmat('filename.mat')
Which gave the same error message (listed below) as when I installed hdf5storage and h5py. I have MATLAB version 9.3 and Python 3.5.3.
This also gave the same error message as below:
import hdf5storage
mat = hdf5storage.loadmat('filename.mat')
The error from both of those attempts is:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/u1/usr/.conda/envs/mypython/lib/python3.5/site-packages/hdf5storage/__init__.py", line 1801, in loadmat
    **keywords)
  File "/u1/usr/.conda/envs/mypython/lib/python3.5/site-packages/scipy/io/matlab/mio.py", line 135, in loadmat
    MR = mat_reader_factory(file_name, appendmat, **kwargs)
  File "/u1/usr/.conda/envs/mypython/lib/python3.5/site-packages/scipy/io/matlab/mio.py", line 59, in mat_reader_factory
    mjv, mnv = get_matfile_version(byte_stream)
  File "/u1/usr/.conda/envs/mypython/lib/python3.5/site-packages/scipy/io/matlab/miobase.py", line 235, in get_matfile_version
    maj_ind = int(tst_str[2] == b'I'[0])
IndexError: index out of range
My .mat file contains a structure 1x1 which has several fields of different sizes. I am mostly a python person, and am only using matlab to output files which I intend to analyze in python.
#hpaulj thanks, your comment made me reload the file; I think it was somehow corrupted. I cannot trace what happened to it, but the solution to this question was to check the file. The steps listed above in the question are correct. (I'm new to Stack Overflow and I'm pretty sure you cannot accept a comment as an answer, so hopefully you will get credit here because I tagged your name?)

TypeError when using MoviePy

In trying to learn a little about MoviePy, I copied some sample code (which I modified slightly) that cuts a 10 second section out of a movie file, overlays text on it, and writes it as a different file. The code works perfectly...only for certain files. I have two video files that I wanted to use the code on (just for practice). Both are .mov files, both are on the same drive and both of the paths are correct (I have verified them multiple times). The problem is I'm getting a TypeError on one of the files while it works perfectly on the other. Here's the code:
from moviepy.editor import *
x = int(input("When do you want the cut to start? "))
y = int(input("When do you want the cut to end? "))
video = VideoFileClip("D:\Videos\Gatlinburgh Drone River 2.MOV").subclip(x,y)
##video = VideoFileClip("D:\SF_ep\T_R_D.mov").subclip(x,y) #Path is correct
txt_clip = ( TextClip("The Red Dot episode",fontsize=70,color='white')
.set_position('center')
.set_duration(10) )
result = CompositeVideoClip([video, txt_clip])
result.write_videofile("Text on Screen.webm",fps=25)
The above example works perfectly. However, when I comment it out and uncomment the video right below it, I get the following error:
Traceback (most recent call last):
  File "C:\Users\Sam\Python Projects\MoviePy\Example3c.py", line 15, in <module>
    video = VideoFileClip("D:\\Seinfeld_All_Episodes\\The_Red_Dot.mov").subclip(x,y)
  File "C:\Python34\lib\site-packages\moviepy\video\io\VideoFileClip.py", line 82, in __init__
    nbytes = audio_nbytes)
  File "C:\Python34\lib\site-packages\moviepy\audio\io\AudioFileClip.py", line 63, in __init__
    buffersize=buffersize)
  File "C:\Python34\lib\site-packages\moviepy\audio\io\readers.py", line 70, in __init__
    self.buffer_around(1)
  File "C:\Python34\lib\site-packages\moviepy\audio\io\readers.py", line 234, in buffer_around
    self.buffer = self.read_chunk(self.buffersize)
  File "C:\Python34\lib\site-packages\moviepy\audio\io\readers.py", line 123, in read_chunk
    self.nchannels))
TypeError: 'float' object cannot be interpreted as an integer
I'm not changing any code, I'm just pointing to a different file. I've tried the same with different files and gotten the same error. Why would it work on one and not the other? Any thoughts?
A similar question has been asked on Stack Overflow before, but there weren't any solid answers (at least none that applied to my particular situation).
Any help would be great. Thanks!
After searching around a bit more, I found a solution here. Line 122 of readers.py was returning a float instead of an integer because it used a single "/" instead of a double "//". I changed that line and it seems to have solved the problem. Details are at the link.
For the record, I still don't understand why it happened with certain files and not others. Nevertheless, the fix was simple.

pyExcelerator has problems reading some files

I've got a problem using pyExcelerator when reading some XLS files.
There are some Python scripts I wrote that use this library to parse XLS files and populate a database with the info.
The templates for the files these scripts parse may vary, and I sometimes reconfigure the script to handle them. With one of the templates I ran into a problem: pyExcelerator just raises an exception:
Traceback (most recent call last):
  File "/home/* * */parsexls.py", line 64, in handle_label
    parser.parse()
  File "/home/* * */parsers.py", line 335, in parse
    self.contents = pyExcelerator.parse_xls(self.file_record.file, self.encoding)
  File "/usr/local/lib/python2.6/dist-packages/pyExcelerator/ImportXLS.py", line 327, in parse_xls
    ole_streams = CompoundDoc.Reader(filename).STREAMS
  File "/usr/local/lib/python2.6/dist-packages/pyExcelerator/CompoundDoc.py", line 67, in __init__
    self.__build_short_sectors_data()
  File "/usr/local/lib/python2.6/dist-packages/pyExcelerator/CompoundDoc.py", line 256, in __build_short_sectors_data
    dentry_start_sid, stream_size) = self.dir_entry_list[0]
IndexError: list index out of range
Some of the problem XLS files contained empty sheets, and removing those sheets helped, but many of the files can't be handled even without empty sheets. There's nothing extraordinary in these files and they contain no formulas or pictures, just strings, numbers and dates.
As far as I can see, pyExcelerator has been abandoned by its author :(
Any suggestions on fixing this issue are much appreciated.
I'm the author of xlrd. It reads XLS files and is not a fork of anything. I maintain a package called xlwt which writes XLS files and is a fork of pyExcelerator. The parse_xls functionality in pyExcelerator was deprecated to the point of removal from xlwt. Use xlrd instead.
Given the traceback that you reproduced, it looks like the file may be corrupted. What it is doing there happens well before the sheet data is parsed. What software produces these files? Can you open them with Excel or OpenOffice.org's Calc or Gnumeric? xlrd may give you a more meaningful error message. You may like to send me (insert_punctuation('sjmachin', 'lexicon', 'net')) copies of your failing file(s); please include some with and some without empty sheets. By the way, what are you using to remove empty sheets? What error message do you get from pyExcelerator when processing files with empty sheets?
You might wish to give xlrd a try... it started (I believe) as a fork of pyExcelerator, so incorporating requires few code changes, but it is actively maintained:
http://pypi.python.org/pypi/xlrd
Project website
General info, release notes and history from the documentation
