Determine the camera trigger delay of see3cam - python

I have an application where I need to take a picture at a very specific moment, when another program gives the command. As the camera needs to be very light, I bought the see3cam_CU135 from e-con Systems. I was planning to run the camera with Python and have Python wait for the command from the other program. Knowing the delay between triggering and the actual exposure is essential, and so far I haven't been very successful in finding out what it is. Here's my setup to figure out the delay:
I run a separate script that acts as a stopwatch, printing my system clock to the screen:
import time

# stopwatch: print the system clock as fast as possible
while True:
    print(time.time())
    time.sleep(0.001)
Then I run my actual script, which takes a picture of the first script's output.
import cv2
import time

vc = cv2.VideoCapture(1)
vc.set(cv2.CAP_PROP_FRAME_WIDTH, 4208)
vc.set(cv2.CAP_PROP_FRAME_HEIGHT, 3120)
vc.set(cv2.CAP_PROP_EXPOSURE, -2)

if vc.isOpened():  # try to get the first frame
    t1 = time.time()
    while int(time.time()) == int(t1):
        a = 0  # busy-wait until the next full second starts
    rval, frame = vc.read()  # trigger the picture
    print(t1)
    cv2.imwrite("photo.png", frame)
else:
    rval = False
vc.release()
If I start the script at a time of, let's say 1512638235.3549826, the program should stay in the while loop until the next full second starts, 1512638236, and then trigger the picture, right? So the time on the picture minus the full second after t1 should give me the delay.
So far so good, but here's what's weird: yesterday I ran it, and t1 was 1512579170.079795. So the script should wait almost a second, then trigger the picture. However, the picture of the stopwatch showed 1512579170.588795 (half a second before the trigger command would have been sent). Is it possible that vc.read() does not actually trigger a frame, but just reads whatever frame is currently sitting in the camera's buffer and therefore returned an older frame? If so, how can I trigger a frame manually exactly when I want it?
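In case it helps to frame the question: one workaround I'm considering is to keep draining the camera's buffer while waiting, so that the read() issued after the trigger is as fresh as possible (an untested sketch; grab() pulls a frame off the queue without decoding it):
# untested: discard buffered frames while waiting for the trigger second
while int(time.time()) == int(t1):
    vc.grab()                # throw away whatever is sitting in the buffer
rval, frame = vc.read()      # first frame requested after the trigger moment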
A second question I have here is the white balance issue with OpenCV. Apparently it is not (yet?) possible to control white balance manually. I don't really care, as long as it is reproducible. How can I guarantee that auto white balance is off? I need all my pictures taken with exactly the same settings, as I need to be able to compare absolute intensity under different light conditions. I can't have some auto exposure or auto white balance changing the settings all the time.
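For reference, newer OpenCV builds do expose white-balance capture properties, though I can't tell whether my camera/backend honours them (the values here are examples only):
# may silently fail depending on the camera driver and capture backend;
# read the properties back to check whether the settings actually stuck
vc.set(cv2.CAP_PROP_AUTO_WB, 0)            # try to disable auto white balance
vc.set(cv2.CAP_PROP_WB_TEMPERATURE, 4600)  # fix an explicit colour temperature
print(vc.get(cv2.CAP_PROP_AUTO_WB), vc.get(cv2.CAP_PROP_WB_TEMPERATURE))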
Oh, one more comment: I'm not married to Python or to OpenCV; I'm open to doing it completely differently. However, the other program that will ultimately send the command to my script to take a picture has to run under Windows.
I'd be really thankful for some suggestions!

Related

Webots NAO controller sequence not working

I am using the NAO_demo_python controller of the latest version of Webots on Linux (Ubuntu 18) to flash LEDs and open/close the hands.
However, only the last part of the sequence is executed, and not in a single simulation run.
E.g. if I ask it to just open the hand and light the LEDs, after one run the lights will be on, and only after five runs of the main while loop will the hand be open (really slowly). However, if I ask it to light the LEDs, open the hand, then close the hand and turn the LEDs off, the simulation will only close the hand and turn the LEDs off.
The code should be working; it works on another person's computer. In case you want it, here it is (it's really basic, a slight modification of nao-demo-python.py):
def run(self):
    while robot.step(self.timeStep) != -1:
        self.setAllLedsColor(0xff0000)  # red leds on
        self.setHandsAngle(0.96)        # open hand
        # rospy.sleep(5)  # originally I am to use webots and nao with ros
        print("done")                   # to check how many runs are made
        self.setHandsAngle(0.0)         # close hand
        self.setAllLedsColor(0x000000)  # red leds off
Also, something that may be interesting: if I ask it to open and close the hand N times, printing something each time, all the prints appear at once, and the simulation time jumps from 0:00:00 to 0:00:20 after each main loop run (+20 s at each run). In other words, even while the simulation runs, time doesn't flow, it jumps.
I tried updating my drivers and removing all shadows and similar effects from the simulation, as Webots advises. No luck. I couldn't find anything from SoftBank, and I can't find an active forum about NAO and Webots anymore...
I have an i5 9th gen and a GTX 1050 Ti.
Could the problem be that the simulation speed isn't 1.0x, but at most 0.05x? (After removing shadows, light effects, and all objects, and using one thread, etc., as explained here: https://www.cyberbotics.com/doc/guide/speed-performance.)
SUMMARY: Only the last sequence of the controller is executed, and if it's a motion, it takes several main loops before being fully executed. Meanwhile, the time jumps from 0 to +20 s after each main loop run.
Could someone help me make the whole sequence work in simulation, please? :)
all the prints will be printed at once
Only the last sequence of the controller is executed
It sounds like the setHandsAngle function might be asynchronous (doesn't wait for the hand to move to that point before running the next piece of code)? This could be responsible for at least some of the problems you're experiencing. In this case the rospy.sleep(5) should be replaced with robot.step(5000), so the simulator has time to move the hand before sending the next command.
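Applied to the loop from the question, that suggestion would look roughly like this (a sketch; the 5000 ms is just an example duration):
def run(self):
    while robot.step(self.timeStep) != -1:
        self.setAllLedsColor(0xff0000)  # red leds on
        self.setHandsAngle(0.96)        # open hand
        robot.step(5000)                # advance simulated time so the hand can actually move
        self.setHandsAngle(0.0)         # close hand
        self.setAllLedsColor(0x000000)  # leds off
        robot.step(5000)                # give the closing motion time as well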
Thanks to your indications, this works:
self.CloseHand = Motion('../../motions/closeHand.motion')

def startMotion(self, motion):
    if self.currentlyPlaying:  # interrupt current motion
        self.currentlyPlaying.stop()
    motion.play()              # start new motion
    self.currentlyPlaying = motion

self.startMotion(self.CloseHand)
while not self.CloseHand.isOver():
    robot.step(self.timeStep)
And in CloseHand.motion:
#WEBOTS_MOTION,V1.0,LPhalanx1,RphalanxN (until 9)
00:00:000,Pose1,0.00,0.00 etc
where 00:00:000 is the moment of execution and 0.00 the angle of the hand (phalanx by phalanx).
Thank you very much! I couldn't have figured that out without your advice.
The Webots/ROS synchronization is still unresolved, but the issue in my initial question is. Thank you!

How to only log valid key presses in simple Psychopy experiment

I'm new to Python and I'm programming a simple psychology experiment. In a nutshell, I'm presenting participants with a series of randomized images and having them press one key if they detect a face in a given image.
One of my problems is that the program crashes when a participant presses the key too fast; that is, I've noticed that the program logs responses even if the participant presses a key when there is no image present. Each image is only present on the screen for 10 seconds, and participants usually take ~0.5 seconds on average to respond.
Is there a way for me to program the experiment so that PsychoPy will only log key presses ONCE, AFTER the image is presented on screen? I've pasted my code below.
Thanks so much.
StimList = ['Face1.png', 'Face2.png']
StimList.extend(['Noise1.png', 'Noise2.png'])
# randomize lists:
numpy.random.shuffle(StimList)
outstr = ""
for TrialNo in range(len(StimList)):
    # load our image:
    img = visual.ImageStim(
        win=win,
        image=StimList[TrialNo],
    )
    # draw the fixation cross and wait for trial start:
    win.flip()
    time.sleep(1)  # wait 1 second on fixation cross
    # start a trial: loop until a key has been pressed (or trial times out)
    FaceDetected = 0  # same as false
    Responded = 0  # revise
    timer = core.Clock()
    timer.reset()
    while (not Responded) and (timer.getTime() < TimeOut):  # remove not responded
        img.draw()  # outside loop
        win.flip()  # outside loop
        keys = event.getKeys(keyList=['y', 'Y', 'n', 'N'], modifiers=False, timeStamped=timer)
        if keys:
            if (keys[0][0] == 'y') | (keys[0][0] == 'Y'):
                FaceDetected = True
                Responded = True
                RT = keys[0][1]
            elif (keys[0][0] == 'n') | (keys[0][0] == 'N'):
                FaceDetected = False
                Responded = True
                RT = keys[0][1]
    outstr = outstr + str(TrialNo) + ", " + StimList[TrialNo] + ", " + str(FaceDetected) + ", " + str(RT) + "\n"
print(outstr)
# first open the file:
outfile = open('tmpdata.csv', 'w')
outfile.write(outstr)
outfile.close()
win.close()
There are a bunch of Python issues with the code above, which I suspect are due to negative transfer from another programming language. For example, in Python you should use or in logical comparisons, not |, which in Python is the operator for bitwise 'OR', a different beast. Also, you might want to try out the more Pythonic for TrialNo, stimulus in enumerate(StimList): in place of for TrialNo in range(len(StimList)):, and avoid standard Python functions like time.sleep() when you could have more precise control using PsychoPy's timing classes or screen refresh counting.
But in PsychoPy API-specific terms relevant to your main question, you need to call event.clearEvents() prior to first drawing your stimulus (e.g. when you reset the trial timer). That will clear the keyboard buffer of any previously-pressed keys.
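In terms of your code, that is one extra line at the start of each trial:
# at the start of each trial, before the stimulus is first drawn:
timer.reset()
event.clearEvents()  # flush any keys pressed during the fixation period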
In further PsychoPy-specific hints, avoid creating objects repeatedly. e.g. the timer only needs to be created once, at the start of the script. Then you just reset it once per trial. At the moment, the reset is actually redundant, as the timer is zeroed when it is created. Timers are simple and multiple creation doesn't really impact performance, but stimuli are more complicated and you should definitely avoid creating them over and over again. e.g. here just create your image stimulus once. Then on each trial, just update its image property. That itself takes time to do, as the file needs to be read from disk. So ideally you would be doing that during the fixation stimulus period, or between trials as it is currently.
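Sketched out with your variable names:
img = visual.ImageStim(win=win)  # create the stimulus once, before the trial loop
for TrialNo, stimulus in enumerate(StimList):
    img.image = stimulus         # the image file is read from disk here
    # ... fixation cross, then draw img and collect responses as before ...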
Your code really shows a few issues rather than just the one you raised in the question. Hence, you might find the forum at https://discourse.psychopy.org more useful than the single question-and-answer format here at SO.

Seeking a simple way to display an image on an RPi and continue python execution

I'm transferring an application to an RPi, and I need a way to display full-screen images using Python 3 while code continues to execute. I would like to avoid delving into complicated GUI modules like Tkinter and Pygame. I just want images to fill the screen and stay there until the code replaces them or tells them to go away. If Tkinter or Pygame can do this, that would be fine, but it looks to me like they both enter loops that eventually require keyboard input. My application involves monitoring sensors and external inputs, but there will be no keyboard attached. I've tried the following:
feh activated with subprocess.call (this displays the image, but the code stops executing until the image is cleared by a keystroke)
wand.display (this works but only shows a smallish window, not full screen)
fbi (couldn't get it to display an image)
xdg-open (works but opens the image in the "Image Viewer" app in a small window; no option for full screen without a mouse click)
I have not tried OpenCV. Seems like that might work, but that's a lot of infrastructure to bring in for this simple application.
For the record I've been to google and have put many hours into this. This request is a last resort.
If you want some pseudocode:
displayImage("/image_folder/image1.jpg" fullscreen = True)
time.sleep(1)
clearImage()
displayImage("/image_folder/image2.jpg" fullscreen = True)
You don't show how you tried with feh and a subprocess, but maybe try starting it in the background so it doesn't block your main thread:
subprocess.call("feh -F yourImage.jpg &", shell=True)
Note that background processes, i.e. those started with &, are using a feature of the shell, so I have set shell=True.
Then, before you display the next image, kill the previous instance:
subprocess.call("pkill feh")
Alternatively, if you know the names of all the images you plan to display in advance, you could start feh in "Slideshow mode" (by passing in all the image names on startup), and then deliver signal SIGUSR1 each time you want to advance the image:
os.kill(feh_pid, signal.SIGUSR1)  # feh_pid is the PID of the running feh process
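Putting the slideshow approach together, it might look like this (an untested sketch; in slideshow mode feh switches to the next image on SIGUSR1):
import os
import signal
import subprocess

# start feh once, fullscreen (-F), with every image passed on the command line
images = ["/image_folder/image1.jpg", "/image_folder/image2.jpg"]
feh = subprocess.Popen(["feh", "-F"] + images)

# ... whenever the next image should appear:
os.kill(feh.pid, signal.SIGUSR1)

# ... and when you are finished:
feh.terminate()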
If the above doesn't work, please click edit under your original question and add in the output from the following commands so we can go down the framebuffer route:
fbset -fb /dev/fb0
tvservice -d edid
edidparser edid

How can I use OpenCV imshow to display the most recent image in a loop without keyboard input

I am running an image processing loop with OpenCV in Python and I would like to display the most recent image mask in my imshow window. So whenever the loop calculates a new mask, it updates the imshow window (at about 6Hz). However, I can't get imshow to return control without waiting for a keyboard interrupt. Any suggestions? Is there a better library to use for this?
Without code this is a guess. BUT!
I imagine you are currently using cv2.waitKey to wait until there is a keyboard input:
k = cv2.waitKey(33)
if k == 27:  # Esc key to stop
    break
What you need to do is use cv2.waitKey to wait a set amount of time, say 1 ms.
# Wait 1 millisecond. Specifying 0 means wait forever, so we don't want that
cv2.waitKey(1)
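In the context of your processing loop, that looks something like this (compute_mask is a stand-in for your own code):
import cv2

while True:
    mask = compute_mask()     # placeholder for your ~6 Hz mask computation
    cv2.imshow('mask', mask)  # update the window with the newest mask
    if cv2.waitKey(1) == 27:  # 1 ms pause pumps the GUI event loop; Esc exits
        break
cv2.destroyAllWindows()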
What you could do is log the image to a folder in the loop's directory if it's not totally essential for the image to be displayed in real time.
cv2.imwrite('image_logs/image_' + str(image_count) + '.jpeg', image)
For instance, keeping track of images can easily be done with a counter.
You could also use waitKey, which will delay for the number of milliseconds in the parentheses. Sometimes I have problems with this though (I use an RPi, quite slow!), so I tend to go for the logging option.
cv2.waitKey(50) #wait for 50ms

Python PIL-ImageGrab inaccurate when capturing game pixels

I am trying to capture the pixels of a game to script a bot. I have a simple function:
import win32gui
from PIL import ImageGrab

def printPixel():
    while True:
        flags, hcursor, (x, y) = win32gui.GetCursorInfo()
        print(x, y, ':', ImageGrab.grab().getpixel((x, y)))
This prints the current x,y coords and the RGB value of that pixel. This works as expected on my desktop hovering over various icons and such, but the same function does not work in-game. Any thoughts?
edit: When I save the image to a file and perform this same operation on the saved image, it works perfectly in-game. However, it is way slower. I'd like to operate on the image in memory, and not from a file.
Video games often deal with the graphics system directly for performance reasons, so some of the typical Windows APIs might not work on them. Try taking a screenshot by pressing the Print Screen button. If that captures your screen, then you can take a screenshot in Python and check the image you have captured, taking the cursor position into account.
To take a screenshot on Windows, you can check out this answer to the question Fastest way to take a screenshot with python on windows; it uses the win32gui library, as you are.
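If ImageGrab remains too slow, one alternative is to read a single pixel straight from the screen device context with pywin32 (an untested sketch; games using exclusive fullscreen may still bypass it):
import win32gui

def printPixelFast():
    hdc = win32gui.GetDC(0)                   # device context of the whole screen
    while True:
        flags, hcursor, (x, y) = win32gui.GetCursorInfo()
        color = win32gui.GetPixel(hdc, x, y)  # COLORREF, laid out as 0x00BBGGRR
        r = color & 0xFF
        g = (color >> 8) & 0xFF
        b = (color >> 16) & 0xFF
        print(x, y, ':', (r, g, b))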
