Pyautogui - Problems with Changing Screenshots - python

OK, I've gotten into programming with Python and so far have had a fair amount of success. I've written a program that uses pyautogui to automate a task I need to do on a monthly basis.
I took screenshots of where I needed the mouse to click, and when all was done I had a working program that searched the screen for the button to click, moved the mouse to that location, and printed out the report I needed. So all I needed to do was plug it into the task scheduler and it would do the work for me!
Several days afterwards, I decided to go ahead and schedule it. I ran the program again, and it crashed! Long story short, the screenshots didn't match. I took a screenshot again, zoomed both images to 800% in Paint, and checked the pixel next to the "I" in the two images, and sure enough the RGB values are different.
I tried several other places too, and while they looked the same, the RGB values differ by maybe one or two points! I'm curious as to why this is happening.

Use the confidence parameter; its default value is 0.999. The reason this works is that pyautogui actually uses pyscreeze under the hood, which accepts a confidence value that most likely represents a percentage from 0% to 100% for a similarity match. Looking through the code with my amateur eyes reveals that OpenCV and NumPy are required for confidence to work; otherwise a different function is used that doesn't support the confidence value.
For example, calling pyautogui.locateCenterOnScreen('foo.png', confidence=0.5) sets your confidence to 0.5, which means a 50% match is enough.
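A minimal sketch of that idea (the filename is a placeholder; the opencv-python package must be installed for confidence to work, and depending on the pyautogui version a failed search returns None or raises ImageNotFoundException):

import pyautogui

# 'print_report.png' is a placeholder screenshot of the button to find.
# confidence=0.9 accepts a 90% match, which tolerates the one-or-two-point
# RGB drift described above; requires opencv-python and numpy.
try:
    location = pyautogui.locateCenterOnScreen('print_report.png', confidence=0.9)
except pyautogui.ImageNotFoundException:
    location = None
if location:
    pyautogui.click(location)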

Related

Simple animation in Python using wxPython objects

I'm writing a gravity simulator in Python 2.7, and at the moment I have finished the mathematical part. I'm trying to find a way to display the results while the simulation is running: it should consist of some colored circles representing the various bodies and, optionally, some curved lines representing orbits that can be shown or hidden while the simulation is running.
I have pictured a way to obtain this result, but I can't seem to find a way to even start.
My idea is to use wxPython. The window should be divided into four sectors (2x2), the first three contain the simulation viewed in the XY, XZ and YZ planes, while the last contains the controls (start/stop simulation, show/hide orbits, ...).
The controls should not be a problem, I just need a way to display the animation. So how can I display moving circles and curved lines using wxPython objects? Which objects should I use? I don't need much more than a couple names, the rest should follow easily.
I know that an animation purely with wxPython will probably require some multithreading, I'm already prepared for that. I also want to stress that I need the animation to be shown while the simulation is running, not after, because the simulation has no definite end at the moment: I don't know when to stop it if I don't see the results first.
If it's somehow useful, I'm using Ubuntu Linux 17.10.
Edit: Since I was asked to choose one approach, I discarded Matplotlib because it requires two different windows. Hope this helps.
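(Not an answer from the thread, but a minimal sketch of the usual wxPython pattern for this kind of animation: draw in an EVT_PAINT handler with a wx.PaintDC and drive redraws with a wx.Timer. All names and geometry here are illustrative; one panel like this would go in each of the three simulation sectors.)

import wx

class SimPanel(wx.Panel):
    """One viewing plane; repaints the current body positions on each tick."""
    def __init__(self, parent):
        super(SimPanel, self).__init__(parent)
        self.bodies = [(100, 100, 10), (200, 150, 6)]  # placeholder (x, y, radius)
        self.Bind(wx.EVT_PAINT, self.on_paint)
        self.timer = wx.Timer(self)
        self.Bind(wx.EVT_TIMER, lambda evt: self.Refresh(), self.timer)
        self.timer.Start(33)  # ~30 fps; the simulation thread updates self.bodies

    def on_paint(self, event):
        dc = wx.PaintDC(self)
        dc.SetBrush(wx.Brush('blue'))
        for x, y, r in self.bodies:
            dc.DrawCircle(x, y, r)

app = wx.App(False)
frame = wx.Frame(None, title='Gravity simulator')
SimPanel(frame)
frame.Show()
app.MainLoop()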

Adding visual time markers to the player bar of a video

I'm trying to program an experiment in which I want to find out how humans cognitively segment movement streams. For example, if the movement stream is a person climbing a flight of stairs, each step might be a single segment.
The study is basically a replication of this one, but with another set of stimuli: http://dl.acm.org/citation.cfm?doid=2010325.2010326
Each trial should be structured as follows:
1. Present a video of a motion stream. Display a bar beneath the video with a marker that moves in sync with the current time of the video (very similar to the GUI of a video player).
2. Present that video again, but now let the participant add stationary markers to the bar beneath the video by pressing a key. Each marker should be placed at the point on the bar that corresponds to the time the button was pressed (e.g. if the video is 100 seconds long and the button was pressed 10 seconds in, the marker should be placed at the 10% mark of the bar).
My instructor suggested programming the whole thing using PsychoPy. PsychoPy currently only supports Python 2.7.
I've looked into the program and it looks promising. One can display a video easily and the rating scale class is similar to the bar we want to implement. However, several features are missing, namely:
1. One can only set a single marker; subjects should be able to set multiple.
2. As mentioned in point (1) of the trial structure, we want a marker that moves in sync with the video.
3. When a key press occurs, a marker should be placed at the point on the bar that corresponds to the current time point in the video.
Hence my questions: Do you have any tips for implementing the features described above using the PsychoPy module?
I don't know how much this gets into recommendation-question territory, but in case you know of a module for writing experiment GUIs that has widgets with the features we want for this experiment, I would be curious to hear about it.
PsychoPy is a good choice for this. The rating scale however (as you note) is probably not the right tool for creating the markers. You can make simple polygon shapes though, which could serve as your multiple markers as well as the continuous time indicator.
e.g. you could make a polygon stimulus with three vertices (to make a triangle indicator) and set its location to be something like this (assuming you are using normalised coordinates):
$[((t/movie_duration) * 2 - 1) , -0.9]
t is a Builder variable that represents the time elapsed in the current trial, in seconds. The centre of the screen is at coordinates [0, 0], so the code above makes the pointer move smoothly from the left-hand edge of the screen to the right, close to the bottom edge, reaching the right-hand edge as the movie ends. Set the polygon's position field to update every frame so that the animation is continuous.
movie_duration is a placeholder variable for the duration of the movie in seconds. You could specify this in your conditions file, or you can query the movie component to get its duration I think, something like:
$[((t/movie_stim_name.duration()) * 2 - 1) , -0.9]
You could leave markers on the screen in response to keypresses in a similar way, but this would require a little Python code in a code component.
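(As a rough sketch of that code-component idea: it assumes normalised units, the movie_duration variable from above, a markers = [] list created in the Begin Routine tab, and the space bar as the segmentation key; these are illustrative choices, not fixed PsychoPy names.)

# 'Each Frame' tab of a Builder code component (sketch)
from psychopy import event, visual

for _ in event.getKeys(keyList=['space']):
    frac = t / movie_duration   # t is Builder's trial clock, in seconds
    markers.append(visual.ShapeStim(
        win,                                                   # created by Builder
        vertices=[(-0.01, 0.02), (0.01, 0.02), (0.0, -0.02)],  # small triangle
        fillColor='red',
        pos=((frac * 2 - 1), -0.9),                            # same mapping as above
    ))

for marker in markers:
    marker.draw()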

Code changes - Python - Piphone - Raspberry Pi

Right now I'm working hard to finish a project named Piphone; I've been following the Adafruit tutorial and I've also bought all the items they suggested.
The problem is that the code was written for a 2.8" screen, while I have a 3.5" screen.
I've succeeded in making some changes, like replacing the 320x240 resolution with 480x320.
That's still not enough, and I don't know what to do next; please share any suggestions.
Here are the screenshots:
Before
After
https://github.com/climberhunt/Piphone/archive/master.zip
From there you can download the code made by Adafruit; you can find the code in piphone.py.
The code in piphone.py appears to use the pygame module for the graphics. The problem is all the hardcoded coordinates and sizes for things like the buttons. To fix this, the values must be computed at run-time based on the display resolution. Line 255 sets the display mode:
screen = pygame.display.set_mode(modes[0], FULLSCREEN, 16)
After doing that, you can get a video display information object from pygame.display.Info() and obtain the width and height of the current video mode, then use those values to scale and position the buttons.
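For instance, a rough sketch of the idea (the 320x240 base resolution and the button rectangle are assumptions standing in for the original hardcoded values):

import pygame
from pygame.locals import FULLSCREEN

pygame.init()
modes = pygame.display.list_modes(16)
screen = pygame.display.set_mode(modes[0], FULLSCREEN, 16)

info = pygame.display.Info()        # current video mode
scale_x = info.current_w / 320.0    # the original layout assumed 320x240
scale_y = info.current_h / 240.0

def scale_rect(x, y, w, h):
    # Map a rect hardcoded for 320x240 onto the actual display resolution.
    return (int(x * scale_x), int(y * scale_y),
            int(w * scale_x), int(h * scale_y))

button_rect = scale_rect(0, 0, 60, 60)  # placeholder button geometry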
You may also need to create different sets of image files for the various sizes of display you want the program to support.

How can I track extremely slow objects

Using OpenCV and Python 2.7, I have written a script that detects and marks movement in a stream from a webcam. In order to detect movement in the image, I use the RunningAvg function in OpenCV like so:
cv.RunningAvg(img, running_avg, 0.500, None)
cv.AbsDiff(img, running_avg, difference)
The overall script works great, but I'm having a difficult time fine-tuning it to pick up subtle motions (breathing, for instance). I want to be able to target slow movements, breathing specifically, without knowing things like the color or size of targets ahead of time. I'm wondering if there is another method that is better suited to picking up subtle movements.
I think you should probably change the running-average parameter way down, to something like 0.01, because 0.5 means the running average is half of the last frame.
This assumes that breathing is the only motion in the frame. If there are larger motions, or the camera is moving, you are going to need a more adaptive baseline.
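(A sketch of the same idea in the newer cv2 API, where cv2.accumulateWeighted plays the role of cv.RunningAvg; the 0.01 weight makes the baseline adapt slowly so that subtle motion stands out in the difference image.)

import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
running_avg = np.float32(frame)     # float accumulator for the running average

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # alpha=0.01: each new frame contributes only 1% to the baseline
    cv2.accumulateWeighted(frame, running_avg, 0.01)
    difference = cv2.absdiff(frame, cv2.convertScaleAbs(running_avg))
    cv2.imshow('difference', difference)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break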

Python (Jython) Playing notes from pixels in picture

This is from a class assignment:
This program is about listening to colors. We will treat pictures as piano scores.
Write a function called listenToPicture that takes one picture as an argument. It first shows the picture. Next, it will loop through every 4th pixel in every 4th row and do the following. It will compute the total of the red, green and blue levels of the pixel, divide that by 9, then add the result to 24. That number will be the note number played by playNote.
That means that the darker the pixel, the lower the note; the lighter the pixel, the higher the note. It will play that note at full volume (127) for a tenth of a second (100 milliseconds). Every time it moves to a new row, it prints out the row number (y value) on the console.
Your main function will ask the user to select a file with a picture. It will print the number of notes to be played (which is the number of pixels in the picture divided by 16; why?). It will then call the listenToPicture function.
OK, I edited in what I have so far, and the only thing I haven't figured out (I believe) is how to print the number of notes in the main function. By the way, thanks to everyone who helped. You guys are amazing. Is there a place to donate to this site?
def main():
    pic = makePicture(pickAFile())
    show(pic)
    listenToPicture(pic)

def listenToPicture(pic):
    w = getWidth(pic)
    h = getHeight(pic)
    for y in range(0, h, 4):          # every 4th row
        printNow(str(y))              # print the row number on the console
        for x in range(0, w, 4):      # every 4th pixel in the row
            px = getPixel(pic, x, y)
            r = getRed(px)
            g = getGreen(px)
            b = getBlue(px)
            tot = ((r + g + b) / 9) + 24   # darker pixel -> lower note
            playNote(tot, 100, 127)        # note number, 100 ms, full volume
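(For what it's worth, a minimal sketch of that missing piece, using the same JES built-ins as the code above; the division by 16 follows because only every 4th pixel of every 4th row gets played.)

def main():
    pic = makePicture(pickAFile())
    show(pic)
    # every 4th pixel of every 4th row -> 1/16 of all pixels are played
    printNow(str((getWidth(pic) * getHeight(pic)) / 16))
    listenToPicture(pic)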
Robbie is right for the width/height for loops.
The loop you are using to get the pixels and play the notes looks as if it is getting ALL the pixels and playing them all every time you get a unique x and y. What you should be doing is getting the pixel at (x, y), then pulling out the RGB values and calling playNote on that. You really shouldn't even need the third for loop. You're not too far off. Try writing the problem out in logical steps in plain English; I find that helps a ton before I start coding.
Good Luck.
You asked about similar things before. Well, since you didn't put any code in about actually retrieving the pixel value, I'll assume that you still aren't able to do that. I know this is going way beyond your question, but last time you were pretty vague about your question and indicated that you needed more help than just what you had asked. If any of this is not necessary then just ignore it. I'm just trying to offer some advice and you can take it or leave it.
In case you haven't figured out how to read a pixel, I recommend using PIL. It has functions for opening images documented here. Then you can access a pixel in the image by its x and y value using getpixel which is documented on the same page.
For playing the note I would recommend looking into the PyAudio module and just making your own sinusoids of various frequencies (depending on the magnitude of the pixel) that you write to an open audio stream. There might be better packages for this part, but this is what I have used in my small adventures in Python audio.
For the audio stuff, I would try just outputting a sound at a fixed frequency before trying to actually emit a varying frequency.
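(A minimal sketch of that fixed-frequency test with PyAudio; all parameters here are illustrative.)

import math
import struct
import pyaudio

RATE = 44100       # samples per second
DURATION = 0.1     # 100 ms, matching the assignment's note length
FREQ = 440.0       # fixed test tone; later, derive this from the pixel value

samples = (math.sin(2 * math.pi * FREQ * i / RATE)
           for i in range(int(RATE * DURATION)))
frames = b''.join(struct.pack('<h', int(s * 32767)) for s in samples)

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE, output=True)
stream.write(frames)
stream.stop_stream()
stream.close()
pa.terminate()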
Edit:
Your loops look better now so I took out my stuff about your loops.
