I just started using PsychoPy in order to create my first adaptive staircase experiment.
I tried to set up the experiment by using the Builder interface. The loop type I'm using is the staircase, not the interleaved staircase.
In the experiment, I would like to change the contrast of the image according to the participant's response.
So far, I have designed the experiment to the point where the start stimulus is presented when the program runs, and the participant can respond. The problem is that my stimulus does not change at all after a participant responds. I've tried many things to fix this, from inserting every possible stimulus manually to coding it according to Yentl de Kloe's tutorial, but nothing works: the stimulus remains unchanged, so the experiment runs forever unless I cancel it manually.
Can anyone give me a simple (understandable for a beginner) but detailed explanation of how to solve this problem within the PsychoPy Builder?
Thank you in advance!
[Screenshots: Experimental Structure, Staircase Loop]
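For what it's worth, in Builder the staircase loop exposes the current intensity as a variable (commonly `level`), so putting `$level` in the image's Contrast field and setting that field to "set every repeat" is usually all that's needed. As a stand-alone illustration of what the loop is doing under the hood, here is a minimal pure-Python simulation of a simple 1-up/1-down rule (no PsychoPy required; the function name and step size are hypothetical, not PsychoPy API):

```python
# Hypothetical stand-alone simulation of what a Builder staircase loop does:
# a simple 1-up / 1-down rule that adjusts contrast after each response.
# In Builder itself you would put $level in the image's "Contrast" field
# and set that field to "set every repeat".

def run_staircase(responses, start=0.5, step=0.05):
    """Return the contrast level used on each simulated trial."""
    level = start
    levels = []
    for correct in responses:
        levels.append(round(level, 3))
        if correct:
            level = max(0.0, level - step)  # correct -> harder (lower contrast)
        else:
            level = min(1.0, level + step)  # wrong -> easier (higher contrast)
    return levels

print(run_staircase([True, True, False, True]))
```

If the contrast never changes in your actual experiment, the usual culprit is that the Contrast field is set to "constant" instead of "set every repeat", or that the stimulus sits outside the staircase loop.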
I am developing a trading robot in Python 3.8, and I need ideas for monitoring multiple open orders simultaneously.
The situation is as follows: with a single asset, the robot can monitor conditions continuously and easily evaluate the indicators to place the sell order (limit or market).
But with 3, 4, 5 or more assets, things get complicated, because the robot monitors one asset, then moves on to the next, and so on. This means that while asset #2 (for example) is being monitored, asset #5 (which is not) may suffer a sudden sharp fluctuation that loses you money.
My question is: Is there a way to keep an eye on all 5 assets at the same time?
After investigating this problem thoroughly, I found a way to solve it, both in theory and in practice: multiprocessing in Python.
The technique consists of spawning several worker processes, each with its own memory space, so that the same task can run many times in parallel across the machine's CPU cores.
Graphically, I can explain it with the following images. A standard Python script runs sequentially, as we see in this image:
The consequence is that while the monitoring loop is calculating the indicators for asset 1, asset 130 (for example) is unsupervised and could generate considerable losses.
But if we spread the work across multiple processes or cores, we can run the same task for several assets at the same time, as I show in the following image:
In this link you can see the results of applying multithreading and multiprocessing (pay close attention to the timings): http://pythondiario.com/2018/07/multihilo-y-multiprocesamiento.html
I'll also leave the link to the library documentation: https://docs.python.org/3/library/multiprocessing.html
More information and more detailed examples of multiprocessing can be seen here: https://www.genbeta.com/desarrollo/multiprocesamiento-en-python-benchmarking
It only remains to develop the code and put it to the test.
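One practical note: order monitoring is mostly I/O-bound (waiting on exchange API responses), so threads already let all assets be watched concurrently, and the same pattern swaps cleanly to `ProcessPoolExecutor` if the indicator math turns out to be CPU-heavy. A minimal sketch, where `check_asset` is a hypothetical stand-in for your real fetch-and-evaluate routine:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch: check_asset stands in for the real routine that
# fetches prices and evaluates indicators for one asset. Because that
# work is mostly I/O-bound, threads are enough to watch every asset
# concurrently; swap in ProcessPoolExecutor for CPU-heavy indicator math.

def check_asset(symbol):
    # ... fetch price, compute indicators, maybe place an order ...
    return symbol, "ok"

assets = ["BTC", "ETH", "ADA", "XRP", "SOL"]

with ThreadPoolExecutor(max_workers=len(assets)) as pool:
    results = dict(pool.map(check_asset, assets))

print(results)
```

In a real robot you would run this inside a loop (or keep one long-lived worker per asset) so that no asset goes unsupervised while another is being evaluated.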
I have a number of .mp3 files which all start with a short voice introduction followed by piano music. I would like to remove the voice part and just be left with the piano part, preferably using a Python script. The voice part is of variable length, i.e., I cannot use ffmpeg to remove a fixed number of seconds from the start of each file.
Is there a way of detecting the start of the piano part, and thus knowing how many seconds to remove, using ffmpeg or even Python itself?
Thank you
This is a non-trivial problem if you want a good outcome.
Quick and dirty solutions would involve inferred parameters like:
"there's usually 15 seconds of no or low-db audio between the speaker and the piano"
"there's usually not 15 seconds of no or low-db audio in the middle of the piano piece"
and then use those parameters to try to get something "good enough" using audio analysis libraries.
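The low-dB-gap heuristic above can be sketched with plain NumPy: compute short-window RMS energy, and return the first window where the audio comes back up after a sufficiently long quiet stretch. All thresholds and names here are hypothetical illustrations; a real version would load the mp3 via something like pydub or librosa first:

```python
import numpy as np

def find_music_start(samples, rate, win=0.5, silence_db=-40.0, gap_s=5.0):
    """Return the time (s) where audio resumes after the first quiet gap
    of at least gap_s seconds, or None if no such gap is found."""
    n = int(win * rate)
    frames = samples[: len(samples) // n * n].reshape(-1, n)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    db = 20 * np.log10(np.maximum(rms, 1e-10))
    quiet = db < silence_db
    need = int(gap_s / win)
    run = 0
    for i, q in enumerate(quiet):
        if q:
            run += 1
        elif run >= need:
            return i * win  # first loud window after a long-enough gap
        else:
            run = 0
    return None

# Synthetic demo: 5 s "voice", 6 s silence, 5 s "piano", 1 kHz sample rate
rate = 1000
t = lambda s: np.arange(int(s * rate)) / rate
voice = 0.5 * np.sin(2 * np.pi * 200 * t(5))
gap = np.zeros(6 * rate)
piano = 0.5 * np.sin(2 * np.pi * 440 * t(5))
audio = np.concatenate([voice, gap, piano])
print(find_music_start(audio, rate))  # -> 11.0
```

The returned timestamp could then be fed to ffmpeg's `-ss` option to trim the file. As noted below, real recordings will break this heuristic whenever the piece itself contains long pauses.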
I suspect you'll be disappointed with that approach given that I can think of many piano pieces with long pauses and this reads like a classic ML problem.
The best solution here is to use ML with a classification model and a large data set. Here's a walk-through that might help you get started. However, this isn't going to be a few minutes of coding. This is a typical ML task that will involve collecting and tagging lots of data (or having access to pre-tagged data), building a ML pipeline, training a neural net, and so forth.
Here's another link that may be helpful. He's using a pretrained model to reduce the amount of data required to get started, but you're still going to put in quite a bit of work to get this going.
I want to gather numbers that are being output from a specific window in real time as data points.
I have a piece of equipment with an internal pressure level I would like to monitor. The only output the software gives me is a single float from the last ~second, shown in a box within the software. I've asked the manufacturers if there is any way of accessing this output internally, and they basically told me there is none.
Individual measurements don't mean much to me; I'd like to see the change in pressure across time, and watching this single value all day isn't very practical. So I want to make something that can read a specific line of text, either by recognizing the words or by exact coordinates on the screen (either works), say 'Output PSI: ##.###', and capture the ##.### in Python as a data point every time the number changes.
Are there modules that anyone has experience with that might be of use here?
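Whatever module ends up grabbing the screen text (pytesseract over a screenshot of the region is a common pairing), the parsing side is just a regex. A hypothetical sketch of that step, which also skips unchanged readings so only actual changes are logged:

```python
import re

# Hypothetical parsing step: assumes some screen-capture + OCR routine
# (e.g. pytesseract on a screenshot region) has already produced the text.

PSI_RE = re.compile(r"Output PSI:\s*([0-9]+\.[0-9]+)")

def extract_psi(text):
    """Pull the pressure reading out of captured text, or None if absent."""
    m = PSI_RE.search(text)
    return float(m.group(1)) if m else None

def log_changes(readings):
    """Keep only readings that differ from the previous one."""
    logged, last = [], None
    for text in readings:
        value = extract_psi(text)
        if value is not None and value != last:
            logged.append(value)
            last = value
    return logged

print(log_changes(["Output PSI: 14.696", "Output PSI: 14.696", "Output PSI: 15.012"]))
```

In a real monitor you would call the capture + `extract_psi` pair on a timer and append each change, with a timestamp, to a CSV.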
I am automating a computer game using Sikuli as a hobby project, hoping to get good enough to write scripts that help me at my job. In a certain small region (20x20 pixels), one of 15 characters will appear. Right now I have these 15 images defined as variables, and in an if/elif chain I call Region.exists() on each. If one of my images is present in the region, I assign the appropriate value to a variable.
I am doing this for two areas on the screen and then based on the combination of characters the script clicks appropriately.
The problem right now is that running the 15 if statements takes approximately 10 seconds. I was hoping to get this recognition closer to 1 second.
These are just text characters but the OCR feature was not reading them reliably and I wanted close to 100% accuracy.
Is this an appropriate way to do OCR? Is there a better way you guys can recommend? I haven't done much coding in the last 3 years so I am wondering if OCR has improved and if Sikuli is still even a relevant program. Seeing as this is just a hobby project I am hoping to stick to free solutions.
Sikuli operates by scanning a screen, or part of a screen, and attempting to match a set pattern. Naturally, the smaller the pattern, the more time matching it takes. There are a few ways to improve the detection time:
Region and Pattern manipulation (bound region size)
Functions settings (reduce minimum wait time)
Configuration (amend scan rate)
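To make the second point concrete: most of those 10 seconds are likely `exists()` waiting out its default auto-wait (about 3 s) on every miss. A loop over a pattern-to-value table with a 0-second timeout fails fast on misses. The sketch below stubs the Region object so the logic can run anywhere; in a real script, `region.exists(image, 0)` is the Sikuli call and the image names are hypothetical:

```python
# Hypothetical sketch: replace the 15-branch if/elif chain with one loop
# over a pattern -> value table. In Sikuli the key call is
# region.exists(image, 0): the 0-second timeout makes each miss return
# immediately instead of waiting out the default auto-wait, which is
# where most of the 10 seconds goes. FakeRegion stands in for a real
# Sikuli Region so the logic is runnable here.

PATTERNS = {"char_%d.png" % i: chr(ord("A") + i) for i in range(15)}

def identify(region, patterns):
    for image, value in patterns.items():
        if region.exists(image, 0):  # 0 = don't wait on a miss
            return value
    return None

class FakeRegion:
    def __init__(self, visible):
        self.visible = visible

    def exists(self, image, timeout=0):
        return image == self.visible

print(identify(FakeRegion("char_3.png"), PATTERNS))
```

Ordering the table so the most frequently seen characters come first shaves the average lookup further, since the loop returns on the first hit.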
I have described the issue in some more detail here.
OCR is still quite unreliable. There are ways to improve that but if you only have a limited set of characters, I reckon you will be better off using them as patterns. It will be quicker and more reliable.
As for Sikuli itself, the tool is under active development and is still relevant if it helps you solve your problem.
I am working on an image processing and computer vision project: counting the number of people entering a conference room. This needs to be done with OpenCV or Python.
I have already tried the Haar Cascade that is available in OpenCV for Upper body: Detect upper body portion using OpenCV
However, it does not meet the requirement. The link to the videos is as follows:
https://drive.google.com/open?id=0B3LatSCwKo2benZyVXhKLXV6R0U
If you view the sample1 file, at 0:16 a person enters the room; that is always how people will enter. The camera is mounted above the door.
Identifying People from this Aerial Video Stream
I think there is a simple way of approaching this problem. Background subtraction methods for detecting moving objects are just what you need because the video you provided seems to only have one moving object at any point: the person walking through the door. Thus, if you follow this tutorial in Python, you should be able to implement a satisfying solution for your problem.
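The core of background subtraction is just differencing each frame against a background model and thresholding. A NumPy-only sketch on synthetic frames shows the idea (a real implementation would use OpenCV's background subtractor from the tutorial, and the frame contents here are invented for illustration):

```python
import numpy as np

def foreground_mask(frame, background, thresh=30):
    """Mark pixels whose absolute difference from the background exceeds thresh."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff > thresh

# Synthetic demo: an empty 100x100 background, then a frame containing a
# bright 10x10 "person" blob.
background = np.zeros((100, 100), dtype=np.uint8)
frame = background.copy()
frame[40:50, 40:50] = 200
mask = foreground_mask(frame, background)
print(int(mask.sum()))  # number of foreground pixels
```

A blob is then "detected" whenever the foreground pixel count in the doorway region rises above some minimum area, which filters out sensor noise.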
Counting People Entering / Exiting
Now, the first question that pops to my mind is what might I do to count if multiple people are walking through the door at separate time intervals (one person walks in 10 seconds into the video and a second person walks in 20 seconds into the video)? Here's the simplest solution to this consideration that I can think of. Once you've detected the blob(s) via background subtraction, you only have to track the blob until it goes off the frame. Once it leaves the frame, the next blob you detect must be a new person entering the room and thus you can continue counting. If you aren't familiar with how to track objects once they have been detected, give this tutorial a read. In this manner, you'd avoid counting the same blob (i.e., the same person) entering too many times.
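The counting rule described above boils down to counting transitions from "no blob in frame" to "blob in frame" (rising edges). A minimal sketch over a per-frame presence sequence, with the frame data invented for illustration:

```python
def count_entries(blob_present):
    """Count frames where a blob appears after an empty frame (rising edges)."""
    count, prev = 0, False
    for present in blob_present:
        if present and not prev:
            count += 1
        prev = present
    return count

# Person 1 crosses during frames 2-4, person 2 during frames 7-9.
frames = [False, False, True, True, True, False, False, True, True, True]
print(count_entries(frames))  # -> 2
```

This is exactly the "wait until the blob leaves the frame before counting again" logic: a blob that stays visible across consecutive frames only increments the count once.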
The Difficulties in Processing Complex Dynamic Environments
If you think that there is a high level of traffic through that doorway, then the problem becomes much more difficult. This is because in that case there may not be much stationary background to subtract at any given moment, and further there may be a lot of overlap between detected blobs. There is a lot of active research in the area of autonomous pedestrian tracking and identification - so, in short, it's a difficult question that doesn't have a straightforward easy-to-implement solution. However, if you're interested in reading about some of the potential approaches you could take to solving these more challenging problems in pedestrian detection from an aerial view, I'd recommend reading the answers to this question.
I hope this helps, good luck coding!