How to match image based on border of image? - python

This is more of a 'what is this called' kind of question than a technical one. I have recently started playing with PyAutoGUI and I am using it to do some automation. In order to improve the speed of the overall function, I am trying to narrow down the 'region' in which it's looking. How would I identify a region by looking for a specific "border" while ignoring the internal contents? I don't really need any code, unless you're just that bored; I'm just trying to learn what techniques are available to accomplish this task, or maybe some helpful keywords that I can use in my search. I am having a very difficult time finding any resources that relate to my objective.
For example, how would I match the entire dimensions of the following picture regardless of what is inside the frame.
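One technique that seems to fit is masked template matching: OpenCV's matchTemplate accepts a mask, so you can zero out the interior of the template and score only the frame pixels. A rough sketch, not from the question itself; the file name "frame.png", the 5-pixel border width, and the 0.95 threshold are placeholders to tune:

```python
import cv2
import numpy as np
import pyautogui

# Grab the screen with PyAutoGUI and convert it to a BGR array for OpenCV.
screenshot = cv2.cvtColor(np.array(pyautogui.screenshot()), cv2.COLOR_RGB2BGR)

template = cv2.imread("frame.png")          # placeholder: image of the bordered window
h, w = template.shape[:2]

# Mask that keeps only a thin border of the template; interior pixels are ignored.
border = 5                                  # placeholder border width in pixels
mask = np.zeros((h, w), dtype=np.uint8)
mask[:border, :] = 255
mask[-border:, :] = 255
mask[:, :border] = 255
mask[:, -border:] = 255
mask = cv2.merge([mask, mask, mask])        # same channel count as the template

result = cv2.matchTemplate(screenshot, template, cv2.TM_CCORR_NORMED, mask=mask)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
if max_val > 0.95:                          # placeholder threshold; tune per UI
    region = (max_loc[0], max_loc[1], w, h) # (left, top, width, height) for PyAutoGUI
    print("frame found, restrict searches to", region)
```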

Related

How to properly process 3D Point Cloud data in python?

I am new to this forum, so this will be my first question ever (despite having used the forum for several years now :D).
What's my Problem:
I am now working at a company where we want to automate processes like finding the lowest and/or highest points/lines in classified 3D point cloud data (such as walls, roofs, ...). So I have a classified point cloud where I don't want to draw the lines for the lowest and highest points of walls or roofs or anything myself, but instead want to figure out how Python could do the job for me!
What I'd like to know:
To start, I'd like to know the best and proper way to process point cloud data using Python. I came up with several ideas by simply searching Google (such as laspy, open3d, ...), but I am very confused about which library I'd need for my mission, or where I should really put effort into learning a certain package.
So, I am grateful for your answers and suggestions (maybe a similar entry already exists that I haven't found?).
Thanks
Max
You might want to check out the Open3D Tutorials found here.
There isn't one that does exactly what you're looking for, but pretty damn close (IMO).
I'm not interested in doing what you're doing, but if I was this is where I'd figure it out.
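As a rough illustration of where Open3D fits in (the file name and the assumption that height is the z axis are mine, not from the question):

```python
import numpy as np
import open3d as o3d

# Load one classified segment (e.g. the points labelled "wall") from disk.
pcd = o3d.io.read_point_cloud("wall_segment.ply")   # placeholder file name
pts = np.asarray(pcd.points)                        # N x 3 array of x, y, z

# Lowest and highest points, assuming z is the vertical axis.
lowest = pts[np.argmin(pts[:, 2])]
highest = pts[np.argmax(pts[:, 2])]
print("lowest point :", lowest)
print("highest point:", highest)
```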

NLP how to go beyond simple intent finding--using context and targeting objects

Edit: Apologies if this isn't the proper space for this kind of question. I'd appreciate it if you could point me to a better forum.
I'm manually applying NLP rules to a chatbot.
Currently, I've a simple set of rules--actions that follow certain trigger words.
Ex: "Create the match on saturday."
This has been working for relatively simple phrases like the above example, where I expect a word like "create" and tag it as an action word, expect "match" and tag it as an object entity, "saturday" as a time, etc.
When I try to expand the scope of what the bot can handle, it becomes more complicated. Here's an example of something I'm trying to handle:
"Update the memo title of the match on saturday to 'Game Day'."
I'm not sure how to move forward.
I considered manually expanding (expecting) the entities, then trying a method where I still parse for action words, but if a certain threshold of varying objects is passed, I execute a subset of that action.
For example: update will obviously signal an action, but the addition of "memo", "title", "'Game day'", signals a subset of the action as there's more to this than a simpler "create match". Then, checking the additional objects like "title" against a predetermined set of entities will narrow down the intent to update + title.
I see many holes in this logic, esp. as the complexity even slightly increases.
This led me to the field of dependency parsing.
But upon looking into dependency parsing, I wonder whether it is feasible to implement manually.
I'll be using Python; it won't be deep-learning based.
What do you think of the basic rule-based method I've outlined? Is it something that sounds workable for a domain-specific bot?
Should I be looking into NLP libraries offered in Python, such as NLTK or spaCy, and using their features? My concern with features such as spaCy's dependency parser was that it would be overkill, or too much added complexity, for handling my domain-specific tasks. Furthermore, and likely because I haven't yet seen it used effectively in another project, I have doubts about the practicality of dependency parsers outside of academia or research.
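To make the dependency-parsing option concrete, here is a rough sketch of what spaCy would give you for the harder example (this assumes the en_core_web_sm model is installed; it is only an illustration, not a claim that it solves the intent problem):

```python
import spacy

nlp = spacy.load("en_core_web_sm")   # requires: python -m spacy download en_core_web_sm
doc = nlp("Update the memo title of the match on Saturday to 'Game Day'.")

# Print each token with its dependency label and syntactic head.
for token in doc:
    print(f"{token.text:10} {token.dep_:10} head={token.head.text}")

# The ROOT verb is the action; its object subtree is the thing being acted on.
root = next(t for t in doc if t.dep_ == "ROOT")
objects = [t for t in root.children if t.dep_ in ("dobj", "obj")]
print("action:", root.lemma_,
      "| object:", [" ".join(w.text for w in o.subtree) for o in objects])
```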
Edit: Apologies if this isn't the proper space for this kind of question. I'd appreciate it if you could point me in the right direction.

Matching a Pattern in a Region in Sikuli is very slow

I am automating a computer game using Sikuli as a hobby project, and hopefully to get good enough to make scripts that help me at my job. In a certain small region (20x20 pixels), one of 15 characters will appear. Right now I have these 15 images defined as variables, and then, using an if/elif chain, I call Region.exists(). If one of my images is present in the region, I assign the appropriate value to a variable.
I am doing this for two areas on the screen and then based on the combination of characters the script clicks appropriately.
The problem right now is that running the 15 if statements takes approximately 10 seconds. I was hoping to do this recognition in closer to 1 second.
These are just text characters but the OCR feature was not reading them reliably and I wanted close to 100% accuracy.
Is this an appropriate way to do OCR? Is there a better way you guys can recommend? I haven't done much coding in the last 3 years, so I am wondering if OCR has improved and if Sikuli is even still a relevant program. Seeing as this is just a hobby project, I am hoping to stick to free solutions.
Sikuli operates by scanning a screen (or a part of a screen) and attempting to match a set pattern. Naturally, the smaller the pattern is, the more time it takes to match. There are a few ways to improve the detection time:
Region and Pattern manipulation (bound region size)
Functions settings (reduce minimum wait time)
Configuration (amend scan rate)
I have described the issue in some more detail here.
OCR is still quite unreliable. There are ways to improve that but if you only have a limited set of characters, I reckon you will be better off using them as patterns. It will be quicker and more reliable.
As for Sikuli itself, the tool is under active development and is still relevant if it helps you solve your problem.
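To make the first two points concrete, a rough SikuliX (Jython) sketch; the region coordinates, image names, similarity and timeout values are placeholders:

```python
from sikuli import *            # implicit when running inside the Sikuli IDE

Settings.WaitScanRate = 10      # scans per second while waiting for a match

reg = Region(100, 200, 20, 20)  # placeholder: the 20x20 area where a character appears
chars = {
    "A": Pattern("char_A.png").similar(0.9),
    "B": Pattern("char_B.png").similar(0.9),
    # ... one entry per character image
}

found = None
for name, pat in chars.items():
    if reg.exists(pat, 0.1):    # 0.1 s timeout instead of the 3 s default
        found = name
        break
print("matched:", found)
```

The short timeout is usually the big win: with the default 3-second wait, every pattern that is not on the screen blocks for the full wait, which would explain most of the 10 seconds.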

Questions about approach for background music generation for songs [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I have a project proposal for music lovers who have no knowledge of audio processing. I think the project is interesting, but I don't have a clear picture of how to implement it.
The project proposal: some people like singing, but they cannot find appropriate musical accompaniment (background music). People who can play guitar can sing while playing it (the rhythm provided by the guitar is the background music). The project is to achieve a similar result to playing guitar for people who sing.
I think to implement this project, the following components are required:
Musical knowledge (how a guitar plays as background music; maybe a simple pattern will work)
Signal/audio processing
Key detection
Beat detection
Chord matching
Is there any other component I missed to achieve my purpose? Are there any libraries that can help me? The project is supposed to be completed in 1.5 months. Is that possible? (I just expect it to work like a guitar beginner playing background music.) As for development languages, I will not use C/C++. Currently my favourite is Python, but I could possibly use another programming language as long as it helps simplify the implementation.
I have no musical background and have only studied very basic audio processing. Any suggestions or comments are appreciated.
Edited Information:
I tried to search for auto accompaniment, and there is some software. I didn't find any open source project for it, and I want to know the details of how it processes the audio information. If you know of any open source project about it, please share your knowledge, thank you.
You might start by considering what a guitarist would have to do to successfully accompany a singer in a situation where they have no prior knowledge of the key, chord progression, or rhythm of the song (not to mention its structure, style, etc.).
Doing this in real-time in a situation where the accompanist (human or computer) has not heard the song before will be difficult, as it will take some time to analyse what's being sung in order to make appropriate musical choices about the accompaniment. A guitarist or other musician having this ability in the real world would be considered highly skilled.
It sounds like a very challenging project for 1.5 months if you have no musical background. 'maybe simple pattern will work' - maybe, but there are a huge number of simple patterns possible!
Less ambitious projects might be:
record a whole song and analyse it, then render a backing (still a lot of work!)
create a single harmony line or part, in the same way that vocal harmoniser effects do
generate a backing based on a chord progression input by the user
Edit in reply to your first comment:
If you want to generate a full accompaniment, you will need to (as you say) deal with both the key and chord progression, and the timing (including the time signature and detecting which beat of the bar is 'beat 1').
Getting this level of timing information may be difficult, as beat detection from voice alone is not going to be possible using the standard techniques used to get the beat from a song (looking for amplitude peaks in certain frequency ranges).
You might still get good results by not calculating timing at all, and simply playing your chords in time with the start of the sung notes (or a subset of them).
All you would then need to do is
detect the notes. This post is about detecting pitch in python: Python frequency detection. Amplitude detection is more straightforward.
come up with an algorithm for working out the root note of the piece (and - more ambitiously - places where it changes). In some cases it may be hard to discern from the melody alone. You could start by assuming that the first note or most common note is the root.
come up with an algorithm for generating a chord progression (do a web search for 'harmonising a melody'). Obviously there is no objectively right or wrong way to do this and you will likely only be able to do this convincingly for a limited range of styles. You might want to start by assuming a limited subset of chords, e.g. I, IV, V. These should work on most simple 'nursery rhyme' style tunes.
Of course if you limit yourself to simple tunes that start on beat one, you might have an easier time working out the time signature. In general I think your route to success will be to try to deal with the easy cases first and then build on that.
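A rough sketch of steps 1 and 2 using librosa (my choice of library, not mentioned above; the file name is a placeholder and the recording is assumed to be a monophonic vocal):

```python
from collections import Counter

import librosa

y, sr = librosa.load("voice.wav")                       # placeholder file name

# Step 1: frame-wise pitch estimates for the sung melody.
f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                             fmax=librosa.note_to_hz("C6"))
notes = [librosa.hz_to_note(f) for f in f0[voiced]]     # e.g. 'C4', 'G4', ...

# Step 2: naively assume the most common pitch class is the root.
pitch_classes = [n[:-1] for n in notes]                 # strip the octave digit
root = Counter(pitch_classes).most_common(1)[0][0]
print("estimated root:", root)
```

Step 3 (picking I, IV or V for each phrase) would then work on the note names relative to that root.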

Object extraction from Images with python

I want to do object extraction from images. For example, I want to count the number of humans in a picture, find similar pictures in a large database (like the Google example), or determine the setting of a picture (nature, office, home), etc.
Do you know of any Python library or module for doing this kind of work?
If you can, please link me to
a tutorial or instructions for this kind of work
a similar example project
Perhaps using simplecv?
Here is a video of a presenter at pycon who runs through a quick tutorial of how to use simplecv. About half-way through, at 9:50, she demonstrates how to detect faces in an image, which you might be able to use for your project.
Try this out: https://github.com/CMU-Perceptual-Computing-Lab/openpose
I used it to detect multiple persons and extract the skeleton joints. It's also a little sensitive, so post-processing needs to be done to remove outliers caused by reflections on the floor, glass walls, etc.
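If OpenPose is heavier than you need, a rough sketch of counting people with OpenCV's built-in HOG person detector (the file names are placeholders):

```python
import cv2

img = cv2.imread("crowd.jpg")                  # placeholder input image
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

boxes, weights = hog.detectMultiScale(img, winStride=(8, 8))
print("people detected:", len(boxes))

# Draw the detections for a quick visual sanity check.
for (x, y, w, h) in boxes:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("crowd_detected.jpg", img)
```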
