Python and raspberry picamera object counting, counts multiple times - python

Recently I've been working on a project using the Raspberry Pi camera, OpenCV and Python to count people passing by a specific area, live, since for my use case that is easier than processing a recorded video.
Overall the code works, but I've been experiencing two problems with the counting part:
1 - If an object stays on the reference line, it keeps adding to the count;
2 - Sometimes, depending on the speed of the object, it is counted multiple times;
I am not an expert in Python, and may be lacking the English words to search for the proper solution, so I thought maybe someone could tell me what would work better here. To illustrate, here is a GIF sample:
Even though it looks like more than one reference box is crossing the line, that happens when only one box crosses it, as well as when the object stays on the line.
This is the code that checks if the object is crossing the line:
if TestaInterseccaoEntrada(CoordenadaYCentroContorno, CoordenadaYLinhaEntrada, CoordenadaYLinhaSaida):
    ContadorEntradas += 1
if TestaInterseccaoSaida(CoordenadaYCentroContorno, CoordenadaYLinhaEntrada, CoordenadaYLinhaSaida):
    ContadorSaidas += 1
I thought of using some kind of delay with time.sleep(x) in the loop, but that obviously doesn't solve it, and it also looks bad =D.
If needed, I can post the rest of the code, but to keep things tidy it lives here: Code Paste
Don't mind any bad syntax or errors; part of it is not mine, and the part that is looks terrible! XD
Thanks in advance.

Cool project! It's quite a challenge to count the number of bounding boxes that pass a line if you don't track them. It's even harder if you want to count them going both ways.
Because of this difficulty, people usually prefer to track the object and then look at the trajectory to determine whether it passed the line or not.
This link can help you understand the difference. It also provides code to do the detection (but you already have that part working) and the tracking (which you will need):
https://www.pyimagesearch.com/2018/08/13/opencv-people-counter/
Next to that, the easiest way to track is by linking the boxes with the highest IoU (intersection over union). A good and easy implementation can be found here:
https://github.com/bochinski/iou-tracker
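If it helps to see the idea concretely, here is a minimal sketch of IoU-based matching combined with trajectory-based counting. It is not the code from the linked repository; the threshold, the track dictionaries and the helper names (iou, update_tracks) are just illustrative, and a real tracker would also drop tracks that have not been matched for a few frames.
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def update_tracks(tracks, detections, line_y, counts, iou_thr=0.3):
    """Match each detection to the track with the highest IoU; count a track
    only once, and only when its centre actually crosses line_y."""
    for det in detections:
        best = max(tracks, key=lambda t: iou(t['box'], det), default=None)
        if best is not None and iou(best['box'], det) >= iou_thr:
            prev_cy = (best['box'][1] + best['box'][3]) / 2
            cy = (det[1] + det[3]) / 2
            if not best['counted'] and prev_cy < line_y <= cy:
                counts['entradas'] += 1      # crossed the line moving down
                best['counted'] = True
            elif not best['counted'] and prev_cy > line_y >= cy:
                counts['saidas'] += 1        # crossed the line moving up
                best['counted'] = True
            best['box'] = det
        else:
            tracks.append({'box': det, 'counted': False})

# Per frame, with the bounding boxes from your contour detection:
# tracks, counts = [], {'entradas': 0, 'saidas': 0}
# update_tracks(tracks, boxes, CoordenadaYLinhaEntrada, counts)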
Good luck!

Related

How to determine which method of OCR to use depending on image quality

I am asking because my two weeks of research have started to really confuse me.
I have a bunch of images from which I want to get the numbers at runtime (this is needed for the reward function in reinforcement learning). The thing is that they are pretty clear to me (I know it is an absolutely different thing for OCR systems, but that's why I am providing additional images to show what I am talking about).
And because they are rather clear, I first tried PyTesseract; when that did not work out, I tried to research which other methods could be useful to me.
... and that's how my search ended here, because two weeks of trying to find out which method would best suit my problem just raised more questions.
Currently I think the best solution is to train a digit-recognition model on the MNIST/SVHN datasets, but isn't that a little bit overkill? I mean, the images are standardized, they are grayscale, they are small, and the font of the numbers stays the same, so I suppose there is an easier way of modifying those images or using a different OCR method.
That is why I am asking two questions:
1. Which method would be the most useful for my case, if not a model trained on the MNIST/SVHN datasets?
2. Is there any kind of documentation/books/sources which could make the actual choice of infrastructure easier? I mean, let's say that in the future I again have to plan which OCR system to use. On what basis should I make the choice? Is it purely a trial-and-error thing?
If what you have to recognize are those 7-segment digits, forget about any OCR package.
Use the outline of the window to find the size and position of the digits. Then count the black pixels in seven predefined areas, one facing each segment.
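To make that concrete, here is a minimal sketch of the idea, assuming you already have a thresholded crop of each digit with dark ink on a light background. The segment sample positions, the window size and the dark_fraction threshold are illustrative and would need tuning for your images.
import numpy as np

# Relative (x, y) sample points for the 7 segments inside a digit's box:
# 0: top, 1: top-left, 2: top-right, 3: middle, 4: bottom-left,
# 5: bottom-right, 6: bottom
SEGMENTS = {
    0: (0.5, 0.1), 1: (0.15, 0.3), 2: (0.85, 0.3), 3: (0.5, 0.5),
    4: (0.15, 0.7), 5: (0.85, 0.7), 6: (0.5, 0.9),
}
# Which segments are lit for each digit 0-9
DIGITS = {
    (1, 1, 1, 0, 1, 1, 1): 0, (0, 0, 1, 0, 0, 1, 0): 1,
    (1, 0, 1, 1, 1, 0, 1): 2, (1, 0, 1, 1, 0, 1, 1): 3,
    (0, 1, 1, 1, 0, 1, 0): 4, (1, 1, 0, 1, 0, 1, 1): 5,
    (1, 1, 0, 1, 1, 1, 1): 6, (1, 0, 1, 0, 0, 1, 0): 7,
    (1, 1, 1, 1, 1, 1, 1): 8, (1, 1, 1, 1, 0, 1, 1): 9,
}

def read_digit(binary_digit, dark_fraction=0.4):
    """binary_digit: 2D array of one digit, 0 = black ink, 255 = white."""
    h, w = binary_digit.shape
    pattern = []
    for i in range(7):
        rx, ry = SEGMENTS[i]
        x, y = int(rx * w), int(ry * h)
        # Sample a small window around each segment centre and count dark pixels
        win = binary_digit[max(0, y - h // 10):y + h // 10 + 1,
                           max(0, x - w // 10):x + w // 10 + 1]
        pattern.append(int((win < 128).mean() > dark_fraction))
    return DIGITS.get(tuple(pattern))  # e.g. 5, or None if the pattern is unknown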

Python: Grabbing Pixel RGB

This question may be a little different, since I'm pretty much a noob at programming. I've recently started playing a Pokémon game, and I thought of an idea for a cool Python program that would grab the color of a certain pixel to detect whether a Pokémon is shiny or not.
However, due to my very limited programming experience, I don't know what modules to use and how to use them.
So basically, here's what I want it to do:
1. Move the cursor to a certain pixel and click.
2. Detect the color of that pixel and compare it to the desired color.
3. If it's not the desired color, click a button and loop again until it is.
So, it's pretty obvious that we'll need a while loop, but can someone explain how to do the above three things in relatively simple terms? Thanks.
Try breaking down this list into actions and searching for answers to each action.
For example, step 1 is performed by the user, so we don't have to program that.
For 2, we need to determine the location of the mouse when clicked and get the color under it.
For 3, compare the RGB values (or whatever) to the desired values for that Pokémon. This is complicated because your program needs to figure out which Pokémon it is checking against. There are probably Pokémon whose regular color is another's shiny. Try breaking this down into even smaller problems :)
No guarantees that these links will be perfect; I'm just trying to show how you need to break the problem down into smaller, workable chunks which you can address either directly in code or by searching for other people who have already solved those smaller problems.
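As a rough sketch of what steps 2 and 3 might look like, here is one way to do it with Pillow and pyautogui. The pixel coordinate, the target colour, the button position and the tolerance are all placeholders you would have to work out for your own game window.
import time
import pyautogui
from PIL import ImageGrab

CHECK_PIXEL = (640, 360)       # hypothetical screen coordinate to sample
SHINY_COLOR = (200, 180, 90)   # hypothetical RGB of the shiny sprite
RESET_BUTTON = (100, 500)      # hypothetical button that re-rolls the encounter
TOLERANCE = 20                 # how far each channel may be off and still match

def is_close(color, target, tol=TOLERANCE):
    return all(abs(c - t) <= tol for c, t in zip(color, target))

while True:
    # Step 2: grab the screen and read the colour under the check pixel
    pixel = ImageGrab.grab().getpixel(CHECK_PIXEL)
    if is_close(pixel[:3], SHINY_COLOR):
        print("Looks shiny:", pixel)
        break
    # Step 3: not the colour we want, so click the reset button and loop again
    pyautogui.click(*RESET_BUTTON)
    time.sleep(1)  # give the game a moment before checking again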

What does abstraction mean in programming?

I'm learning Python and I'm not sure I understand the following statement: "The function (including its name) can capture our mental chunking, or abstraction, of the problem."
It's the part in bold whose meaning I don't understand in terms of programming. The quote comes from http://www.openbookproject.net/thinkcs/python/english3e/functions.html
How to Think Like a Computer Scientist, 3rd edition.
Thank you!
Abstraction is a core concept in all of computer science. Without abstraction, we would still be programming in machine code, or worse, not have computers in the first place. So IMHO that's a really good question.
What is abstraction
Abstracting something means to give names to things, so that the name captures the core of what a function or a whole program does.
One example is given in the book you reference, where it says:
Suppose we’re working with turtles, and a common operation we need is to draw squares. “Draw a square” is an abstraction, or a mental chunk, of a number of smaller steps. So let’s write a function to capture the pattern of this “building block”:
Forget about the turtles for a moment and just think of drawing a square. If I tell you to draw a square (on paper), you immediately know what to do:
draw a square => draw a rectangle with all sides of the same length.
You can do this without further questions because you know by heart what a square is, without me telling you step by step. Here, the word square is the abstraction of "draw a rectangle with all sides of the same length".
Abstractions run deep
But wait, how do you know what a rectangle is? Well, that's another abstraction for the following:
rectangle => draw two lines parallel to each other, of the same length, and then add another two parallel lines perpendicular to the first two, again of the same length as each other, but possibly of a different length than the first two.
Of course it goes on and on - lines, parallel, perpendicular, connecting are all abstractions of well-known concepts.
Now, imagine each time you want a rectangle or a square to be drawn you have to give the full definition of a rectangle, or explain lines, parallel lines, perpendicular lines and connecting lines -- it would take far too long to do so.
The real power of abstraction
That's the first power of abstractions: they make talking and getting things done much easier.
The second power of abstractions comes from the nice property of composability: once you have defined abstractions, you can compose two or more of them to form a new, larger abstraction. Say you are tired of drawing squares and really want to draw a house. Assuming we have already defined a triangle, we can then define:
house => draw a square with a triangle on top of it
Next, you want a village:
village => draw multiple houses next to each other
Oh wait, we want a city -- and we have a new concept street:
city => draw many villages close to each other, fill empty spaces with more houses, but leave room for streets
street => (some definition of street)
and so on...
How does this all apply to programming?
If, in the course of planning your program (a process known as analysis and design), you find good abstractions for the problem you are trying to solve, your programs become shorter and hence easier to write and, maybe more importantly, easier to read. The way to do this is to try to grasp the major concepts that define your problem -- as in the (simplified) example of drawing a house, these were squares and triangles; to draw a village, it was houses.
In programming, we define abstractions as functions (and some other constructs like classes and modules, but let's focus on functions for now). A function essentially gives a name to a set of statements, so a function is itself an abstraction -- see the examples in your book for details.
The beauty of it all
In programming, abstractions can make or break productivity. That's why commonly used functions are often collected into libraries which can be reused by others. This means you don't have to worry about the details; you only need to understand how to use the ready-made abstractions. Obviously that should make things easier for you, so you can work faster and thus be more productive:
Example:
Imagine there is a graphics library called "nicepic" that contains predefined functions for all the abstractions discussed above: rectangles, squares, triangles, houses, villages.
Say you want to create a program, based on the above abstractions, that paints a nice picture of a house; all you have to write is this:
import nicepic
nicepic.draw_house()
So that's just two lines of code to get something much more elaborate. Isn't that just wonderful?
A great way to understand abstraction is through abstract classes.
Say we are writing a program which models a house. The house is going to have several different rooms, which we will represent as objects. We define a class for a Bathroom, Kitchen, Living Room, Dining Room, etc.
However, all of these are rooms and thus share several properties (number of doors/windows, square feet, etc.), BUT a Room can never exist on its own... it's always going to be some specific type of room.
It then makes sense to create an abstract class called Room, which will contain the properties all rooms share, and then have the classes of Kitchen, Living Room, etc, inherit the abstract class Room.
The concept of a room is abstract and only exists in our head, because any room that actually exists isn't just a room; it's a bedroom or a living room or a classroom.
We want our code to thus represent our "mental chunking". It makes everything a lot neater and easier to deal with.
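As a minimal Python sketch of that idea (the class names and attributes here are made up for illustration), the abc module lets you declare Room as abstract so it can never be instantiated on its own:
from abc import ABC, abstractmethod

class Room(ABC):
    """Abstract base: holds what every room shares, but can't be instantiated."""
    def __init__(self, square_feet, doors, windows):
        self.square_feet = square_feet
        self.doors = doors
        self.windows = windows

    @abstractmethod
    def describe(self):
        ...

class Kitchen(Room):
    def describe(self):
        return f"A {self.square_feet} sq ft kitchen with {self.windows} windows"

class Bathroom(Room):
    def describe(self):
        return f"A {self.square_feet} sq ft bathroom"

# Room(100, 1, 0)        # TypeError: can't instantiate the abstract class
print(Kitchen(150, 2, 3).describe())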
As defined on wikipedia: Abstraction_(computer_science)
Abstraction tries to factor out details from a common pattern so that programmers can work close to the level of human thought, leaving out details which matter in practice, but are not exigent to the problem being solved.
Basically, it means removing the details of the problem. For example, drawing a square requires several steps, but I just want a function that draws a square.
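Roughly like the book's own turtle example: the several steps live inside the function, and the caller only deals with the named abstraction.
import turtle

def draw_square(t, size):
    """Hide the four forward/turn steps behind one named abstraction."""
    for _ in range(4):
        t.forward(size)
        t.left(90)

t = turtle.Turtle()
draw_square(t, 100)   # the caller thinks "square", not "forward, turn, ..."
turtle.done()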
Let's say you write a function which receives a bunch of text as a parameter, then reads credentials from a config file, then connects to an SMTP server using those credentials and sends a mail with that text.
The function should be named sendMail(text), not parseTextReadCredentialsInFileConnectToSmtpThenSend(text), because it is easier to represent what it does this way, both to yourself and when presenting the API to coworkers or users... even though the second name is more accurate, the first is a better abstraction.
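As a rough sketch of what such a sendMail(text) might look like (the config file name, section and keys below are placeholders, not an API you must follow):
import configparser
import smtplib
from email.message import EmailMessage

def sendMail(text):
    """Callers only see 'send this text'; the config and SMTP details are hidden."""
    config = configparser.ConfigParser()
    config.read("credentials.ini")        # hypothetical config file
    smtp_cfg = config["smtp"]             # hypothetical section with host/port/user/...

    msg = EmailMessage()
    msg.set_content(text)
    msg["Subject"] = "Automated mail"
    msg["From"] = smtp_cfg["user"]
    msg["To"] = smtp_cfg["recipient"]

    with smtplib.SMTP(smtp_cfg["host"], int(smtp_cfg["port"])) as server:
        server.starttls()
        server.login(smtp_cfg["user"], smtp_cfg["password"])
        server.send_message(msg)

sendMail("Hello from behind the abstraction!")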
In a simple sentence, I can say: The essence of abstraction is to extract essential properties while omitting inessential details. But why should we omit inessential details? The key motivator is preventing the risk of change.
The best way to describe something is with examples:
A function is nothing more than a series of commands to get stuff done. Basically, it lets you organize a chunk of code that does a single thing. That single thing can be reused over and over throughout your program.
Now that your function does this thing, you should name it so that it's instantly identifiable what it does. Once you have named it, you can reuse it all over the place by simply calling its name.
def bark():
    print("woof!")
Then to use that function you can just do something like:
bark()
What happens if we want this to bark 4 times? Well, you could write bark() 4 times:
bark()
bark()
bark()
bark()
Or you could modify your function to accept some type of input, to change how it works.
def bark(times):
    i = 0
    while i < times:
        i = i + 1
        print("woof")
Then we could just call it once:
bark(4)
When we start talking about object-oriented programming (OOP), abstraction means something different. You'll discover that part later :)
Abstraction is a very important concept, both in hardware and in software.
Importance: We humans cannot keep everything in mind at all times. For example, if your friend quickly speaks 30 random numbers and asks you to add them all, you won't be able to do it. Reason? You cannot hold all those numbers in your head. Even if you write the numbers on paper, you will still add the rightmost digits one by one, ignoring the digits to the left at one moment, and then ignoring the rightmost digits once you have added them.
It shows that at any one time we humans can focus on one particular issue while ignoring the parts that are already solved, moving our focus towards what is left to solve.
Ignoring the less important things and focusing on the most important ones (for the time being and in a particular context) is called abstraction.
Here is how abstraction works in programming.
Below is the world's famous hello world program in C language:
// C hello world example: hello.c
#include <stdio.h>

int main()
{
    printf("Hello world\n");
    return 0;
}
This is the simplest and usually the first computer program a programmer writes. When you compile and run this program at the command prompt, it simply prints Hello world.
Here are the serious questions:
A computer only understands binary code, so how was it able to run your English-like code? You may say that you compiled the code to binary using a compiler. Did you write the compiler to make your program work? No, you did not need to. You installed the GNU C compiler on your Linux system and just used it by giving the command:
gcc -o hello hello.c
and it converted your English-like C language code to binary code, which you could then run by giving the command:
./hello
So, to write an application in C, you never need to know how the C compiler converts C language code to binary code. You used the GCC compiler as an abstraction.
Did you write the code for the main() and printf() functions? No. Both are already defined by someone else in the C language. When we run a C program, it looks for the main() function as the starting point of the program, while the printf() function prints output on the computer screen and is already declared in stdio.h, which is why we have to include it in the program. If both functions were not already written, we would have to write them ourselves just to print two words, and computers would be the most boring machines on earth. Here again you use abstraction, i.e. you don't need to know how printf prints text on the monitor; all you need to know is how to give input to the printf function so that it shows the desired output.
I did not extend the answer to the abstractions of the operating system, kernel, firmware and hardware, for the sake of simplicity.
Things to remember:
While programming, you can use abstraction in a variety of ways to make your program simpler and easier.
Example 1: You can use a constant to abstract the value of PI, 3.14159, in your program, because PI is easier to remember than 3.14159 for the rest of the program.
Example 2: You can write a function which returns the square of a given number, and then anyone, including you, can use that function by giving it input as a parameter and getting a return value from it (see the sketch at the end of this answer).
Example 3: In object-oriented programming (OOP), for example in Java, you may define an object which encapsulates data and methods, and you can use that object by invoking its methods.
Example 4: Many applications provide an API which you use to interact with them. When you use API methods, you never need to know how they are implemented. So abstraction is there, too.
Through all these examples, you can see the importance of abstraction and how it is used in programming. One key thing to remember is that abstraction is contextual, i.e. it keeps changing with the context.
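Here is the small Python sketch of Examples 1 and 2 mentioned above; the constant and the function name are of course just illustrative:
import math

PI = math.pi          # Example 1: the name PI hides the literal 3.14159...

def square(number):
    """Example 2: callers say square(x) instead of writing x * x everywhere."""
    return number * number

print(square(4))            # 16
print(PI * square(2))       # area of a circle with radius 2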

Pygame level bitmask checking optimisation?

Pygame offers a pretty neat bitmask collision function for sprites, but it's really slow when you are comparing large images. I've got my level image, which worked fine when it was 400x240, but when I changed the resolution to something (a lot) bigger, the game suddenly became unplayably slow.
I was curious whether there is a way to somehow crop the bitmask of a sprite, so it does not have to do as many calculations. Alternatively, I could split the whole stage sprite into different 'panels' and have the collision check run only against the closest one (or the closest two or four, if the player is on the edge of a panel). But I have no idea how to split an image into several sprites. Any other suggestions would also be appreciated.
I have seen many places on the internet saying not to bother with bitmask level collision, because it is far too slow, and that I should use tile-based collision instead. However, I think a bitmask makes things a lot more flexible and opens up the possibility of level destruction (like in the Worms games), so I would prefer to keep it bitmask-based.
I think I've explained it enough not to need to post my code, but please tell me if you really need it.
Many thanks!
Okay, I worked out a fix that actually wasn't any of these things... Basically, I realised that I was only colliding one pixel with the stage at a time, so I used the Mask.get_at() function. I'm kind of annoyed this didn't occur to me before. I've heard that using it can be quite slow, though, so if somebody would be willing to offer a faster alternative to get_at(), that would be nice.
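For anyone landing here, this is roughly what that single-pixel check looks like with the real pygame.mask API; the file name and the foot-point check are placeholders.
import pygame

pygame.init()
screen = pygame.display.set_mode((800, 600))                    # display needed before convert_alpha()
level_surface = pygame.image.load("level.png").convert_alpha()  # hypothetical level image
level_mask = pygame.mask.from_surface(level_surface)            # set bits where not fully transparent

def pixel_is_solid(x, y):
    """True if the level pixel at (x, y) is part of the terrain."""
    w, h = level_mask.get_size()
    if 0 <= x < w and 0 <= y < h:
        return level_mask.get_at((x, y)) != 0
    return False

# e.g. each frame, check the pixel just below the player's feet:
# on_ground = pixel_is_solid(player_rect.centerx, player_rect.bottom + 1)
If get_at() itself ever turns out to be the slow part, one possible alternative is to copy the level's alpha channel into a NumPy array once with pygame.surfarray.array_alpha(level_surface) and index that array instead, which avoids a per-check call into the mask object.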

Preserve code readability while optimising

I am writing a scientific program in Python and C with some complex physical simulation algorithms. After implementing the algorithms, I found a lot of possible optimizations to improve performance. Common ones are precalculating values, moving calculations out of loops, and replacing simple matrix algorithms with more complex ones. But a problem arises: the unoptimized algorithm is much slower, yet its logic and its connection with the theory look much clearer and more readable. Also, the optimized algorithm is harder to extend and modify.
So, the question is: what techniques should I use to keep readability while improving performance? Right now I am trying to keep both a fast branch and a clear branch and develop them in parallel, but maybe there are better methods?
Just as a general remark (I'm not too familiar with Python): I would suggest you make sure that you can easily exchange the slow parts of the 'reference implementation' with the 'optimized' parts (e.g., use something like the Strategy pattern).
This will allow you to cross-validate the results of the more sophisticated algorithms (to ensure you did not mess up the results), and will keep the overall structure of the simulation algorithm clear (separation of concerns). You can place the optimized algorithms into separate source files / folders / packages and document them separately, in as much detail as necessary.
Apart from this, try to avoid the usual traps: don't do premature optimization (check if it is actually worth it, e.g. with a profiler), and don't re-invent the wheel (look for available libraries).
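A minimal sketch of that Strategy-style setup, with made-up names (reference_step, optimized_step, simulate) standing in for your simulation: the readable reference implementation stays as the ground truth, and any optimized variant can be dropped in and cross-validated against it.
from typing import Callable, List, Sequence

def reference_step(state: Sequence[float], dt: float) -> List[float]:
    """Slow but readable: mirrors the equations from the theory."""
    return [x + dt * (-x) for x in state]          # e.g. simple exponential decay

def optimized_step(state: Sequence[float], dt: float) -> List[float]:
    """Faster rewrite; must produce the same results as reference_step."""
    factor = 1.0 - dt                              # precalculated once per call
    return [x * factor for x in state]

def simulate(step: Callable[[Sequence[float], float], List[float]],
             state, dt=0.01, n_steps=1000):
    """The simulation only depends on the interchangeable step strategy."""
    for _ in range(n_steps):
        state = step(state, dt)
    return state

# Cross-validate the optimized strategy against the readable reference:
ref = simulate(reference_step, [1.0, 2.0])
opt = simulate(optimized_step, [1.0, 2.0])
assert all(abs(a - b) < 1e-9 for a, b in zip(ref, opt))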
Yours is a very good question that arises in almost every piece of code, however simple or complex, that's written by any programmer who wants to call himself a pro.
I try to remember and keep in mind that a reader newly come to my code has pretty much the same crude view of the problem and the same straightforward (maybe brute force) approach that I originally had. Then, as I get a deeper understanding of the problem and paths to the solution become clearer, I try to write comments that reflect that better understanding. I sometimes succeed and those comments help readers and, especially, they help me when I come back to the code six weeks later. My style is to write plenty of comments anyway and, when I don't (because: a sudden insight gets me excited; I want to see it run; my brain is fried), I almost always greatly regret it later.
It would be great if I could maintain two parallel code streams: the naïve way and the more sophisticated optimized way. But I have never succeeded in that.
To me, the bottom line is that if I can write clear, complete, succinct, accurate and up-to-date comments, that's about the best I can do.
Just one more thing that you know already: optimization usually doesn't mean shoehorning a ton of code onto one source line, perhaps by calling a function whose argument is another function whose argument is another function whose argument is yet another function. I know that some do this to avoid storing a function's value temporarily. But it does very little (usually nothing) to speed up the code and it's a bitch to follow. No news to you, I know.
It is common to assume you must give up readability to get performance.
That's not necessarily so.
You need to find out what exactly it is spending so much time doing, and why.
Notice, I didn't say you need to do any measuring.
Here's an example of what I mean.
Chances are very good that you can make some simple changes to avoid wasted motion, but don't fix anything until the program itself has told you what to fix.
def whatYouShouldDo(servings, integration_method=oven):
    """
    Make chicken soup
    """
    # Comments:
    # They are important. With some syntax highlighting, the comments are
    # the first thing a new programmer will look for. Therefore, they should
    # motivate your algorithm, outline it, and break it up into stages.
    # You can MAKE IT FEEL AS IF YOU ARE READING TEXT, interspersing code
    # amongst the text.
    #
    # Algorithm overview:
    # To make chicken soup, we will acquire chicken broth and some noodles.
    # Preprocessing ingredients is done to optimize cooking time. Then we
    # will output in SOUP format via stdout.
    #
    # BEGIN ALGORITHM
    #
    # Preprocessing:
    # 1. Thaw chicken broth.
    broth = chickenstore.deserialize()
    # 2. Mix with noodles
    if noodles not in cache:
        with begin_transaction(locals=poultry) as t:
            cache[noodles] = t.buy(noodles)  # get from local store
    noodles = cache[noodles]
    # 3. Perform 4th-order Runge-Kutta numerical integration
    from kitchensink import pan  # FIXME: importing mid-function is poor form, better at the top of the file
    result = boilerplate.RK4(semidirect_product(broth, noodles))
    # 4. Serve hot
    log.debug('attempting to serve')
    log.debug('serve successful')
    return result
Also see http://en.wikipedia.org/wiki/Literate_programming#Example
I've also heard that this is what http://en.wikipedia.org/wiki/Aspect-oriented_programming attempts to help with, though I haven't really looked into it. (It just seems to be a fancy way of saying "put your optimizations and your debugs and your other junk outside of the function you're writing".)
