I am currently building a smart RC car demo with a Raspberry Pi 3B plus some sensors and actuators.
Basically, the car should run autonomously in a controlled indoor environment: follow a line on the ground, detect and avoid obstacles in its path, take a picture of each obstacle, and so on.
My first idea for the architecture is shown below (you can ignore the camera part):
However, using a file in the middle to communicate between the different processes seems far from optimal.
Since the sensors need to be polled at different frequencies, I think multiprocessing is probably the right way to solve this.
I did some searching but am still not clear on how to architect it with multiprocessing.
Any advice would be really appreciated.
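For illustration, here is a minimal sketch of one possible queue-based layout: each sensor runs in its own process at its own rate and pushes readings into a shared multiprocessing.Queue, and a single controller process consumes them. The sensor names, polling rates, and thresholds below are placeholders, not part of any real design.

```python
import random
import time
from multiprocessing import Process, Queue

# Placeholder reads standing in for the real GPIO/sensor code (assumptions).
def read_line_position():
    return random.uniform(-1.0, 1.0)

def read_distance_cm():
    return random.uniform(5.0, 100.0)

def line_sensor(q):
    """Hypothetical line-tracking sensor polled at 50 Hz."""
    while True:
        q.put(("line", read_line_position()))
        time.sleep(0.02)

def distance_sensor(q):
    """Hypothetical ultrasonic sensor polled at 10 Hz."""
    while True:
        q.put(("distance", read_distance_cm()))
        time.sleep(0.1)

def controller(q):
    """Single consumer: reacts to whichever reading arrives next."""
    while True:
        kind, value = q.get()                # blocks until any sensor posts a reading
        if kind == "distance" and value < 20:
            print("obstacle ahead:", value)  # e.g. stop motors, take a picture
        elif kind == "line":
            pass                             # e.g. adjust steering

if __name__ == "__main__":
    q = Queue()
    for target in (line_sensor, distance_sensor):
        Process(target=target, args=(q,), daemon=True).start()
    controller(q)
```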
best regards,
Is there any mediapipe module that detects hands WITHOUT detecting their pose?
The reason is that the examples I find on the internet end up running slowly on my computer, and I don't need to know the position of the fingers, just the hand.
I tried to Google it, but all the videos/tutorials I find use the same code (which detects each landmark on the hand). I'm not very familiar with the ML area, and I don't know whether there is no ready-made model for this or whether I just don't know the correct terms to search for.
As an addendum, if anyone knows a way to use GPU acceleration on Windows, that would also work, as I believe it would improve the FPS. Everything I found said this is only possible on Linux, so I gave up and thought about looking for a simpler model that consumes less CPU.
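For what it's worth, one possible workaround (assuming a recent mediapipe release where the Hands solution exposes model_complexity) is to keep using the standard Hands solution but pick the lighter model and only test whether any hand was detected, ignoring the landmark values. It doesn't skip pose estimation entirely, but it is usually noticeably cheaper; the parameter values below are assumptions to tune.

```python
import cv2
import mediapipe as mp

# Lightest configuration of the standard Hands solution; we only check
# *whether* a hand was found and ignore the landmark coordinates.
hands = mp.solutions.hands.Hands(
    model_complexity=0,           # 0 = lighter/faster model, 1 = default
    max_num_hands=1,
    min_detection_confidence=0.5,
)

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    hand_present = results.multi_hand_landmarks is not None
    cv2.putText(frame, f"hand: {hand_present}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("hand", frame)
    if cv2.waitKey(1) & 0xFF == 27:   # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```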
For my MSc thesis I want to apply multi-agent RL to a bus control problem. The idea is that the buses operate on a given line, but without a timetable. The buses serve stops where passengers accumulate over time and pick them up; the longer the interval between buses, the more passengers will be waiting at a stop (on average, since it's a stochastic process). I also want to implement some intersections where buses have to wait for a green light.
I'm not sure yet what my reward function will look like, but it will be something along the lines of keeping the intervals between buses as regular as possible or minimising the total travel time of the passengers.
The agents in the problem will be the buses, but also the traffic lights. The traffic lights can choose when to show a green light for which road; apart from the buses, they will have other demand to process as well. The buses can choose to speed up, slow down, wait longer at a stop, or continue at normal speed.
To put this problem into an RL framework I will need an environment and suitable RL algorithms. Ideally I would have a flexible simulation environment to re-create my case-study bus line and connect it to off-the-shelf RL algorithms. However, so far I haven't found this, which means I may have to connect a simulation environment to something like an OpenAI Gym interface myself.
Does anyone have advice on which simulation environment may be suitable, and whether it's possible to connect it to off-the-shelf RL algorithms?
I feel most comfortable programming in Python, but other languages are an option as well (though this would mean considerable extra effort on my side).
So far I have found the following simulation environments that may be suitable:
NetLogo
SimPy
Mesa
MATSim (https://www.matsim.org)
Matlab
CityFlow (https://cityflow-project.github.io/#about)
Flatland (https://www.aicrowd.com/challenges/neurips-2020-flatland-challenge/)
For the RL algorithms the options seem to be:
Code them myself
Create the environment according to the OpenAI gym API guidelines and use the OpenAI baselines algorithms.
I would love to hear some suggestions and advice on which environments may be most suitable for my problem!
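For concreteness, here is a minimal sketch of what the classic OpenAI Gym interface looks like (the pre-gymnasium reset/step signatures), with a made-up single-bus observation and action space; the spaces, arrival process, and reward below are placeholders, not a proposed model. A true multi-agent setup (one agent per bus and per traffic light) would additionally need a multi-agent wrapper such as RLlib's MultiAgentEnv.

```python
import gym
import numpy as np
from gym import spaces

class BusLineEnv(gym.Env):
    """Toy single-agent sketch: one bus choosing slow / normal / fast."""

    def __init__(self, n_stops=10):
        super().__init__()
        self.n_stops = n_stops
        # Observation: number of waiting passengers per stop (placeholder choice).
        self.observation_space = spaces.Box(0.0, np.inf, shape=(n_stops,), dtype=np.float32)
        # Action: 0 = slow down, 1 = normal speed, 2 = speed up.
        self.action_space = spaces.Discrete(3)
        self.waiting = np.zeros(n_stops, dtype=np.float32)

    def reset(self):
        self.waiting = np.zeros(self.n_stops, dtype=np.float32)
        return self.waiting.copy()

    def step(self, action):
        # Passengers arrive stochastically; the reward penalises total waiting.
        self.waiting += np.random.poisson(0.2, size=self.n_stops).astype(np.float32)
        reward = -float(self.waiting.sum())
        done = False   # a real model would define episode termination
        return self.waiting.copy(), reward, done, {}
```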
You can also check SUMO as a traffic simulator and the RLlib library for multi-agent reinforcement learning.
I'm currently working on a small startup for some extra cash. I'm using Qt 5.13, and my aim is to develop a small camera with functions like dimensional measurement based on the lens and height, edge detection, and that sort of thing; these I will be developing in Python with the use of OpenCV.
Anyway, my question is this, before I dive in too deep to go back: is it possible to use Qt to run a (Pi) camera fullscreen, with no edges, and just have a small transparent button in a corner for the settings? This is for the sake of the UX; I wouldn't like to have borders or to need to cut the screen size to add features.
In Qt, all cameras are treated the same, so you can prototype it on your PC first and it should work on the RPi. Using QML, it should work just fine: QML is a compositing framework that uses the GPU for composition, and the RPi 4 has plenty of GPU bandwidth to deal with it. QML supports semitransparent controls.
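As a rough illustration in Python (using plain PyQt5 widgets rather than the QML route suggested above, and assuming the camera is exposed to Qt, e.g. via the V4L2 driver on the Pi): a borderless fullscreen viewfinder with a small semi-transparent settings button overlaid in a corner. Widget choices and style values are assumptions.

```python
import sys
from PyQt5.QtWidgets import QApplication, QPushButton
from PyQt5.QtMultimedia import QCamera
from PyQt5.QtMultimediaWidgets import QCameraViewfinder

app = QApplication(sys.argv)

# Borderless, fullscreen viewfinder for the default camera device.
viewfinder = QCameraViewfinder()
camera = QCamera()              # default camera; on the Pi this assumes a V4L2-visible device
camera.setViewfinder(viewfinder)
camera.start()
viewfinder.showFullScreen()

# Small semi-transparent settings button overlaid in the top-left corner.
button = QPushButton("Settings", viewfinder)
button.setGeometry(10, 10, 90, 40)
button.setStyleSheet("background-color: rgba(255, 255, 255, 60); border: none;")
button.show()

sys.exit(app.exec_())
```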
You may wish to see various augmented reality (AR) measurement applications available for iOS and Android (even just the Ruler included in iOS 12). You might be entering a crowded market. Those apps are not perfect, and there are simple cases that throw them off - like measuring the size of a window on a large flat wall on the side of a long but narrow room - there's too much bloom and not enough detail on the wall to have a stable depth reference, even on the best iPhone available.
If you can write software that is extremely robust, then you'll have a real market differentiator - but it won't generally be easy, and OpenCV is only a low-level building block. It's not unthinkable that you'll need some GPU-oriented computational framework instead (OpenCV provides some of it, but it's far from general).
Also, 99% of the UX will be the software, and that software should be very much portable by design, so investing anything in hardware before your software is good is a waste. Just as you suggest, an RPi 4 will do great as prototype hardware, but there's a catch: you may be limiting yourself unnecessarily by tying it all to one platform. There are so many platforms out there that settling on the RPi when there's no market need for it doesn't seem sensible to me.
You could use one of a multitude of WiFi battery-powered cameras with your PC: this will let you concentrate on the algorithms and functionality without having to mess with cross-compilation for RPi, etc. It'll also let you develop good software even if an RPi won't have enough bandwidth to do this realtime processing. There are faster platforms, so it'd be best not to get invested in any hardware at all. The quality of the camera will matter a lot, though, so you will want to start with a good WiFi camera, get things perfect, and then downgrade and see how far you can go. Even professional cameras provide WiFi streaming, so you can use a camera as good as you can afford. It will make things simpler to start with.
Also, don't spend much time on the UI before you get the core functionality solid. You'll be designing a "Debug" UI, and you should perhaps keep that one available, but hidden, in the final product.
I am learning SUMO from the beginning; I have read and worked through most of the tutorials at http://sumo.dlr.de/wiki/Tutorials . What I want to do now is make cars slow down when there is traffic on a road. I only know how to change the speed limit after a certain time, from here: http://sumo.dlr.de/wiki/Simulation/Variable_Speed_Signs . Do you know how I can change the speed limit when there is traffic? I think changing the value of the speed signs is the best idea here, but I don't know how to do it.
There are no event-triggered speed signs in SUMO, so you probably need to do that via TraCI. The easiest way is probably to start with the TraCI tutorial for traffic lights and then use functions like traci.edge.getLastStepOccupancy to find out whether vehicles are on the edge in question and traci.edge.setMaxSpeed to set a new speed.
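A rough sketch of that control loop; the configuration file name, edge ID, occupancy threshold, and speeds are placeholders for your own network, and the exact scale of the occupancy value should be checked against the TraCI documentation.

```python
import traci

# Start SUMO headless with a hypothetical configuration file.
traci.start(["sumo", "-c", "my_network.sumocfg"])

EDGE_ID = "edge_1"           # placeholder edge to supervise
NORMAL_SPEED = 13.9          # m/s (~50 km/h)
REDUCED_SPEED = 5.6          # m/s (~20 km/h)
OCCUPANCY_THRESHOLD = 0.3    # "there is traffic" cut-off; tune to your network and value scale

while traci.simulation.getMinExpectedNumber() > 0:
    traci.simulationStep()
    # Occupancy of the edge during the last simulation step.
    occupancy = traci.edge.getLastStepOccupancy(EDGE_ID)
    if occupancy > OCCUPANCY_THRESHOLD:
        traci.edge.setMaxSpeed(EDGE_ID, REDUCED_SPEED)
    else:
        traci.edge.setMaxSpeed(EDGE_ID, NORMAL_SPEED)

traci.close()
```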
I have a small 12 volt board camera that is placed inside a bee hive. It is lit with infrared LEDs (bees can't see infrared). It sends a simple NTSC signal along a wire to a little TV monitor I have. This allows me to see the inside of the hive, without disturbing the bees.
The queen has a dot on her back such that it is very obvious when she's in the frame.
I would like to have something processing the signal such that it registers when the queen is in the frame. This doesn't have to be very accurate. Instead of processing the video continuously, it would be just as fine to take an image every 10 seconds and check whether there is a certain amount of brightness (indicating that the queen is in frame).
This is useful since it helps bee keepers know if the queen is alive (if she didn't appear for a number of days it could mean something is wrong).
I would love to hear suggestions for inexpensive ways of processing this video, especially with low power consumption. Raspberry Pi? Arduino?
Camera example:
here
Sample video (no queen in frame):
here
First off, great project. I wish I was working on something this fun.
The obvious solution here is OpenCV, which will run on both Raspberry Pi (Linux) and the Android platform but not on an Arduino as far as I know. (Of the two, I'd go with Raspberry Pi to start with, since it will be less particular in how you do the programming.)
As you describe it, you may be able to get away with less robust image-processing tools, but these problems are rarely as easy as they seem at first. For example, it seems to me that the brightest spot in the video is (what I guess to be) the illuminating diode reflecting off the glass. But if it's not this, it will be something else, so don't start the project with your hands tied behind your back. And if it can't be done with OpenCV, it probably can't be done at all.
Raspberry Pi computers are about $50, OpenCV is free, so I doubt you'll get much cheaper than this.
In case you haven't done something like this before, I'd recommend not programming OpenCV directly in C++ for something that's exploratory like this, and not very demanding either. Instead, use, for example, the Python bindings so you can explore the images interactively.
You also asked about Arduino, and I don't think this is such a good choice for this type of project. First, you'd need extra hardware, like a video shield (e.g., http://nootropicdesign.com/ve/), adding to the expense. Second, there aren't good image processing libraries for the Arduino, so you'd be doing everything from scratch. Third, generally speaking, debugging a microcontroller program is more difficult.
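To make the "take an image every 10 seconds and check the brightness" idea concrete, here is a minimal Python/OpenCV sketch; the capture device index, brightness threshold, and pixel count are assumptions to tune against your own footage.

```python
import time
import cv2

CAPTURE_INDEX = 0        # index of the capture device (e.g. a USB NTSC grabber)
BRIGHT_THRESHOLD = 220   # pixel value considered "very bright" (0-255, to tune)
MIN_BRIGHT_PIXELS = 500  # how many bright pixels count as "queen in frame" (to tune)
INTERVAL_S = 10          # seconds between checks

cap = cv2.VideoCapture(CAPTURE_INDEX)

while True:
    ok, frame = cap.read()
    if not ok:
        time.sleep(1)
        continue
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    bright_pixels = int((gray >= BRIGHT_THRESHOLD).sum())
    if bright_pixels >= MIN_BRIGHT_PIXELS:
        # Log the sighting; the timestamps let you check when she was last seen.
        print(time.strftime("%Y-%m-%d %H:%M:%S"), "queen probably in frame:", bright_pixels)
    time.sleep(INTERVAL_S)
```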
I don't have a good answer about image processing, but I know how to make it much easier. When you mark the queen, throw some retro-reflecting beads on the paint to get a much higher light return.
I think you can simply mix the beads in with your paint -- use 1 part beads to 3 parts paint by volume. That said, I think you'll get better results if you pour beads onto the surface of the wet paint when marking the queen. I'd pour a lot of beads on to ensure some stick (you can do it over a bowl or bag to catch all the extra beads).
I suggest doing some tests before marking the queen -- I've never applied beads before, but I've worked with retroreflective tape and paint, and it will give you a significantly higher light return. How much higher depends strongly on the setup (i.e., I don't have a number), but I'm guessing at least 2-5 times more light -- enough that your camera will saturate when it sees the queen at the current exposure settings. If you set a trigger on saturation of some threshold number of pixels (making sure few pixels saturate normally), this should give you a very good signal-to-noise ratio that will vastly simplify the image processing.
[EDIT]
I did a little more digging, and there are some important parameters to consider. First, at an index of refraction of 1.5 (the beads I'd linked before), the beads won't focus light onto their back surface and retro-reflect; they'll just act like lenses. They'll probably sparkle and reflect a bit, but you might be better off just adding glitter to the paint.
You can get VERY highly reflective tape that has the right kind of beads AND has a reflective coating on the back of the beads to reflect vastly more light! You'll have to figure out how to glue a bit of tape to a queen to use it, but it might be the best reflection you can get.
http://www.amazon.com/3M-198-Scotch-Reflective-Silver/dp/B00004Z49Q
You can also try the beads I recommended earlier with an index of refraction of 1.5. I'd be sure to test it on paper against glitter to make sure you're not wasting your time.
http://www.colesafety.com/Reflective-Powder-Glass-Beads-GSB10Powder.htm
I'm having trouble finding a source for 1lb or less glass beads with 1.9+ refractive index. I'll do more searching and I'll let you know if I find a decent source of small quantities.