I'm currently working on a small startup for some extra cash. I'm using Qt 5.13, and my aim is to develop a small camera with functions like dimensional measurement based on the lens and height, or edge detection, and that sort of thing. These I will be developing in Python with the use of OpenCV.
Anyway, my question is this, before I dive in too deep to go back: is it possible to use Qt to run a (Pi)camera fullscreen, with no edges, and just have a small transparent button in a corner for the settings? This is for the sake of the UX; I wouldn't like to have borders or to need to cut down the screen size to add features.
In Qt, all cameras look the same to your code, so you can prototype on your PC first and it should then work on the RPi. Using QML it should work just fine - QML is a compositing framework that uses the GPU for composition, and the RPi 4 has plenty of GPU bandwidth to deal with it. QML supports semitransparent controls.
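For a feel of what that looks like, here's a minimal sketch, assuming PyQt5 with the QtMultimedia QML module (prototyped on a desktop; the QML part is what carries over): a fullscreen, borderless viewfinder with a semitransparent settings button in one corner. The button label and sizes are placeholders.

    import sys
    from PyQt5.QtGui import QGuiApplication
    from PyQt5.QtQml import QQmlApplicationEngine

    # Fullscreen camera viewfinder with a semitransparent corner button.
    QML = b"""
    import QtQuick 2.12
    import QtQuick.Window 2.12
    import QtMultimedia 5.12

    Window {
        visible: true
        visibility: Window.FullScreen        // borderless, edge to edge
        color: "black"

        Camera { id: camera }

        VideoOutput {
            anchors.fill: parent             // viewfinder fills the screen
            source: camera
            fillMode: VideoOutput.PreserveAspectCrop
        }

        Rectangle {                          // semitransparent settings button
            width: 64; height: 64; radius: 32
            anchors { right: parent.right; bottom: parent.bottom; margins: 24 }
            color: "#80000000"               // 50% alpha black
            Text { anchors.centerIn: parent; text: "set"; color: "white" }
            MouseArea { anchors.fill: parent; onClicked: console.log("settings") }
        }
    }
    """

    app = QGuiApplication(sys.argv)
    engine = QQmlApplicationEngine()
    engine.loadData(QML)
    sys.exit(app.exec_())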
You may wish to see various augmented reality (AR) measurement applications available for iOS and Android (even just the Ruler included in iOS 12). You might be entering a crowded market. Those apps are not perfect, and there are simple cases that throw them off - like measuring the size of a window on a large flat wall on the side of a long but narrow room - there's too much bloom and not enough detail on the wall to have a stable depth reference, even on the best iPhone available.
If you can write software that is extremely robust, then you'll have a real market differentiator - but it won't generally be easy, and OpenCV is only a low-level building block. It's not unthinkable that you'll need some GPU-oriented computational framework instead (OpenCV provides some of it, but it's far from general).
Also, 99% of the UX will be the software, and that software should be very much portable by design, so investing anything in hardware before your software is good is a waste. Just as you suggest, an RPi 4 will do great as prototype hardware - but there's a catch: you may be limiting yourself unnecessarily by tying it all to one platform. There are so many platforms that settling on the RPi when there's no market need for it is not sensible, I don't think.
You could use one of a multitude of WiFi battery-powered cameras with your PC: this will let you concentrate on the algorithms and functionality without having to mess with cross-compilation for the RPi, etc. It'll also let you develop good software even if an RPi doesn't have enough bandwidth for this realtime processing. There are faster platforms, so it'd be best not to get invested in any hardware at all. The quality of the camera will matter a lot, though, so you'll want to start with a good WiFi camera, get things perfect, and then downgrade and see how far you can go. Even professional cameras provide WiFi streaming, so you can use a camera as good as you can afford. It will make things simpler to start with.
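For example, OpenCV can usually read such a camera's network stream directly; the RTSP URL below is a placeholder you'd replace with the one from your camera's manual:

    import cv2

    # Placeholder stream URL; most WiFi/IP cameras expose RTSP or HTTP MJPEG.
    cap = cv2.VideoCapture("rtsp://192.168.1.20:554/stream1")
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("camera", frame)     # later: run your measurements on frame
        if cv2.waitKey(1) == 27:        # Esc quits
            break
    cap.release()
    cv2.destroyAllWindows()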
Also, don't spend much time on the UI before you get the core functionality solid. You'll be designing a "Debug" UI, and you should perhaps keep that one available but hidden in the final product.
I am doing a smart RC car demo with a Raspberry Pi 3B, some sensors, and actuators.
Basically, the car should run autonomously in a controlled indoor environment: move autonomously while tracking a line on the ground, detect and avoid an obstacle in its way, take a picture of the obstacle, etc.
My first idea for the architecture looks like the one below (you can ignore the camera part):
However, I think using a file in the middle to communicate between different processes is far from optimal.
As the sensors run at different frequencies, I think multiprocessing should probably solve the problem.
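For illustration, a minimal sketch of that idea with multiprocessing queues instead of a shared file (the sensor names and rates here are made up):

    import multiprocessing as mp
    import random
    import time

    def sensor_worker(name, period, queue):
        # Poll one sensor at its own rate; push readings to the shared queue.
        while True:
            reading = random.random()            # stand-in for a real sensor read
            queue.put((name, time.time(), reading))
            time.sleep(period)

    def controller(queue):
        # Single consumer: reacts to whichever sensor reported last.
        while True:
            name, ts, reading = queue.get()      # blocks until a reading arrives
            print("%s -> %.3f" % (name, reading))

    if __name__ == "__main__":
        q = mp.Queue()
        mp.Process(target=sensor_worker, args=("line", 0.05, q), daemon=True).start()
        mp.Process(target=sensor_worker, args=("sonar", 0.2, q), daemon=True).start()
        controller(q)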
I did some searching but am still not clear on how to architect it with multiprocessing.
Any advice would be really appreciated.
Best regards,
I have a series of points (x, y, z) that I would like to plot as vectors in 3D. Something like this.
I am successfully using QCustomPlot elsewhere, but the documentation says it cannot be used for 3D plots. Googling turned up QwtPlot3D, but it hasn't been maintained since 2007, as far as I can tell, and I don't want to run into any problems since I'm using Qt5. I was also looking at QtCharts but can't seem to find any example of plotting x,y,z data points.
Does anyone have tips for including a 3D graph in my C++/Qt application? Is there a tool that would work better with Python with Qt, rather than C++? Or another technology entirely? This graph will be part of a larger UI.
This might help, though I haven't used it:
http://doc.qt.io/QtDataVisualization/
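For what it's worth, a minimal sketch of an (x, y, z) scatter with that module, here from Python (assuming the PyQt5 QtDataVisualization add-on is installed; the C++ API mirrors it):

    import sys
    from PyQt5.QtWidgets import QApplication, QWidget
    from PyQt5.QtGui import QVector3D
    from PyQt5.QtDataVisualization import (Q3DScatter, QScatter3DSeries,
                                           QScatterDataItem)

    app = QApplication(sys.argv)

    graph = Q3DScatter()                         # the 3D scatter "window"
    series = QScatter3DSeries()
    for x, y, z in [(0, 0, 0), (1, 2, 3), (2, 1, 4), (3, 3, 1)]:
        series.dataProxy().addItem(QScatterDataItem(QVector3D(x, y, z)))
    graph.addSeries(series)

    # Embed the Q3DScatter window in a widget so it can sit inside a larger UI.
    container = QWidget.createWindowContainer(graph)
    container.resize(600, 400)
    container.show()
    sys.exit(app.exec_())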
Spend a little time looking into OpenGL. To display OpenGL scenes in Qt you would use QGLWidget (for Qt 4.x) or QOpenGLWidget (for Qt 5.x). OpenGL allows you to write graphics that run on a GPU card, meaning you can tap into the same horsepower used for 3D video games. Given time and inclination, you can build up a good 3D graphics library.
https://www.opengl.org/
http://doc.qt.io/qt-5/qopenglwidget.html
The Qt tutorials can help, but you'll also want to read other OpenGL tutorials. Here are some tutorials targeting older versions of Qt:
ftp://ftp.informatik.hu-berlin.de/pub1/Mirrors/ftp.troll.no/QT/pub/developerguides/qtopengltutorial/OpenGLTutorial.pdf
http://www.decom.ufop.br/sibgrapi2012/eproceedings/tutorials/t3-survey_paper.pdf
Tutorials tend to start with "immediate mode" examples, meaning the CPU is continually involved with updating data and writing that data to the GPU. As soon as you grasp the basics you'll want to implement "retained mode" code, meaning (very loosely) that the GPU manages the data and the need for CPU resources is minimized.
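As a taste of the immediate-mode starting point, here's a minimal sketch in Python with PyQt5 and PyOpenGL (my choice of bindings; in C++ a QOpenGLWidget subclass looks the same):

    import sys
    from PyQt5.QtWidgets import QApplication, QOpenGLWidget
    from OpenGL.GL import (glBegin, glClear, glClearColor, glColor3f, glEnd,
                           glPointSize, glVertex3f, GL_COLOR_BUFFER_BIT,
                           GL_DEPTH_BUFFER_BIT, GL_POINTS)

    POINTS = [(0.0, 0.0, 0.0), (0.5, 0.5, 0.0), (-0.5, 0.25, 0.5)]

    class ScatterWidget(QOpenGLWidget):
        def initializeGL(self):
            glClearColor(0.1, 0.1, 0.1, 1.0)

        def paintGL(self):
            # Immediate mode: the CPU feeds every vertex to the GPU on each
            # repaint. Fine for a first experiment on a default (compatibility)
            # context; move to retained mode (vertex buffers) once it works.
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
            glColor3f(1.0, 0.8, 0.2)
            glPointSize(6.0)
            glBegin(GL_POINTS)
            for x, y, z in POINTS:
                glVertex3f(x, y, z)
            glEnd()

    app = QApplication(sys.argv)
    w = ScatterWidget()
    w.resize(600, 400)
    w.show()
    sys.exit(app.exec_())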
All that said, getting into OpenGL is a commitment. If you want the user to be able to change the viewpoint of the chart, or zoom in/out, or mouse over a plot to check individual values, etc., then it will take some time to implement. For a standard that's so widely used, it's odd that the documentation and available textbooks aren't better -- don't expect to find the OpenGL textbook equivalent of Kernighan & Ritchie or the Perl camel book.
There may be some Qt 3D graphing project somewhere that enjoys active development, and with luck maybe some other SO user will know about one.
I'm working on a basic Tkinter image viewer. After fiddling with a lot of the code, I have two lines which, depending on the operation that triggered the refresh, account for 50-95% of the execution time.
self.photo = ImageTk.PhotoImage(resized)   # wrap the resized PIL image for Tk
self.main_image.config(image=self.photo)   # swap it into the displaying widget
Is there a faster way to display a PIL/Pillow Image in Tkinter?
I would recommend testing the 4000x4000 images you're concerned about (roughly twice the pixel count of a 4K monitor). Use a Google advanced search to find such images, or use a photo/image editor to tile an image you already have. Since I don't think many people would connect a 4K (or better) monitor to a low-end computer, you could then test the difference between scaling down a large image and simply displaying it, so if most of the work is in resizing the image you don't have to worry as much about that part.
Next, test the individual performance of each of the two lines you posted. You might try implementing some kind of intelligent pre-caching, which many programs do: resize and create the next photo as the user is looking at the current one, then when the user goes to the next image all the program has to do is reconfigure self.main_image. You can see this general strategy at work in the standard Windows Photo Viewer, which responds apparently instantaneously to normal usage, but can have noticeable lag if you browse too quickly or switch your browsing direction.
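A rough sketch of that pre-caching idea (the class and names are hypothetical; assumes Pillow): resize the next image on a worker thread, and only create the PhotoImage on the Tk main thread, since Tkinter objects shouldn't be touched from other threads.

    import threading
    from PIL import Image

    class Precacher:
        # Resize upcoming images in the background so switching is instant.
        def __init__(self, paths, size):
            self.paths = paths
            self.size = size
            self.cache = {}                      # index -> resized PIL image

        def prefetch(self, index):
            if index in self.cache or not (0 <= index < len(self.paths)):
                return
            def work():
                img = Image.open(self.paths[index])
                self.cache[index] = img.resize(self.size, Image.LANCZOS)
            threading.Thread(target=work, daemon=True).start()

When the user advances, the viewer takes the cached image (falling back to resizing on the spot if the worker hasn't finished), wraps it in ImageTk.PhotoImage, and calls config() - so only the two cheap steps run on the main thread.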
Also ask your target userbase what sort of machines they're using and how large their images are. Remember that reasonable "recommended minimum specs" are acceptable.
Most importantly, keep in mind that the point of Python is clear, concise, readable code. If you have to throw away those benefits for the sake of performance, you might as well use a language that places more emphasis on performance, like C. Make absolutely sure that these performance improvements are an actual need, rather than just "my program might lag! I have to fix that."
I have a small 12 volt board camera that is placed inside a bee hive. It is lit with infrared LEDs (bees can't see infrared). It sends a simple NTSC signal along a wire to a little TV monitor I have. This allows me to see the inside of the hive, without disturbing the bees.
The queen has a dot on her back such that it is very obvious when she's in the frame.
I would like to have something processing the signal such that it registers when the queen is in the frame. This doesn't have to be very accurate. Instead of processing the video, it would be just fine to take an image every 10 seconds and check whether there is a certain amount of brightness (indicating that the queen is in frame).
This is useful since it helps beekeepers know if the queen is alive (if she didn't appear for a number of days, it could mean something is wrong).
I would love to hear suggestions for inexpensive ways of processing this video, especially with low power consumption. Raspberry Pi? Arduino?
Camera example:
here
Sample video (no queen in frame):
here
First off, great project. I wish I was working on something this fun.
The obvious solution here is OpenCV, which will run on both Raspberry Pi (Linux) and the Android platform but not on an Arduino as far as I know. (Of the two, I'd go with Raspberry Pi to start with, since it will be less particular in how you do the programming.)
As you describe it, you may be able to get away with less robust image processing tools, but these problems are rarely as easy as they seem at first. For example, it seems to me that the brightest spot in the video is (what I guess to be) the illuminating diode reflecting off the glass. But if it's not that, it will be something else, so don't start the project with your hands tied behind your back. And if this can't be done with OpenCV, it probably can't be done at all.
Raspberry Pi computers are about $50, OpenCV is free, so I doubt you'll get much cheaper than this.
In case you haven't done something like this before, I'd recommend not programming OpenCV directly in C++ for something that's exploratory like this, and not very demanding either. Instead, use, for example, the Python bindings so you can explore the images interactively.
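To give a feel for it, a minimal sketch of the every-10-seconds brightness check in Python/OpenCV (the device index and both thresholds are guesses to tune against your footage):

    import time
    import cv2

    cap = cv2.VideoCapture(0)          # the NTSC signal via a USB capture dongle

    while True:
        ok, frame = cap.read()
        if ok:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Count pixels brighter than a cutoff; a mark on the queen should
            # push this count well above the background level.
            _, mask = cv2.threshold(gray, 220, 255, cv2.THRESH_BINARY)
            bright = cv2.countNonZero(mask)
            if bright > 500:           # tune on real footage
                print(time.strftime("%H:%M:%S"), "queen candidate:", bright, "px")
        time.sleep(10)                 # one sample every 10 seconds, as you suggest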
You also asked about Arduino, and I don't think this is such a good choice for this type of project. First, you'd need extra hardware, like a video shield (e.g., http://nootropicdesign.com/ve/), adding to the expense. Second, there aren't good image processing libraries for the Arduino, so you'd be doing everything from scratch. Third, generally speaking, debugging a microcontroller program is more difficult.
I don't have a good answer about image processing, but I know how to make it much easier. When you mark the queen, throw some retro-reflecting beads on the paint to get a much higher light return.
I think you can simply mix the beads in with your paint -- use 1 part beads to 3 parts paint by volume. That said, I think you'll get better results if you pour beads onto the surface of the wet paint when marking the queen. I'd pour a lot of beads on to ensure some stick (you can do it over a bowl or bag to catch all the extra beads).
I suggest doing some tests before marking the queen -- I've never applied beads before, but I've worked with retroreflective tape and paint, and it will give you a significantly higher light return. How much higher depends strongly on the setup (i.e. I don't have a number), but I'm guessing at least 2-5 times more light -- enough that your camera will saturate when it sees the queen with the current exposure settings. If you set a trigger on saturation of some threshold number of pixels (making sure few pixels saturate normally), this should give you a very good signal-to-noise ratio that will vastly simplify the image processing.
[EDIT]
I did a little more digging, and there are some important parameters to consider. First, at an index of 1.5 (the beads I'd linked before) the beads won't focus light on the back surface and retro-reflect, they'll just act like lenses. They'll probably sparkle and reflect a bit, but you might be better off just adding glitter to the paint.
You can get VERY highly reflective tape that has the right kind of beads AND has a reflective coating on the back of the beads to reflect vastly more light! You'll have to figure out how to glue a bit of tape to a queen to use it, but it might be the best reflection you can get.
http://www.amazon.com/3M-198-Scotch-Reflective-Silver/dp/B00004Z49Q
You can also try the beads I recommended earlier with an index of refraction of 1.5. I'd be sure to test it on paper against glitter to make sure you're not wasting your time.
http://www.colesafety.com/Reflective-Powder-Glass-Beads-GSB10Powder.htm
I'm having trouble finding a source for 1lb or less glass beads with 1.9+ refractive index. I'll do more searching and I'll let you know if I find a decent source of small quantities.
I'm starting up game programming again. 10 years ago I was making games in QBasic, and I haven't done any game programming since, so I am quite rusty. I have been programming all the time, though; I am a web developer/DBA/admin now. I have several questions, but I'm going to limit it to one per post.
The game I am working on is going to have a large, very large world. It is going to be somewhat like URW, but with an even larger world and more like an 'RPG'.
What I have been trying to decide is the best way to lay out the map, save it, and access it. I thought up the idea of using SQLite to store the data. I could then even use the SQLite db as the save file for the game, nice and easy.
Anyone have any tips about how I should go about this or ideas for other storage methods?
Here are the requirements for my game:
I need full random access to any spot in the game world (the NPCs, monsters, and animals will all be active all the time).
I'm using Stackless Python 3.1, so options are quite limited unless I do a lot of work.
Needs to be able to handle a very large world.
Concurrency support would be a plus, but I don't think I will need it.
Don't mess with relational databases unless you're forced to use them by external factors.
Look at Python's pickle and shelve modules.
Shelve is fast and scales well. It eliminates messy conversion between Python and non-Python representation.
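A minimal sketch of shelve as the save file (the key scheme is made up; plain open/close rather than a with-block, since Shelf only became a context manager after Python 3.1):

    import shelve

    # Locations keyed by coordinates; values are ordinary Python objects.
    world = shelve.open("savegame")
    world["loc:10,42"] = {"terrain": "forest", "items": ["axe"], "npcs": []}
    world.close()

    world = shelve.open("savegame")
    print(world["loc:10,42"]["terrain"])   # -> forest
    world.close()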
Edit.
More important advice. Do not get bogged down in technology choices. Get the locations, items, characters, rules, etc. to work. In Python. As simply and correctly as possible.
Do not burn a single brain calorie on anything but core model, correctness, and a basic feature set to prove things work.
Once you have a model that actually works, and you can exercise it with some sophisticated unit tests, then you can make technology choices.
Once you have a model, you can meaningfully scale it up to millions of locations and see what kind of storage is required. The model can't change -- it's the essence of the application. Only the access layer and persistence layer can change to adjust performance.
It sounds like what you're asking for is a type of spatial index. For a very large 2D game I'd recommend using a quadtree. A quadtree works well when you have a large area and activity tends to happen in localized regions of it, which is the case for most RPG-type games. It will keep your storage requirements low and hopefully speed up collision detection as well.
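To make that concrete, a toy point-quadtree sketch in Python (illustrative only; the node capacity and coordinate scheme are arbitrary):

    class Quadtree:
        # Toy point quadtree: items live in leaves, leaves split when full.
        MAX_ITEMS = 8

        def __init__(self, x, y, size):
            self.x, self.y, self.size = x, y, size   # square region, top-left corner
            self.items = []                          # (px, py, payload) tuples
            self.children = None                     # four sub-quadrants once split

        def insert(self, px, py, payload):
            if self.children is not None:
                self._child_for(px, py).insert(px, py, payload)
                return
            self.items.append((px, py, payload))
            if len(self.items) > self.MAX_ITEMS and self.size > 1:
                self._split()

        def _split(self):
            h = self.size / 2.0
            self.children = [Quadtree(self.x, self.y, h),
                             Quadtree(self.x + h, self.y, h),
                             Quadtree(self.x, self.y + h, h),
                             Quadtree(self.x + h, self.y + h, h)]
            items, self.items = self.items, []
            for px, py, payload in items:
                self._child_for(px, py).insert(px, py, payload)

        def _child_for(self, px, py):
            h = self.size / 2.0
            i = (1 if px >= self.x + h else 0) + (2 if py >= self.y + h else 0)
            return self.children[i]

        def query(self, qx, qy, qsize, out=None):
            # Collect payloads of points inside the square (qx, qy, qsize).
            out = [] if out is None else out
            if (qx > self.x + self.size or qx + qsize < self.x or
                    qy > self.y + self.size or qy + qsize < self.y):
                return out                           # no overlap with this node
            for px, py, payload in self.items:
                if qx <= px <= qx + qsize and qy <= py <= qy + qsize:
                    out.append(payload)
            if self.children is not None:
                for child in self.children:
                    child.query(qx, qy, qsize, out)
            return out

    # world = Quadtree(0, 0, 1 << 20)
    # world.insert(100, 200, "goblin")
    # world.query(0, 0, 512)  ->  ["goblin"]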
As for saving the game, things like player and monster stats can go into a database, if you're worried about those changing often. For the actual level layout I'd recommend using a binary file format specific to your game. There aren't many database-type queries you usually need to perform on the level layout, and you can make great optimizations using your own format. I wouldn't know how to begin storing a quadtree-like format in a database (although I'm sure it's possible).
I am using a non-relational database to store big amounts of data. If you can work on 64-bit hardware, MongoDB with its Python driver is really very good. I do not know if this is OK with Stackless, but it is a possibility.
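A minimal sketch of that with pymongo (assumes a recent pymongo and a mongod running locally; the database, collection, and field names are made up):

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    locations = client.game.locations        # database "game", collection "locations"

    locations.insert_one({"_id": "10,42", "terrain": "forest", "items": ["axe"]})
    print(locations.find_one({"_id": "10,42"})["terrain"])   # -> forest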