How to draw a picture like this using Python? - python

I am working on a project which scrapes Steam sales data and generates an infographic based on the data I scraped. I have finished the data scraping part, including the pictures, and am now working on the infographic.
The scraped data looks like this:
[Neon Chrome,-71%,¥ 14,86.79%,4 days, Neon Chrome is a cyberpunk-themed twin-stick shooter video game played from a top-down perspective. The player takes control of remote controlled human clones and is tasked with eliminating the Overseer to stop their oppressive regime on the dystopic society.]
[Crossing Souls,-75%,¥ 12,78.79%,1 day,11 hours ago,1 year ago,1,331690, Crossing Souls is a inventive thrill ride that embraces clever, varied gameplay and heartfelt storytelling to coalesce into a gem of a game.]
Ideally I want to reorganize the text and generate the infographic from scratch like this (some info is just for showcase):
However, I really do not know how to generate such a picture using Python or other automated methods. I found several libraries like Pillow which have basic APIs to import pictures and draw lines and text, but they cannot handle the layout properly, since my infographic could have any number of cards and the text length may vary, so I cannot hardcode the text or image to appear at fixed (x, y) positions.
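One way around hardcoding coordinates is to compute each card's position from the measured size of its own content. Below is a minimal sketch of that layout logic in plain Python; the card width, line height, wrapping width, and padding are all assumed values, and with Pillow you would replace the character-count wrapping with real text measurements from `ImageDraw.textbbox`:

```python
import textwrap

CARD_WIDTH = 400       # assumed card width in pixels
CHARS_PER_LINE = 48    # assumed characters that fit on one line
LINE_HEIGHT = 18       # assumed pixel height of one text line
PADDING = 12           # inner padding of each card
GAP = 10               # vertical gap between cards

def layout_cards(descriptions):
    """Return (x, y, w, h) rectangles, stacked vertically,
    with each card's height derived from its wrapped text."""
    rects = []
    y = 0
    for text in descriptions:
        lines = textwrap.wrap(text, CHARS_PER_LINE) or [""]
        h = 2 * PADDING + len(lines) * LINE_HEIGHT
        rects.append((0, y, CARD_WIDTH, h))
        y += h + GAP
    return rects

rects = layout_cards([
    "Neon Chrome is a cyberpunk-themed twin-stick shooter.",
    "Crossing Souls is an inventive thrill ride that embraces "
    "clever, varied gameplay and heartfelt storytelling.",
])
```

Because each card's y position is the running total of the heights above it, a card with a longer description simply pushes everything below it down, so no position ever needs to be hardcoded.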

Related

OpenCV Reflective Surface Problem (Pre-Process Text from Digital Screen)

I'm working on a machine learning application for reading data from fuel pumps. So far I've gone ahead and created a pretty robust YOLOv5 object detection model that can detect the regions I want fairly accurately. But there is a problem: at certain times of the day there are reflections on the digital screen, and I'm unable to use OpenCV to pre-process it so that I can extract the numbers from the display.
Check this Video to Understand (YOLOv5 Detection)
https://www.youtube.com/watch?v=3XjZ6Nw70j8
Minimal Reproducible Example
Cars come and go, and their reflections make it really difficult to differentiate between the regions for the digital-7 font used in these displays. You can check out the following repository to understand what I want as a result: https://github.com/arturaugusto/display_ocr
Other solutions I'm open to:
Since this application is going to run 24/7, how should I deal with different times of day? Perhaps create a database of HSV ranges to extract at different times.
Would using a polarizing lens help in removing the reflections (any users who have had previous experience deploying them)?
Edit: I added the correct video ...
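The time-of-day idea above can be sketched as a simple lookup table. This is a minimal sketch with made-up HSV ranges and time buckets; the real ranges would have to be calibrated from frames captured at each time of day:

```python
# Hypothetical HSV threshold ranges keyed by time-of-day bucket,
# as (lower, upper) bounds in OpenCV's H:0-179, S:0-255, V:0-255 convention.
HSV_RANGES = {
    "night":   ((0, 0, 180), (179, 60, 255)),   # bright digits, dark scene
    "morning": ((0, 0, 140), (179, 80, 255)),
    "midday":  ((0, 0, 100), (179, 120, 255)),  # strongest reflections
    "evening": ((0, 0, 140), (179, 80, 255)),
}

def bucket_for_hour(hour):
    """Map an hour (0-23) to a lighting bucket (assumed boundaries)."""
    if hour < 6 or hour >= 21:
        return "night"
    if hour < 11:
        return "morning"
    if hour < 17:
        return "midday"
    return "evening"

def hsv_range_for_hour(hour):
    return HSV_RANGES[bucket_for_hour(hour)]

lower, upper = hsv_range_for_hour(13)  # midday range
```

The selected `(lower, upper)` pair would then be passed to something like `cv2.inRange` on the HSV-converted frame to mask out the digits.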

Project problem, looking for advice about image processing

I'm a senior in high school, and this year I have to do a project for my electronics class. I was hoping to get some advice from people with some experience.
My idea is kind of complicated and uses a lot of different sensors, but nothing too crazy. The problem begins with the possible image processing. I have a camera that needs to check for flashing light and send the video to a screen without the frames of the flashing (just skipping those frames, so the video is always one frame behind, but the person won't notice it).
The flashing light is supposed to be like at a party, or like a video game you get a warning about. The idea is to notice the extreme change in lighting and not show it on the screen.
My teacher is afraid that doing image processing, and video processing as well, might be too complicated... I don't have any knowledge of it, and I have a little background in Python and other languages. Do you think it is possible? Can anyone give me advice or a good video/tutorial to learn from?
Thank you in advance:)
Your problem is quite difficult, because it involves an unknown environment over a dynamic time range.
If you assume, as a given, that your camera has a frame rate of, for example, 20 FPS, the difference between frame f and the next frame f+1 is usually quite low,
UNLESS there is a huge color change due to a light flash.
So you can process the frames with an image similarity metric such as SSIM:
https://www.pyimagesearch.com/2017/06/19/image-difference-with-opencv-and-python/
If the difference between frames exceeds a certain threshold that you have to define (you can also use a Kalman filter to dynamically readjust the threshold),
it probably means that the flashing light is on.
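A minimal sketch of that thresholding idea, using a plain mean-absolute-difference between consecutive grayscale frames instead of SSIM (frames are toy nested lists of 0-255 values here, and the threshold is an assumed value you would tune on real footage):

```python
FLASH_THRESHOLD = 40.0  # assumed; tune on real footage

def mean_abs_diff(frame_a, frame_b):
    """Mean absolute pixel difference between two grayscale frames."""
    total, count = 0, 0
    for row_a, row_b in zip(frame_a, frame_b):
        for pa, pb in zip(row_a, row_b):
            total += abs(pa - pb)
            count += 1
    return total / count

def drop_flash_frames(frames):
    """Yield frames, skipping any frame that differs too much
    from the last shown frame (the suspected flash)."""
    shown = frames[0]
    yield shown
    for frame in frames[1:]:
        if mean_abs_diff(shown, frame) < FLASH_THRESHOLD:
            shown = frame
            yield shown
        # else: skip the flash frame, keep showing the last good one

dark = [[10, 10], [10, 10]]
flash = [[250, 250], [250, 250]]
kept = list(drop_flash_frames([dark, dark, flash, dark]))
```

With real video you would compute the same kind of score with `skimage.metrics.structural_similarity` on frames read from OpenCV, but the skip-or-show logic stays the same.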
Although it's a visual programming environment, Bonsai is a great open-source software for doing what's in your description. Bonsai also supports applications that require combinations of different hardware (e.g. microcontrollers, cameras) and software components (e.g. Python).
To provide a similar application as an example: I have set up a workflow where Bonsai captures images sent from a Basler camera and processes the input video frame by frame. When it detects a threshold change in pixel intensity within a cropped frame (which I cropped around a red LED), i.e. the red LED turning ON or OFF, it sends an output signal (5 volts) to an Arduino microcontroller, while saving the image frame as a PNG file and an AVI video file, along with a vector of True/False values (corresponding to the ON/OFF red LED frames) and the corresponding timestamps, saved as CSV files. Although this isn't identical to what you've described, I'm sure you can set up a similar Bonsai workflow to accomplish your goal.
Citation: https://www.frontiersin.org/articles/10.3389/fninf.2015.00007/full
Edit: I'm very familiar with Bonsai, so if you need help setting up a Bonsai workflow I'd be happy to help. I don't think there is direct messaging on Stack Overflow, and Stack Overflow doesn't list Bonsai as a programming language (perhaps because it's a visual programming language, or because it's not well known enough), but feel free to reach out if you have any questions regarding Bonsai specifically (again, it's open-source software).

Algorithms for a line follower robot (with camera) capable of following very sharp turns and junctions

I want to write code (Python, OpenCV) for a line follower robot equipped with a camera and a Raspberry Pi. I want to make the robot go as fast as possible.
The course has a few very sharp turns like this: I'm assuming that using an ROI (region of interest) will not work well when the robot is near the turn (it will also capture/"see" the other line), for example as shown below. What is the best approach here?
In the course there is a junction, as shown in the image below. How do I "understand" that this is a junction? And if the robot is coming from the bottom of the image, how do I make the algorithm continue driving straight and not get confused by the horizontal line?
I can recommend this awesome video on YouTube:
https://www.youtube.com/watch?v=tpwokAPiqfs&t=868s
It contains several episodes and teaches a lot of useful stuff. Good luck!
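For the junction question above, one common trick is to look at the line-pixel count per row of the thresholded image: a junction shows up as a row that is almost entirely line pixels. A minimal sketch on a toy binary image (the 0.5 row-coverage threshold is an assumed value):

```python
JUNCTION_ROW_FRACTION = 0.5  # assumed: a row counts as a horizontal
                             # line if over half its pixels are set

def find_junction_rows(binary):
    """Return indices of rows whose line-pixel coverage suggests
    a horizontal line crossing the image (a junction)."""
    width = len(binary[0])
    return [i for i, row in enumerate(binary)
            if sum(row) / width >= JUNCTION_ROW_FRACTION]

# Toy 6x6 thresholded image: a vertical line in column 2,
# crossed by a horizontal line at row 3 (the junction).
image = [
    [0, 0, 1, 0, 0, 0],
    [0, 0, 1, 0, 0, 0],
    [0, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0, 0],
    [0, 0, 1, 0, 0, 0],
]
junctions = find_junction_rows(image)
```

When a junction row is detected, the steering logic can ignore that band and keep following the centroid of the vertical line in the rows above and below it, so the robot drives straight through.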

Simple animation in Python using wxPython objects

I'm writing a gravity simulator in Python 2.7, and at the moment I have finished the mathematical part. I'm trying to find a way to display the results while the simulation is running. It should consist of some colored circles representing the various bodies and, optionally, some curved lines to represent orbits that can be shown or hidden while the simulation is running.
I pictured a way to obtain this result, but I can't seem to find a way to even start.
My idea is to use wxPython. The window should be divided into four sectors (2x2), the first three contain the simulation viewed in the XY, XZ and YZ planes, while the last contains the controls (start/stop simulation, show/hide orbits, ...).
The controls should not be a problem, I just need a way to display the animation. So how can I display moving circles and curved lines using wxPython objects? Which objects should I use? I don't need much more than a couple names, the rest should follow easily.
I know that an animation purely in wxPython will probably require some multithreading; I'm already prepared for that. I also want to stress that I need the animation to be shown while the simulation is running, not after, because the simulation has no definite end at the moment: I don't know when to stop it if I don't see the results first.
If it's somehow useful, I'm using Ubuntu Linux 17.10.
Edit: Since I was asked to choose one approach, I discarded Matplotlib because it requires two different windows. Hope this helps.

Right approach for speeding up presentation of large amount of data?

First question ever on Stack Overflow. I've been reading for some time now while trying to learn Python and wxPython.
I'm writing a small app for presenting a large amount of data on the screen in a custom way. The data is stock information stored in Python objects. It's about 100 stocks that should be presented on the screen at the same time. Every stock object has 35 attributes, so that makes 3,500 attributes showing at once. And I want different fonts, sizes and colours depending on the attribute. The background of each stock object changes depending on user (me) input.
So I tried making an interface with wxPython and a lot of StaticText controls. It took 5 seconds to load, timed with the timeit module.
Googling gave me the idea to draw the data on a device context instead. That took only 0.1 seconds. To make the app clickable, I draw a second picture into memory with a specific colour for each attribute. When the panel showing the picture is clicked, I compare the coordinates with the DC in memory to calculate what was clicked. And now I am about to write a sizer routine so the user can change the font size.
Well, my question is quite simple: do you think I chose the right approach?
Or is there a simpler, more Pythonic way to do this, without using the StaticText controls that took forever to load?
Grids are not a solution for me, because I want the data presented in a very specific layout. To do that with a grid, I would have to set the grid cells to 2px width and height, and then merge cells all over the place...
edit:
link for downloading a picture of the control as it looked yesterday:
http://dl.dropbox.com/u/10606669/super.png
Ugly and not exactly the way I want it to look, because I'm still trying to write my own sizer routine.
You can try freezing the whole frame during the loading process, like this:
frame.Freeze()
try:
    # load all data
finally:
    frame.Thaw()
In general, though, having that many Window controls will hurt performance, so custom drawing is the only solution. You could simplify things a little by creating your own custom control for one stock (with its own EVT_PAINT handler, etc.) and then creating 100 of them. It should make your DC calculations easier. See Creating Custom Controls for more information.
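The colour-keyed hit-testing described in the question can be sketched in plain Python. Here the in-memory "second picture" is just a 2D grid of colour tuples; in wxPython you would paint the same rectangles into a wx.MemoryDC and read the clicked pixel back with its GetPixel method:

```python
def attr_id_to_colour(attr_id):
    """Encode an attribute id as a unique RGB tuple (up to 2**24 ids)."""
    return ((attr_id >> 16) & 0xFF, (attr_id >> 8) & 0xFF, attr_id & 0xFF)

def colour_to_attr_id(colour):
    """Decode the RGB tuple back into the attribute id."""
    r, g, b = colour
    return (r << 16) | (g << 8) | b

def build_hitmap(rects, width, height):
    """Paint each attribute's rectangle into an off-screen pixel grid,
    one unique colour per attribute (the in-memory second picture)."""
    pixels = [[None] * width for _ in range(height)]
    for attr_id, (x, y, w, h) in enumerate(rects):
        colour = attr_id_to_colour(attr_id)
        for yy in range(y, min(y + h, height)):
            for xx in range(x, min(x + w, width)):
                pixels[yy][xx] = colour
    return pixels

def hit_test(pixels, x, y):
    """Return the clicked attribute id, or None for the background."""
    colour = pixels[y][x]
    return None if colour is None else colour_to_attr_id(colour)

rects = [(0, 0, 50, 20), (60, 0, 50, 20)]   # two attribute cells
pixels = build_hitmap(rects, width=120, height=20)
```

The nice property of this scheme is that hit-testing stays O(1) per click regardless of how many attribute cells are drawn, which matches the 3,500-attribute scale described above.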
