The only additional features I need from StyledTextCtrl are the following:
1. Change caret width using SetCaretWidth(pixels).
2. Change caret colour using SetCaretForeground(colour).
3. Change the entire background colour to transparent (or alpha). I don't know how to do this.
4. Change the font (face and size). I don't know how to do this either.
Other than that, I want it to behave exactly like a normal TextCtrl, i.e. no scrollbars, no multiline, etc. There is a lot of info here, but it is overwhelmingly big! So how much code will I have to write before I shoot myself in the foot?
There's a sample model here, for quick testing.
You can do (4) with a plain wxTextCtrl without any problems, so if you can live with just this, I'd strongly suggest just using the standard control instead. You can make the window transparent but this is not implemented in all ports (notably not in wxMSW) currently. The other two points are extremely unlikely to be ever possible with the standard control as it's really supposed to use the standard caret.
If you really need (1) and (2) you will have to use the non-native wxStyledTextCtrl, but then you really should abandon any idea of making it behave exactly like the native control; it won't work.
I am asking this question because my two weeks of research have started to really confuse me.
I have a bunch of images from which I want to read the numbers at runtime (this is needed for a reward function in Reinforcement Learning). The thing is, they look pretty clear to me (I know that is an entirely different matter for OCR systems, which is why I am providing additional images to show what I am talking about).
Because they seemed rather clear, I tried PyTesseract first, and when that did not work out I researched which other methods could be useful to me.
... and that's how my search ended here, because two weeks of trying to find out which method would be best suited to my problem just raised more questions.
Currently I think the best solution is to train a digit-recognition model on the MNIST/SVHN datasets, but isn't that a little bit of an overkill? I mean, the images are standardized, they are grayscale, they are small, and the font of the numbers stays the same, so I suppose there is an easier way of modifying those images or using a different OCR method.
That is why I am asking two questions:
1. Which method would be the most useful for my case, if not a model trained on the MNIST/SVHN datasets?
2. Is there any kind of documentation/books/sources that could make the actual choice of infrastructure easier? I mean, let's say that in the future I again need to decide which OCR system to use. On what basis should I make the choice? Is it purely a trial-and-error thing?
If what you have to recognize are those 7-segment digits, forget about any OCR package.
Use the outline of the window to find the size and position of the digits, then count the black pixels in seven predefined areas, one over each segment.
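That segment-counting idea can be sketched in Python with NumPy. Everything below is an assumption for illustration: the `segment_regions` layout, the lit-segment lookup table, and the thresholds would all need tuning to the actual font and the digit bounding boxes you extract from the window:

```python
import numpy as np

# Which segments (a, b, c, d, e, f, g) are lit for each digit 0-9.
SEGMENT_DIGITS = {
    (1, 1, 1, 1, 1, 1, 0): 0,
    (0, 1, 1, 0, 0, 0, 0): 1,
    (1, 1, 0, 1, 1, 0, 1): 2,
    (1, 1, 1, 1, 0, 0, 1): 3,
    (0, 1, 1, 0, 0, 1, 1): 4,
    (1, 0, 1, 1, 0, 1, 1): 5,
    (1, 0, 1, 1, 1, 1, 1): 6,
    (1, 1, 1, 0, 0, 0, 0): 7,
    (1, 1, 1, 1, 1, 1, 1): 8,
    (1, 1, 1, 1, 0, 1, 1): 9,
}

def segment_regions(h, w):
    """Seven sample regions as (row_slice, col_slice) for a digit of
    height h and width w: top, top-right, bottom-right, bottom,
    bottom-left, top-left, middle."""
    return [
        (slice(0, h // 5),          slice(w // 4, 3 * w // 4)),  # a: top
        (slice(h // 8, h // 2),     slice(3 * w // 4, w)),       # b: top-right
        (slice(h // 2, 7 * h // 8), slice(3 * w // 4, w)),       # c: bottom-right
        (slice(4 * h // 5, h),      slice(w // 4, 3 * w // 4)),  # d: bottom
        (slice(h // 2, 7 * h // 8), slice(0, w // 4)),           # e: bottom-left
        (slice(h // 8, h // 2),     slice(0, w // 4)),           # f: top-left
        (slice(2 * h // 5, 3 * h // 5), slice(w // 4, 3 * w // 4)),  # g: middle
    ]

def read_digit(img, threshold=0.5):
    """img: 2-D grayscale array, dark ink near 0.0, background near 1.0.
    A segment counts as lit when more than `threshold` of its region is dark."""
    h, w = img.shape
    key = tuple(
        1 if (img[r, c] < 0.5).mean() > threshold else 0
        for r, c in segment_regions(h, w)
    )
    return SEGMENT_DIGITS.get(key)  # None when no pattern matches
```

One grayscale crop per digit goes into `read_digit`; the whole number is just the digits concatenated left to right.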
This question may be a little different, since I'm pretty much a noob at programming. I've recently started playing a Pokémon game, and I thought of an idea for a cool Python program that would be able to grab a color on a certain pixel to detect if a pokémon is shiny or not.
However, due to my very limited programming experience, I don't know what modules to use and how to use them.
So basically, here's what I want it to do:
1. Move the cursor to a certain pixel and click.
2. Detect the color of a certain pixel, and compare that to the desired color.
3. If it's not desirable, click a button and re-loop until it's desirable.
So, it's pretty obvious that we'll need a while loop, but can someone explain how to do the above three things in relatively simple terms? Thanks.
Try breaking down this list into actions and searching for answers to each action.
For example, is (1) performed by the user? If so, we don't have to program that.
For 2, we need to determine the location of the mouse when clicked and get the color under it.
For 3, compare the RGB values (or whatever color representation you use) to the desired values for that pokemon. This is complicated because your program needs to figure out which pokemon it is checking against; there are probably pokemon whose regular color is another's shiny. Try breaking this down into even smaller problems :)
No guarantees that these links will be perfect, just trying to show how you need to break down the problem into smaller, workable chunks which you can address either directly in code or by searching for other people who have already solved those smaller problems.
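Putting those pieces together, here is a minimal sketch. The color comparison is plain Python; the mouse and screen calls assume the third-party pyautogui package (`pip install pyautogui`), and the function name and all coordinates/colors are hypothetical placeholders you would replace for your game:

```python
def color_matches(pixel, target, tolerance=10):
    """True when every RGB channel is within `tolerance` of the target."""
    return all(abs(p - t) <= tolerance for p, t in zip(pixel, target))

def hunt_for_shiny(spawn_xy, shiny_rgb, reset_button_xy, max_tries=1000):
    # Imported here so color_matches stays usable without a display.
    import pyautogui
    for _ in range(max_tries):
        pyautogui.click(*spawn_xy)            # 1. move the cursor and click
        pixel = pyautogui.pixel(*spawn_xy)    # 2. read the color at that pixel
        if color_matches(pixel, shiny_rgb):   # 3. compare to the shiny color
            return True                       # found it, stop looping
        pyautogui.click(*reset_button_xy)     # not shiny: reset and try again
    return False
```

A tolerance is used instead of exact equality because screen capture and rendering can shift channel values by a few units.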
I'm working on a basic Tkinter image viewer. After fiddling with a lot of the code, I have two lines which, depending on the operation that triggered the refresh, account for 50-95% of the execution time.
self.photo = ImageTk.PhotoImage(resized)
self.main_image.config(image=self.photo)
Is there a faster way to display a PIL/Pillow Image in Tkinter?
I would recommend testing the 4000x4000 images you're concerned about (which is twice the resolution of a 4K monitor). Use a Google advanced search to find such images, or use a photo/image editor to tile an image you already have. Since I don't think many people would connect a 4K (or better) monitor to a low-end computer, you could then test the difference between scaling down a large image and simply displaying it, so if most of the work is in resizing the image you don't have to worry as much about that part.
Next, test the individual performance of each of the two lines you posted. You might try implementing some kind of intelligent pre-caching, which many programs do: resize and create the next photo as the user is looking at the current one, then when the user goes to the next image all the program has to do is reconfigure self.main_image. You can see this general strategy at work in the standard Windows Photo Viewer, which responds apparently instantaneously to normal usage, but can have noticeable lag if you browse too quickly or switch your browsing direction.
Also ask your target userbase what sort of machines they're using and how large their images are. Remember that reasonable "recommended minimum specs" are acceptable.
Most importantly, keep in mind that the point of Python is clear, concise, readable code. If you have to throw away those benefits for the sake of performance, you might as well use a language that places more emphasis on performance, like C. Make absolutely sure that these performance improvements are an actual need, rather than just "my program might lag! I have to fix that."
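A hedged sketch of the pre-caching idea above: prefetch the expensive work on a worker thread and fall back to a synchronous load on a cache miss. The class and its names are made up for illustration. One caveat: ImageTk.PhotoImage objects should only be created on the Tk main thread, so `load_fn` here should do just the PIL open/resize, and the main thread wraps the result in a PhotoImage:

```python
import threading

class PrefetchCache:
    """Compute the next item in the background while the current one is shown."""

    def __init__(self, load_fn):
        self.load_fn = load_fn        # e.g. a function doing Image.open + resize
        self._cache = {}
        self._lock = threading.Lock()

    def prefetch(self, key):
        """Start loading `key` on a background thread."""
        def work():
            value = self.load_fn(key)
            with self._lock:
                self._cache[key] = value
        threading.Thread(target=work, daemon=True).start()

    def get(self, key):
        """Return the prefetched value, or load synchronously on a miss."""
        with self._lock:
            if key in self._cache:
                return self._cache.pop(key)
        return self.load_fn(key)
```

In the viewer you would call `prefetch(next_index)` right after displaying the current image; when the user advances, `get(next_index)` is usually already resolved and all that remains is the cheap `self.main_image.config(...)` call.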
I have loaded an obj file to render my OpenGL model using PyOpenGL and Pygame, and the 3D model shows successfully.
Below is the 3D model I rendered from the obj file. Now I want to cut the model into ten pieces along the y axis; my question is, how do I get the sectional drawing of each piece?
I'm really very new to OpenGL. Is there any way to do that?
There are two ways to do this and both use clipping to "slice" the object.
In older versions of OpenGL you can use user clip planes to "isolate" the slices you desire. You probably want to rotate the object before you clip it, but that's unclear from your question. You will need to call glClipPlane(), and you will need to enable each plane using glEnable() with the argument GL_CLIP_PLANE0, GL_CLIP_PLANE1, etc.
If you don't understand what a plane equation is you will have to read up on that.
In theory you should check how many user clip planes your GPU supports by calling glGetIntegerv() with the argument GL_MAX_CLIP_PLANES, but all GPUs support at least 6.
Since user clip planes are deprecated in modern core-profile OpenGL, you will need to use a shader to get the same effect there; see gl_ClipDistance[].
Searching around on Google should get you plenty of examples for either of these.
Sorry not to provide source code but I don't like to post code unless I am 100% sure it works and I don't have the time right now to check it. However I am 100% sure you can easily find some great examples on the internet.
Finally, if you can't make it work with clip planes and some hacks to make the cross sections visible then this may indeed be complicated because creating closed cross sections from an existing model is a hard problem.
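The plane equations themselves are simple to compute. Below is a sketch; `slice_planes` is a hypothetical helper name, and the commented-out calls follow the legacy-pipeline glClipPlane approach described above (they need a live GL context to actually run):

```python
def slice_planes(y_min, y_max, n_slices, i):
    """Clip-plane equations (a, b, c, d) bounding slice i of n_slices
    along the y axis. A point survives clipping when a*x + b*y + c*z + d >= 0."""
    step = (y_max - y_min) / n_slices
    lo = y_min + i * step
    hi = lo + step
    lower = (0.0, 1.0, 0.0, -lo)    # keeps points with y >= lo
    upper = (0.0, -1.0, 0.0, hi)    # keeps points with y <= hi
    return lower, upper

# In a PyOpenGL draw loop (legacy pipeline) this would look roughly like:
#   lower, upper = slice_planes(-1.0, 1.0, 10, i)
#   glClipPlane(GL_CLIP_PLANE0, lower)
#   glClipPlane(GL_CLIP_PLANE1, upper)
#   glEnable(GL_CLIP_PLANE0)
#   glEnable(GL_CLIP_PLANE1)
#   draw_model()
```

The two planes face in opposite directions so that only the band between `lo` and `hi` survives, which is exactly one of the ten slices.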
You would need to split the object, and then rotate the pieces so that they are seen from the side. (Or move the camera. The two ideas are equivalent. But if you're coding this from scratch, you don't really have the abstraction of a 'camera'.) At that point, you can just render all the slices.
This is complicated to do in raw OpenGL and python, essentially because objects in OpenGL are not solid. I would highly recommend that you slice the object into pieces ahead of time in a modeling program. If you need to drive those operations with scripting, perhaps look into Blender's python scripting system.
Now, to explain why:
When you slice a real-life orange, you expect to get cross sections. You expect to be able to see the flesh of the fruit inside, with all those triangular pieces.
There is nothing inside a standard polygonal 3D model.
Additionally, as the rind of a real orange has thickness, it is possible to view the rind from the side. In contrast, one face of a 3D model is infinitely thin, so when you view it from the side, you will see nothing at all. So if you were to render the slices of this simple model, from the side, each render would be completely blank.
(Well, the pieces at the ends will have 'caps', like the ends of a loaf of bread, but the middle sections will be totally invisible.)
Without a programming library that has a conception of what a cut is, this will get very complicated very fast. Simply making the cuts is not enough: you must seal up the holes created by slicing into the original shape if you want to see the cross sections. However, filling in the cross sections has to be done intelligently, otherwise you'll wind up with all sorts of weird shading artifacts (FYI: these are caused by n-gons, if you want to go discover more about those issues).
To return to the original statement:
Modeling programs are designed to address problems such as these, so I would suggest you leverage their power if possible. Or at least, you can examine how Blender implements this functionality, as it is open source.
In Blender, you could make these cuts with the knife tool*, and then fill up the holes with the 'make face' command (just hit F). Very simple, even for those who are not great at art. I encourage you to learn a little bit about 3D modeling before doing too much 3D programming. It personally helped me a lot.
*(The loop cut tool may do the job as well, but it's hard to tell without understanding the topology of your model. You probably don't want to get into understanding topology right now, so just use the knife)
Essentially, what I would like to do is draw a circuit diagram in a PyQt GUI based on input given from another part of the GUI. My first thought was to simply use graphical tiles and switch them out as pixmaps based on the input, but that is pretty clunky.
I suppose finding a way to actively display dia diagrams in a frame of the GUI would work as well.
Regardless, how would you recommend going about doing this? Is there an easier way?
Thanks!
Edit: Does anyone have experience with any of the following?
http://code.google.com/p/pydot/
http://code.google.com/p/python-graph/
http://code.google.com/p/yapgvb/
http://live.gnome.org/Dia/Python
To anyone that needs to go down this route in the future, the best option was definitely pydot. Easy to use, and pretty full-featured. Way to go, Graphviz.