I've done some simple image processing with OpenCV that produces an RGB array as output.
Is it possible to transfer and display this RGB array in HTML without first saving it as a .jpg or any other image format?
I can't use CGI, since I need to display the picture in an application that only accepts HTML code.
You could just draw it to an HTML canvas element.
Drawing to a canvas requires JavaScript, but you could easily put your drawing function in the onload event of the body element. This StackOverflow thread talks more about how to draw individual pixels to a canvas; it builds a function for drawing individual pixels and then demonstrates how to use it to draw a simple image. If you build a script that reads the pixel data from some hidden HTML element, and have your Python script write to that hidden element, you only need to write the drawing code once.
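As a minimal sketch of that idea in Python: serialize the array into a standalone page and let a small inline script paint it onto the canvas on load. The file name, element id, and the assumption that the array comes as nested [r, g, b] rows (e.g. via tolist() on the OpenCV array) are all placeholders, not part of any particular API.

import json

def rgb_array_to_html(pixels, path="view.html"):
    # pixels: nested list of [r, g, b] rows, e.g. cv2_image.tolist()
    height, width = len(pixels), len(pixels[0])
    html = """<!DOCTYPE html>
<html><body onload="draw()">
<canvas id="c" width="%d" height="%d"></canvas>
<script>
var pixels = %s;
function draw() {
  var ctx = document.getElementById("c").getContext("2d");
  var img = ctx.createImageData(%d, %d);
  for (var y = 0; y < %d; y++)
    for (var x = 0; x < %d; x++) {
      var i = (y * %d + x) * 4, p = pixels[y][x];
      img.data[i] = p[0]; img.data[i+1] = p[1];
      img.data[i+2] = p[2]; img.data[i+3] = 255;
    }
  ctx.putImageData(img, 0, 0);
}
</script>
</body></html>""" % (width, height, json.dumps(pixels), width, height,
                     height, width, width)
    with open(path, "w") as f:
        f.write(html)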
How do I add circle-clipped image glyphs to my chart, without processing and uploading the images manually beforehand? I'm open to using other modules.
I want the end result to look something like this chart (from nytimes).
http://imgur.com/a/Nv6ta
My current understanding is that we can only load images directly from urls, which is not my desired outcome.
http://docs.bokeh.org/en/latest/docs/reference/models/glyphs/image_url.html
"My current understanding is that we can only load images directly from urls"
This is not correct; there is also ImageRGBA, which allows images to be sent as raw RGBA data, directly embedded in the Bokeh document. See, e.g., this gallery example:
http://docs.bokeh.org/en/latest/docs/gallery/image_rgba.html
So, assuming images is a Python list of 2D NumPy arrays of RGBA data for the (pre-cropped) images you want to display, Bokeh could show them with:
p.image_rgba(image=images, x=....)
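For reference, here is a self-contained sketch modeled on that gallery example; the gradient image is just placeholder data standing in for your converted arrays:

import numpy as np
from bokeh.plotting import figure, show

N = 20
img = np.empty((N, N), dtype=np.uint32)            # one uint32 RGBA pixel per cell
view = img.view(dtype=np.uint8).reshape((N, N, 4)) # byte view: R, G, B, A channels
for i in range(N):
    for j in range(N):
        view[i, j, 0] = int(255 * i / N)  # red ramps down the rows
        view[i, j, 1] = 158               # constant green
        view[i, j, 2] = int(255 * j / N)  # blue ramps across the columns
        view[i, j, 3] = 255               # fully opaque

p = figure(x_range=(0, 10), y_range=(0, 10))
p.image_rgba(image=[img], x=0, y=0, dw=10, dh=10)
show(p)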
Of course, you have to convert the images to RGBA arrays yourself, and also crop them, so things may simply be easier or more ready-made for this use case with another tool.
I have a Van der Waals gas simulation that shows real-time collisions between gas molecules. I am doing this with Pygame and it works fine. However, I want one half of the Pygame window to show the real-time collisions while the other half plots a dynamic histogram at every time step. So far, I haven't come across any code that allows plotting in the same Pygame window where another simulation is running.
Using a custom ipywidget for incremental display of simulation images
I have been using Jupyter for all my needs in displaying results from simulations, with reasonable success. Here is what I would do in your case.
I am not an expert on the capabilities of Pygame, but it looks like in each loop you run a simulation step, get the simulation state, and feed it to the Pygame scene building to update and render the state for that step (loop).
Assume that MyPyGameRenderer is your Python class whose MyPyGameRenderer.produce_rendering(simul_state=None) method does the rendering. I would alter this routine as follows:
class MyPyGameRenderer(object):
    def produce_rendering_as_a_png_img(self, simul_state=None):
        myRenderPngByteData = None
        # using pygame, interpret the simulation state and produce the
        # rendering as png image byte data; set it to myRenderPngByteData
        return myRenderPngByteData
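The body is left abstract above; as one way to fill it in, you could hand the pygame surface to Pillow and encode a PNG into an in-memory buffer (a sketch; using Pillow here is my assumption, not part of the answer's setup):

import io
import pygame
from PIL import Image

def surface_to_png_bytes(surface):
    # convert the pygame Surface to raw RGB bytes, then let Pillow
    # encode them as PNG into an in-memory buffer
    raw = pygame.image.tostring(surface, "RGB")
    img = Image.frombytes("RGB", surface.get_size(), raw)
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    return buf.getvalue()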
I would install Jupyter, make a notebook with a Python 2.7 kernel, put the above code and the simulation code there, and display the PNG image produced.
If what I described above is what you are doing, and your objective is to arrange GUI elements (in your case, displaying two windows side by side), I would be tempted to use ipywidgets. ipywidgets let you assemble GUI elements in a somewhat rudimentary way. Their composition capabilities are cruder than the sophisticated JavaScript renderings of GUI elements you see on a webpage, but in my experience they are sufficient for demonstrating scientific results. Here are the steps I would take.
To achieve what you describe, I would still use Pygame for the scene compositing and rendering at each simulation step.
I would construct a custom ipywidget called MyCustomIncrementalImageWidget with a value element that holds a PNG, as a data URL, to display. The rendering routine of the custom ipywidget just displays the PNG image set as that data URL. I can display any random PNG image in my MyCustomIncrementalImageWidget as follows:
import base64

# read a random png image and set its byte data here
my_random_png_img_byte_data = None

myImage = MyCustomIncrementalImageWidget()
dataurl = "data:image/png;base64," + base64.b64encode(my_random_png_img_byte_data)
myImage.value = dataurl
display(myImage)
The above code will show your PNG image in the ipywidgets framework. This gives you the following advantages.
You can compose this myImage with the other extensive GUI elements available in the ipywidgets framework and construct a more complex GUI. In your case, I would have two instances of MyCustomIncrementalImageWidget: one holding the rendering of the scene and the other holding the histogram.
You can compose these two custom ipywidgets using widgets.HBox, which renders them horizontally side by side.
You need only one call to display your GUI elements, and it can be outside your simulation loop. In short, the GUI display is detached from your simulation code; all you need to do in each simulation step is update the value element of each of the two custom widgets.
Putting it together, here is my high-level code.
import base64
from ipywidgets import widgets

class MyCustomIncrementalImageWidget(widgets.DOMWidget):
    # your implementation of the custom ipywidget
    pass

def get_dataurl_from_imagedata(img_png_byte_data=None):
    dataurl = "data:image/png;base64," + base64.b64encode(img_png_byte_data)
    return dataurl

class MyPyGameRenderer(object):
    def produce_rendering_as_a_png_img(self, simul_state=None):
        myRenderPngByteData = None
        # using pygame, interpret the simulation state and produce the
        # rendering as a png image; set the byte data to myRenderPngByteData
        return myRenderPngByteData

    def produce_histogram_as_a_png_img(self, simul_state=None):
        myHistPngByteData = None
        # interpret the simulation data and produce the histogram as a png
        # image; set the image byte data to myHistPngByteData
        return myHistPngByteData

    def update(self, simul_state=None):
        r = self.produce_rendering_as_a_png_img(simul_state=simul_state)
        h = self.produce_histogram_as_a_png_img(simul_state=simul_state)
        return r, h

my_pygame_renderer = MyPyGameRenderer()
my_collision_render_widget = MyCustomIncrementalImageWidget()
my_histogram_widget = MyCustomIncrementalImageWidget()
top_level_hbox = widgets.HBox(children=(my_collision_render_widget,
                                        my_histogram_widget))
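The two PNG-producing bodies are again left abstract; for the histogram, a matplotlib Agg sketch could look like this (matplotlib is my assumption here, and any renderer that writes PNG to a buffer would do):

import io
import matplotlib
matplotlib.use("Agg")  # render off-screen, no GUI backend needed
import matplotlib.pyplot as plt

def histogram_png_bytes(speeds, bins=30):
    # plot a histogram of, e.g., molecule speeds and return PNG bytes
    fig, ax = plt.subplots(figsize=(4, 3))
    ax.hist(speeds, bins=bins)
    ax.set_xlabel("speed")
    ax.set_ylabel("count")
    buf = io.BytesIO()
    fig.savefig(buf, format="png")
    plt.close(fig)  # free the figure between simulation steps
    return buf.getvalue()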
My simulation code will be as follows
while True:
    simul_state = None
    # do your simulation, get the simulation state, and set it to simul_state
    r, h = my_pygame_renderer.update(simul_state=simul_state)
    my_collision_render_widget.value = get_dataurl_from_imagedata(r)
    my_histogram_widget.value = get_dataurl_from_imagedata(h)
And wherever I want to display this widget combination, I would do:
from IPython.display import display
display(top_level_hbox)
Please take a look at my custom ipywidget implementation here. My class is called ProgressImageWidget.
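As an aside, depending on the ipywidgets version available to you, the built-in Image widget may spare you the custom widget entirely, since it accepts raw PNG bytes without a data URL (a sketch; check your installed version):

from ipywidgets import Image

my_hist_png_byte_data = b""  # placeholder: your PNG byte data goes here
# value takes the raw PNG bytes directly, no base64 data URL needed
my_histogram_widget = Image(value=my_hist_png_byte_data, format='png')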
I have an animated GIF that I want to loop n times (animated.gif). After the loops, I want to show a static image (static.gif). The animation will be displayed over the web, so the file size needs to be as small as possible.
I tried implementing it with ImageMagick by adding the static image with a zero delay ...
convert -loop 3 animated.gif -delay 0 static.gif newanim.gif
Although the static image is shown at the end, the problem is that after every iteration static.gif is shown for a split second.
Another thing I tried was adding the animation 3 times with the static image at the end. This works perfectly, but the file becomes too large, especially if the animation is long or looped many times. For instance, a 6.1 MB animation becomes ~18 MB.
convert -loop 1 animated.gif animated.gif animated.gif static.gif newanim.gif
I'm using Python in a Linux environment, so programmatic alternatives to ImageMagick would work as well.
EDIT: I failed to mention a constraint: it needs to work without any client-side programming (JavaScript, CSS, etc.). It needs to be a purely GIF-based solution. This constraint makes it different from How to stop an animated gif from looping.
No, you can't. The animated GIF format doesn't provide that ability.
There are various ways to do what you want on a web page, but you'll need a separate file for the static image, and you'll need some code (e.g. JavaScript, or maybe CSS) to display the static image after the desired number of animation loops.
I'm pretty sure the problem with the resulting GIF size lies in the tools you are using. I created two samples, one with the animation alone and another with the animation repeated twice, and got the same size for both; you can check it yourself.
I used ScreenToGif; it is quite buggy and only works on Windows, but it does the job and can open an existing GIF or a list of images for editing.
If you need a solution for Linux, take a look at FFmpeg, but I haven't used it myself.
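Since the question allows programmatic alternatives, here is a sketch of the repeat-then-freeze approach in Pillow; whether the output stays small depends on how the encoder handles the duplicated frames, so treat it as something to measure rather than a guarantee. The 100 ms fallback duration is an assumption.

from PIL import Image, ImageSequence

def repeat_then_freeze(anim_path, static_path, out_path, times=3):
    # collect the animation's frames and per-frame durations
    anim = Image.open(anim_path)
    frames, durations = [], []
    for frame in ImageSequence.Iterator(anim):
        frames.append(frame.convert("RGBA"))
        durations.append(frame.info.get("duration", 100))

    # repeat the animation, then end on the static frame
    frames, durations = frames * times, durations * times
    frames.append(Image.open(static_path).convert("RGBA"))
    durations.append(durations[-1])

    # omitting loop= writes a GIF that plays the sequence exactly once
    frames[0].save(out_path, save_all=True, append_images=frames[1:],
                   duration=durations)

repeat_then_freeze("animated.gif", "static.gif", "newanim.gif", times=3)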
I am currently trying to make a simple image viewer that lets me view images or GIFs. I've done most of the work and it's coming out how I'd like, but I'd also like to be able to view GIFs that are too big for my screen. I have a normal 1080p monitor, but the images are 1900x1300, which I understand is an odd size for GIFs. The problem is that pyglet has no obvious way to scale down GIFs when drawing them. I'm using sprites, but scaling down the sprite merely raises an error, since the image itself hasn't actually changed and is still 1900x1300.
What I need is a way to take a GIF file, scale it down by 1/2, and render it with pyglet. I could probably use other libraries, but I'm trying to keep the project small.
Thanks!
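A sketch of the usual approach in recent pyglet versions (an assumption; older releases may error as described above): load the GIF as an animation and set the sprite's scale factor.

import pyglet

window = pyglet.window.Window(960, 650)

# load_animation decodes every frame of the GIF
animation = pyglet.image.load_animation("big.gif")
sprite = pyglet.sprite.Sprite(animation)
sprite.scale = 0.5  # halve the on-screen size; frame data is unchanged

@window.event
def on_draw():
    window.clear()
    sprite.draw()

pyglet.app.run()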
I am using a third-party library that maintains a circular buffer for image and video data. It has a method for getting or popping the last image. I was wondering what the best way would be to implement video functionality in PyQt on top of this. Is there some video widget with a callback function that I could use, or do I have to implement parallel processing on my own? Suggestions on how this would be done in Qt, if you don't know how to do it in PyQt, would also be very much appreciated.
Thanks in advance!
I would pop the last image from the circular buffer and load it into a QPixmap. This puts the image into a form that a PyQt4 GUI can display.
Depending on your library's image format (straightforward BMP? JPG? PNG? raw pixels?), you can load the data into the QPixmap in one of two ways.
First: do it pixel by pixel (set the width and height of the image and copy each pixel's value over one by one). This method is slow, and I'd only resort to it if necessary.
Second: if the image is stored in a common format (the 'supported' ones are listed here), this becomes trivial.
Then, after the image is loaded into the QPixmap, I would use QLabel.setPixmap() to display it.
Do this from a QTimer slot at a fixed rate and you'll be able to display your images in a PyQt4 GUI.
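Putting that together, here is a sketch of the QTimer approach; pop_last_image() is a stand-in for whatever your third-party buffer actually exposes, and it is assumed to return bytes in a format QPixmap can decode:

from PyQt4 import QtCore, QtGui

class VideoLabel(QtGui.QLabel):
    def __init__(self, buffer, interval_ms=33, parent=None):
        super(VideoLabel, self).__init__(parent)
        self._buffer = buffer
        # poll the circular buffer roughly 30 times a second
        self._timer = QtCore.QTimer(self)
        self._timer.timeout.connect(self._refresh)
        self._timer.start(interval_ms)

    def _refresh(self):
        data = self._buffer.pop_last_image()  # hypothetical library call
        if data is None:
            return
        pixmap = QtGui.QPixmap()
        if pixmap.loadFromData(data):  # decodes any supported format
            self.setPixmap(pixmap)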