I'm trying to adapt the font size of my text to a specific width-height context. For instance, if I have a 512x512 pixel image and 140 characters of text, what would be the ideal font size?
In the case above, a 50 pixel font size seems to be OK, but what happens if there is a lot more text? The text won't fit into the picture, so the size needs to be reduced. I've been trying to calculate this on my own without success.
What I've tried is this:
Get the total number of pixels (512x512 = 262144 for this picture) and divide it by the length of the text. But that gives a huge number, even after dividing by 4 (thinking of a box-pixel model for the font).
Do you have any solutions for this?
PS: I've been using TrueType fonts (in case that's useful).
I've been using Python for this, and PIL for image manipulation.
Thank you in advance.
It's pretty tricky to do analytically.
One way is trial and error: choose a large font size and render the layout to see whether it fits.
Use a bisection algorithm to converge on the largest font size that fits.
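A rough sketch with PIL/Pillow (the font path, canvas size, and max_size are assumptions; recent Pillow versions replace draw.textsize with draw.textbbox):

from PIL import Image, ImageDraw, ImageFont

def largest_fitting_font_size(text, image_size, font_path, max_size=200):
    # Bisect on the font size: keep the largest size whose rendered text
    # still fits inside the image. Single-line text only; add wrapping as needed.
    draw = ImageDraw.Draw(Image.new('RGB', image_size))
    lo, hi = 1, max_size
    while lo < hi:
        mid = (lo + hi + 1) // 2
        font = ImageFont.truetype(font_path, mid)
        width, height = draw.textsize(text, font=font)
        if width <= image_size[0] and height <= image_size[1]:
            lo = mid        # it fits: try a larger size
        else:
            hi = mid - 1    # too big: shrink
    return lo

size = largest_fitting_font_size("some text " * 14, (512, 512), "arial.ttf")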
I am trying to use PIL to precompute the size that a given line of text will take at a given font and size. PIL seemed to be more or less the only working solution.
I am not sure what the unit of the value returned by font.textsize(..) is. The docs don't specify it.
The reason I am asking is that I am confused by the returned values, as mentioned here: ImageFont.textsize() seems wrong
The units are pixels, so it tells you how large a canvas you would need to accommodate the text.
There is no reason it should be the same as any other product's text size.
There is occasionally a small discrepancy of 3-4 pixels, probably as a result of anti-aliasing.
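For example, a minimal check (the font file and text are placeholders; in recent Pillow versions draw.textsize has been replaced by draw.textbbox):

from PIL import Image, ImageDraw, ImageFont

font = ImageFont.truetype("DejaVuSans.ttf", 50)           # any TrueType font file
draw = ImageDraw.Draw(Image.new("RGB", (512, 512)))
width, height = draw.textsize("Hello world", font=font)   # both values are in pixels
print(width, height)  # the canvas you would need to hold the rendered string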
My program's purpose is to take two images and decide how similar they are.
I'm not talking about identical images, but about similarity. For example, if I take two screenshots of two different pages of the same website, their theme colors would probably be very similar, and therefore I want the program to declare that they are similar.
My problem starts when both images have a white background that pretty much takes over the histogram calculation (over 30% of the image is white and the rest is distributed elsewhere).
In that case, cv2.compareHist (using the correlation method, which works for the other cases) gives very bad results; that is, the score is very high even though the images look very different.
I have thought about taking the white (255) off the histogram before comparing, but that requires calculating the histogram with 256 bins, which is not good when I want to check similarity (I thought that using 32 or 64 bins would be best).
Unfortunately, I can't add the images I'm working with due to legal reasons.
If anyone can help with an idea, or code that solves it, I would be very grateful.
Thank you very much.
You can remove the white color, rebin the histogram, and then compare:
Compute a histogram with 256 bins.
Remove the white bin (or make it zero).
Regroup the bins to have 64 bins by adding the values of 4 consecutive bins.
Perform the compareHist().
This would work for any "predominant color". To generalize, you can do the following:
Compare the full histograms. If they are different, then you are done.
If they are similar, look for the predominant color (with a 256-bin histogram), and perform the procedure described above to remove the predominant color from the comparison.
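A minimal sketch of the white-removal steps, assuming single-channel (grayscale) histograms and OpenCV 3+ constant names; for color images, repeat per channel or target the predominant bin found in the full histogram:

import cv2
import numpy as np

def similarity_without_white(img_a, img_b, bins=64):
    # Compare 64-bin histograms after zeroing the white bin (255).
    def prepared_hist(img):
        hist = cv2.calcHist([img], [0], None, [256], [0, 256]).flatten()
        hist[255] = 0                                        # drop the predominant white bin
        hist = hist.reshape(bins, 256 // bins).sum(axis=1)   # regroup 256 bins into 64
        return cv2.normalize(hist, None).astype(np.float32)
    return cv2.compareHist(prepared_hist(img_a), prepared_hist(img_b), cv2.HISTCMP_CORREL)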
I am using the arcgisimage API to add map layers to my scatterplot.
However, the documentation for the API, found here http://basemaptutorial.readthedocs.org/en/latest/backgrounds.html, is not that good, especially concerning the sizing of the images:
xpixels actually sets the zoom of the image. A bigger number will ask a bigger image, so the image will have more detail. So when the zoom is bigger, the xsize must be bigger to maintain the resolution.
dpi is the image resolution at the output device. Changing its value will change the number of pixels, but not the zoom level.
The xsize mentioned is not defined anywhere, and doubling the dpi from 300 to 600 doesn't affect the size of the image.
Does anyone have better documentation or a tutorial?
I am learning about similar things and I am new to this, so all I can do is offer some simple ideas. I hope this helps (although it may not ^_^).
The following code is from the tutorial's example, with some adjustments (centered on LA).
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
map = Basemap(llcrnrlon=-118.5,llcrnrlat=33.15,urcrnrlon=-117.15,urcrnrlat=34.5, epsg=4269)
#http://server.arcgisonline.com/arcgis/rest/services
#EPSG Number of America is 4269
map.arcgisimage(service='World_Physical_Map', xpixels = 1500, verbose= True)
plt.show()
Firstly, I guess "xsize" here means "xpixels", or perhaps just "size" (a typo? I'm not sure). As the tutorial says, what "xpixels" influences is the resolution and size of the final figure.
When xpixels=150, you'll get the following picture (about 206KB):
However, when xpixels=1500, you'll get a picture with higher resolution (about 278 KB). When you zoom in to examine the detail, this picture is clearer than the former one.
If you want to view the picture at a bigger zoom, you need to set "xpixels" to a larger value to keep it clear (that is, to maintain the resolution; I guess the docs give only a brief explanation without many details). As for "dpi": it is like cutting the same cake into 300 slices the first time and 600 slices the second time. The figure won't be any clearer, and from what you say it won't become a bigger graph either.
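If it helps, here is a small sketch of that difference as I understand it (file names are placeholders): xpixels changes how detailed an image the ArcGIS server returns, while dpi only changes how many pixels matplotlib renders when saving.

from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt

m = Basemap(llcrnrlon=-118.5, llcrnrlat=33.15, urcrnrlon=-117.15, urcrnrlat=34.5, epsg=4269)
m.arcgisimage(service='World_Physical_Map', xpixels=1500, verbose=True)  # detail requested from the server
plt.savefig('la_300dpi.png', dpi=300)  # output raster size
plt.savefig('la_600dpi.png', dpi=600)  # twice the pixels, but no extra map detail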
I don't know if there is an official name for this.
I am trying to do some color analysis on a picture at the pixel level using Python, but the problem is that sometimes there are little bits of pixels here and there that might have wildly different colors and mess up the analysis.
Is there a way to "smooth the picture out" so these little irregularities go away and the colors are more uniformly distributed within their respective regions?
Check out MedianFilter in PIL's ImageFilter module.
from PIL import ImageFilter
corrected_image = original_image.filter(ImageFilter.MedianFilter(7))
You'll probably want to adjust the filter size. (I've set it to 7 here.)
Possible Duplicate:
Image comparison algorithm
So basically I need to write a program that checks whether two images are the same or not. Consider the following two images:
http://i221.photobucket.com/albums/dd298/ramdeen32/starry_night.jpg
http://i221.photobucket.com/albums/dd298/ramdeen32/starry_night2.jpg
Well, they are both the same image, but how do I check whether they are the same? I am limited to the media functions only. All I can think of right now is scaling to the same width and height and comparing the RGB of each pixel, but wouldn't the colors be different?
I'm completely lost on this one; any help is appreciated.
*Note: this has to be in Python and use the media library.
Wow - that is a massive question, and one that has a vast number of possible solutions. I'm afraid I'm not a python expert, but I thought your question was interesting - so I wanted to propose a method that I would implement if I were posed with this problem.
Obviously, the two images you posted are actually very different - so you will need to consider 'how much different is the same', especially when working with images and considering different image formats and compression etc.
Anyway, for a solution that allows for a given difference in colour values (but not for pixels to be in the wrong places), I would do something like the following:
Pick two images.
Rescale the largest image to the exact same height and width as the first (even distorting the image if necessary).
Possibly grayscale the images to make the next steps simpler, without losing much in the way of effectiveness. Actually, possibly running edge detection here could work too.
Go through each pixel in both images and store the difference in either each of the RGB channels, or just the difference in grayscale intensity. You would end up with an array the size of the image noting the difference between the pixel intensities on the two images.
Now, I don't know the exact values, but you would probably then find that if you iterate over the array you could see whether the difference between each pixel in the two images is the same (or nearly the same) across all of the pixels. Perhaps iterate over the array once to find the average difference between the pixel intensities in the two images, then iterate over the image again to see if 90% of the differences fall within a certain threshold (5% difference?).
Just an idea. Of course, there might be some nice functions that I'm not aware of to make this easy, but I wouldn't hold my breath!
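A rough sketch of that procedure using PIL and NumPy (which admittedly sidesteps the media-library restriction; the tolerance and the 90% figure are illustrative, not tuned values):

from PIL import Image, ImageChops
import numpy as np

def images_roughly_equal(path_a, path_b, pixel_tolerance=13, fraction_required=0.9):
    a = Image.open(path_a).convert('L')                    # grayscale to simplify the comparison
    b = Image.open(path_b).convert('L').resize(a.size)     # force identical dimensions
    diff = np.asarray(ImageChops.difference(a, b), dtype=np.float32)
    close_enough = (diff <= pixel_tolerance).mean()        # fraction of pixels within tolerance
    return close_enough >= fraction_required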
ImageMagick has Python bindings and a comparison function. It should do most of the work for you, but I've never used it in Python.
I think step 2 of John Wordsworth's answer may be one of the hardest. Here you are dealing with a stretched copy of the image, but do you also allow rotated, cropped, or otherwise distorted images? If so, you are going to need a feature-matching algorithm, such as the ones used in Hugin or other panorama-creation software. This will find matching features and distort the images to fit, and then you can do the other stages of comparison. Ideally you want to recognise Van Gogh's painting from photos, even photos on mugs! It's easy for a human to do this; for a computer it needs rather more complex maths.