I have a lot of PNGs that were tiled with gdal2tiles.py. I have done some processing on these tiles and now I would like to combine them back into one large TIF.
For example, I have folders 13-20 for the different zoom levels. Let's say I want all the PNGs from zoom level 20 to be a single mosaic; how would I do that with gdal_merge? I'm trying gdal_merge now, but I only end up with the last .PNG that it processes, so my TIF is just a 256x256 TIF of that last PNG. Here is my current command:
python gdal_merge.py -o mos.tif -of GTiff -v --optfile tif_list.txt
tif_list.txt contains the list of all my PNGs
I'm assuming I might need to add a -co option, but I cannot find any documentation on what I can use with -co. If it matters, my coordinate system is EPSG:3857 and the tiles were generated as Mercator tiles. Any help would be appreciated.
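For what it's worth, the only -co usage I have seen elsewhere passes GTiff creation options, for example (I'm not sure this is related to the stacking problem, it just illustrates the syntax):
python gdal_merge.py -o mos.tif -of GTiff -co COMPRESS=LZW -co TILED=YES -v --optfile tif_list.txt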
Update:
Format of tif_list.txt:
C:\Users\Administrator\Desktop\19\195953\226590.png
C:\Users\Administrator\Desktop\19\195954\226581.png
C:\Users\Administrator\Desktop\19\195954\226582.png
C:\Users\Administrator\Desktop\19\195954\226583.png
C:\Users\Administrator\Desktop\19\195954\226584.png
C:\Users\Administrator\Desktop\19\195954\226585.png
C:\Users\Administrator\Desktop\19\195954\226586.png
C:\Users\Administrator\Desktop\19\195954\226587.png
C:\Users\Administrator\Desktop\19\195954\226588.png
C:\Users\Administrator\Desktop\19\195954\226589.png
C:\Users\Administrator\Desktop\19\195954\226590.png
C:\Users\Administrator\Desktop\19\195955\226581.png
C:\Users\Administrator\Desktop\19\195955\226582.png
C:\Users\Administrator\Desktop\19\195955\226583.png
C:\Users\Administrator\Desktop\19\195955\226584.png
C:\Users\Administrator\Desktop\19\195955\226585.png
C:\Users\Administrator\Desktop\19\195955\226586.png
C:\Users\Administrator\Desktop\19\195955\226587.png
C:\Users\Administrator\Desktop\19\195955\226588.png
C:\Users\Administrator\Desktop\19\195955\226589.png
C:\Users\Administrator\Desktop\19\195955\226590.png
I have a Python script that uses Pillow to resize an image into an Instagram-sized image with a blurred background generated from the original image.
Before and after images (both JPGs):
https://app.box.com/s/jpv2mxlncp9871zvx9ygt0be4gf0zc9q
Is this simply a function of the 'after' JPG being too small to reflect all the colors in the original image? (Instagram only allows 2048x2048 images max; my original is a JPG converted from a TIF from a 24.2-megapixel RAW image taken with a Nikon DSLR.) Perhaps it's all in my head, but to my eye the 'after' image has lost some saturation / vibrance (compare the yellow buildings and the car lights, for example).
Has anyone experienced similar issues? Is there some default mode in Pillow that is reducing the number of colors available? I'm thinking of adding an additional saturation step to my script, but that seems like a hack.
EDIT: Added another before / after pair of images to the link above. I also realize I can easily share the script's source (GitHub repo):
https://github.com/princefishthrower/instagramize
The difference is that the original image contains an "ICC Colour Profile" (amongst other profiles) which is not preserved in the output image.
You can see this most easily with exiftool:
exiftool Mountains_Before.jpg | grep -i profile
Or with ImageMagick:
magick identify -verbose Mountains_Before.jpg | grep -A999 Profiles:
Output
Profiles:
  Profile-8bim: 92 bytes
  Profile-exif: 17796 bytes
  Profile-icc: 560 bytes
  Profile-iptc: 80 bytes
    City[1,90]: 0x00000000: 254700 -%G
    Created Date[2,55]: 2020-7-1
    unknown[2,62]: 2020-06-30
    unknown[2,63]: 21:11:26+00:00
    unknown[2,0]: 4
    Created Time[2,60]: 20:22:05-20:22
  Profile-xmp: 9701 bytes
If you strip the profiles from the original, you will see that it, too, is washed out and flatter:
magick Mountains_Before.jpg -strip NoProfile.jpg
You can extract the ICC profile and look at it like this, if that sort of thing excites you:
magick Mountains_Before.jpg profile.icc
If you did that, I guess you could re-attach the profile from the BEFORE image to the AFTER image like this:
magick Mountains_After.jpg -profile profile.icc AfterWithProfile.jpg
Keywords: Image processing, ImageMagick, profile, ICC profile, saturation, saturated, desaturated, washed out.
As Mark Setchell pointed out, it is a matter of preserving the color profile of the image, which is possible natively in Pillow, first by retrieving the profile after opening the image:
import io
from PIL import Image, ImageCms

image = Image.open('mycoolimage.jpg')
iccProfile = image.info.get('icc_profile')
iccBytes = io.BytesIO(iccProfile)
originalColorProfile = ImageCms.ImageCmsProfile(iccBytes)
and when calling save with Pillow you can pass an icc_profile:
image.save('outputimagename.jpg', icc_profile=originalColorProfile.tobytes())
(Obviously, I am doing other manipulations to the image in between these two steps. Apparently one or more of them cause the icc_profile to disappear.)
This answer was also helpful in building this solution.
I added Mountains_After_NEW.jpg for those interested to see the results of these additions.
I have to use FFmpeg to detect shot changes in a video, and also save the timestamps and scores of the detected shot changes. How can I do this with a single command?
EDIT
I jumped to my use case directly, as it was solved using FFmpeg alone, without the need for raw frames.
The best solution I came across after reading tonnes of Q&As:
Simply use the command:
ffmpeg -i inputvideo.mp4 -filter_complex "select='gt(scene,0.3)',metadata=print:file=time.txt" -vsync vfr img%03d.png
This will save just the relevant information in the time.txt file like below:
frame:0 pts:108859 pts_time:1.20954
lavfi.scene_score=0.436456
frame:1 pts:285285 pts_time:3.16983
lavfi.scene_score=0.444537
frame:2 pts:487987 pts_time:5.42208
lavfi.scene_score=0.494256
frame:3 pts:904654 pts_time:10.0517
lavfi.scene_score=0.462327
frame:4 pts:2533781 pts_time:28.1531
lavfi.scene_score=0.460413
frame:5 pts:2668916 pts_time:29.6546
lavfi.scene_score=0.432326
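If you then need the timestamps and scores in a script rather than a text file, a small parsing sketch like the following works against the format shown above (the time.txt name matches the command; everything else is just illustrative):
import re

def parse_scene_changes(path='time.txt'):
    # Pair each frame's pts_time with the lavfi.scene_score printed on the next line
    changes = []
    pts_time = None
    with open(path) as f:
        for line in f:
            m = re.search(r'pts_time:([0-9.]+)', line)
            if m:
                pts_time = float(m.group(1))
                continue
            m = re.search(r'lavfi\.scene_score=([0-9.]+)', line)
            if m and pts_time is not None:
                changes.append((pts_time, float(m.group(1))))
                pts_time = None
    return changes

print(parse_scene_changes())  # e.g. [(1.20954, 0.436456), (3.16983, 0.444537), ...]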
I'm forming an SVG image from a bitmap using svgwrite and Python, where each line is rotated by theta around a common origin into a fan-like pattern. Currently this image runs around 10 MB for 128 lines, largely because of the excessive floating-point precision retained in the 79,000+ line segments (one line segment per pixel).
I'd like to get the file size down significantly by drawing a single line from the origin out to the end point and then stretching a one-line 'image' over that SVG line. Is this possible with SVG? Beyond that, I'm open to any suggestion that might get the size down significantly, so that I can later animate the lines in place.
How about this solution (see fiddle):
Slice the image up into the number of strips you need. For the example in the linked fiddle, I used ImageMagick as follows to cut a 128x128 PNG image into 128 vertical strips:
convert image.png -crop 1x128 +repage +adjoin strip-%d.png
Convert all the image strips to data URI format and embed them in the SVG. I used this bash one-liner:
for i in strip-*.png; do echo "data:image/png;base64,$(openssl base64 < $i | tr -d '\n')"; done > data-uris.txt
In the SVG, use the transform attribute on the image strip elements to rotate each strip to the required angle.
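If you prefer to generate the SVG from Python with svgwrite (as in the question) instead of writing it by hand, a rough sketch of the embedding and rotation steps could look like this; the canvas size, strip dimensions, pivot point and per-strip angle are assumptions for illustration:
import svgwrite

# Data URIs produced by the bash one-liner above, one per strip
with open('data-uris.txt') as f:
    data_uris = [line.strip() for line in f if line.strip()]

dwg = svgwrite.Drawing('fan.svg', size=(512, 512))
cx, cy = 256, 256  # assumed common origin / pivot point

for i, uri in enumerate(data_uris):
    # Each strip is 1 pixel wide and 128 pixels tall, placed at the pivot
    strip = dwg.image(href=uri, insert=(cx, cy), size=(1, 128))
    # Rotate the strip around the common origin to build the fan
    strip.rotate(i * (180.0 / len(data_uris)), center=(cx, cy))
    dwg.add(strip)

dwg.save()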
For a 128x128 PNG icon of 16.4KB, I end up with an SVG file of 128.1KB. Most of that is the base64-encoded image data (the total size of the 128 PNG strips is already 85.1KB). It could be reduced a little further by rounding off some floats, but I don't think there's a whole lot to be gained.
There might be another approach possible where you embed the image as a whole, and reference another clipped section of the same image over and over, but I couldn't get that to work.
I am looking to generate a PDF report from JPEGs on a server. The JPEGs are in folders named after the location where they were taken, and the files are named based on the date they were taken (...\Location 1\15 08 03 description.jpg). Basically I need to grab all pictures taken at each site last month, group them evenly on a page (4 max per page), label the pages with location and date, and export a PDF.
I have written projects in Powershell and Python so it would be a lot easier for me to operate in these languages but I will consider all suggestions.
So far, my idea is to use switch/case to select the various folder names, loop through all the cases, and select all files with a .jpg extension within a month range (maybe user-prompted?). Where I fall flat is arranging the JPEGs into a PDF as described.
Edit: Suppose you follow Mark Setchell's advice below, create the images he suggested, and place them in C:\New folder. Now suppose this directory contains 3 sub-folders (New folder, New folder (2), etc.), where 2 of them contain the nine colored JPEGs and the third is empty:
clear
$path = "C:\New folder\"
$array = @()
$name = "file*.jpg"
foreach ($i in Get-ChildItem -Path $path -Filter "New*")
{
    $i0 = $path + $i
    Get-ChildItem -Path $i0 -Filter $name | ForEach-Object { $array += $i0 + "\" + $_.name }
    montage $array -tile 2x2 -geometry +5+5 -title $i -page letter montage.pdf
}
My code overwrites the title on all pages with that of the third (empty) folder. Also, it begins adding JPEGs from the next folder onto the previous page, which should be titled with the previous folder's name and contain only that folder's JPEGs.
Imagine you have 9 JPEG files in a directory, called file1.jpg...file9.jpg and they were created like this as lumps of red, green, blue, cyan, magenta, yellow, black and gray.
convert -size 300x400 xc:red file1.jpg
convert -size 300x400 xc:lime file2.jpg
convert -size 300x400 xc:blue file3.jpg
convert -size 300x400 xc:cyan file4.jpg
convert -size 300x400 xc:magenta file5.jpg
convert -size 300x400 xc:yellow file6.jpg
convert -size 300x400 xc:black file7.jpg
convert -size 300x400 xc:gray40 file8.jpg
convert -size 300x400 xc:gray80 file9.jpg
If you now go into that directory and run the following bash script, it will montage the files into pages of A4 with 4 images on each page.
#!/bin/bash
for f in file*jpg; do
convert -label "$f" "$f" -depth 8 miff:-
done | montage -tile 2x2 -geometry +5+5 miff:- -page A4 montage.pdf
The crux of the matter is firstly adding a label to each image based on the filename, and secondly sending the label and the image to a MIFF stream, which is capable of holding many images. The combined group of images is then fed into montage, which arranges them four to a page because of the -tile 2x2. The -geometry sets the spacing between the pictures - bigger numbers mean bigger spaces. Finally, we tell montage that the paper size is A4 and that we want a PDF of all the input images - please!
Of course you can diddle with the background, the sizing, the spacing and the labelling till you are happy - but this should give you the basic idea.
You will get out a PDF called montage.pdf with these three pages:
Page 1
Page 2
Page 3
It should be fairly trivial to convert the loop to an ugly Windows-y FOR loop - for loop help.
For converting your JPGs into PDF files you can use ImageMagick.
There are also various Python APIs for ImageMagick, but in your case it is best to write a simple PowerShell script and execute ImageMagick directly.
Just use the following ImageMagick command:
convert <yourfile.jpg> <newfile.pdf>
Note: You must use file extensions in your command. Otherwise ImageMagick doesn't know what to do.
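If you want several JPGs combined into one multi-page PDF, ImageMagick also accepts multiple input files (or a wildcard) before the output name; the file names here are only placeholders:
convert page1.jpg page2.jpg page3.jpg report.pdf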
You could use glob to get the files (and then sort them if needed). I think the switch could get way too complicated.
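A rough sketch of the glob part (the root folder and the date-prefix filter are assumptions based on the naming scheme in the question):
import glob
import os

root = r'C:\Photos'      # hypothetical root containing the location folders
month_prefix = '15 08'   # files are named like '15 08 03 description.jpg'

for location in sorted(os.listdir(root)):
    pattern = os.path.join(root, location, month_prefix + '*.jpg')
    jpgs = sorted(glob.glob(pattern))   # last month's JPEGs for this location
    if jpgs:
        print(location, len(jpgs), 'images')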
As for the convert part: see Shibumi's answer.
Most of my code takes a .fits file and creates small thumbnail images based upon certain parameters (they're images of galaxies, and all of this is extraneous information...).
Anyways, I managed to figure out a way to save the images as a .pdf, but I don't know how to save them as .fits files instead. The solution needs to be something within the "for" loop, so that it can just save the files en masse, because there are way too many thumbnails to iterate through one by one.
The last two lines are the most relevant ones.
for i in range(0, len(ra_new)):
    ra_new2 = cat['ra'][z&lmass&ra&dec][i]
    dec_new2 = cat['dec'][z&lmass&ra&dec][i]
    target_pixel_x = ((ra_new2-ra_ref)/(pixel_size_x))+reference_pixel_x
    target_pixel_y = ((dec_new2-dec_ref)/(pixel_size_y))+reference_pixel_y
    value = img[target_pixel_x, target_pixel_y] > 0
    ra_new3 = cat['ra'][z&lmass&ra&dec&value][i]
    dec_new3 = cat['dec'][z&lmass&ra&dec&value][i]
    new_target_pixel_x = ((ra_new3-ra_ref)/(pixel_size_x))+reference_pixel_x
    new_target_pixel_y = ((dec_new3-dec_ref)/(pixel_size_y))+reference_pixel_y
    fig = plt.figure(figsize=(5., 5.))
    plt.imshow(img[new_target_pixel_x-200:new_target_pixel_x+200, new_target_pixel_y-200:new_target_pixel_y+200], vmin=-0.01, vmax=0.1, cmap='Greys')
    fig.savefig(image+"PHOTO"+str(i)+'.pdf')
Any ideas SO?
For converting FITS images to thumbnails, I recommend using the mJPEG tool from the "Montage" software package, available here: http://montage.ipac.caltech.edu/docs/mJPEG.html
For example, to convert a directory of FITS images to JPEG files, and then resize them to thumbnails, I would use a shell script like this:
#!/bin/bash
for FILE in /path/to/images/*.fits; do
    mJPEG -gray "$FILE" 5% 90% log -out "$FILE.jpg"
    convert "$FILE.jpg" -resize 64x64 "$FILE.thumbnail.jpg"
done
You can, of course, call these commands from Python instead of a shell script.
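For example, a minimal Python wrapper around the same two commands might look like this (the mJPEG arguments are copied verbatim from the script above):
import glob
import subprocess

for path in glob.glob('/path/to/images/*.fits'):
    jpg = path + '.jpg'
    # Same mJPEG invocation as in the shell script above
    subprocess.run(['mJPEG', '-gray', path, '5%', '90%', 'log', '-out', jpg], check=True)
    # Resize the JPEG to a 64x64 thumbnail with ImageMagick
    subprocess.run(['convert', jpg, '-resize', '64x64', path + '.thumbnail.jpg'], check=True)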
As noted in a comment, the astropy package (if not yet installed) will be useful:
http://astropy.readthedocs.org. You can import the required module at the beginning.
from astropy.io import fits
At the last line, you can save a thumbnail FITS file.
thumb = img[new_target_pixel_x-200:new_target_pixel_x+200,
            new_target_pixel_y-200:new_target_pixel_y+200]
fits.writeto(image + str(i).zfill(3) + '.fits', thumb)