GDAL Warp produces a black image - python

I am calling GDAL Warp, using the Python bindings, on a NITF file, and it simply outputs all zero values, which creates an empty black image. The command I'm calling is:
import osgeo.gdal as gdal
gdal.Warp("out.ntf", "inp.ntf")
I've tried using Translate as a sort of test to make sure GDAL as a whole is functioning, and it seems to output properly. The image data is all correct and displays as expected. Any thoughts as to what could be going wrong?

One important thing is to close the Dataset; whether it matters depends a little on how you run the code (script, REPL, notebook, etc.).
This Python interface to the command-line utilities returns an open Dataset, so you can explicitly close it by releasing the reference:
import osgeo.gdal as gdal
ds = gdal.Warp("out.ntf", "inp.ntf")
ds = None
That ensures, for example, that anything still held in GDAL's block cache is properly flushed to disk.
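A minimal sketch of the full pattern (the explicit FlushCache call is optional, since dropping the last reference flushes and closes the dataset too):
import osgeo.gdal as gdal

ds = gdal.Warp("out.ntf", "inp.ntf")
if ds is None:
    raise RuntimeError("Warp failed")  # Warp returns None on failure
ds.FlushCache()  # push any cached blocks to disk
ds = None        # close the dataset and release file handles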

Related

Save PSD to JPG without invisible layers

I would like to export a PSD file, in which some group layers have been set to invisible, to JPG.
Currently, I set the relevant group layers to invisible (by looping through the PSD layers and setting group.visible = False on each concerned group) and then save that PSD.
The newly saved PSD does have the concerned group layers invisible.
Later, the new PSD is converted to JPG.
However, the JPG output still shows the invisible layers.
The Python code used to go from the newly saved PSD to JPG is much the same as the code used for saving (we used psd_tools):
from psd_tools import PSDImage
image = PSDImage.open(PSDFilePath)
image.save(outputPath, "JPEG")
I have also tried the command-line convert on Linux, but it also showed the invisible layers after conversion.
So, my question is whether there's a way to either remove the invisible layers before saving to JPG within the same script, without calling a script inside Photoshop (which requires opening Photoshop instances), or to export to JPG without the invisible layers, using Python code or maybe a command line.
In the last few days I found something that does the trick, from this StackOverflow post: adding composite(force=True).
from psd_tools import PSDImage
image = PSDImage.open(PSDFilePath)
image.composite(force=True).save(outputPath) #outputPath is expected to be a JPG file
This works well for light files. However, when the PSD is very large (around 1 GB), it takes too much time.
Since I would like to run this operation daily on more than 1000 files, it would take days to finish.
So, I am still looking for another solution.
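One way to cut the wall-clock time without changing psd_tools itself is to composite several files in parallel. A minimal sketch, assuming the files are independent; get_psd_paths and output_path_for are hypothetical helpers you would supply:
from multiprocessing import Pool
from psd_tools import PSDImage

def convert(psd_path):
    # force=True re-composites from the layers, honoring the visibility flags
    image = PSDImage.open(psd_path)
    image.composite(force=True).save(output_path_for(psd_path))  # hypothetical helper

if __name__ == "__main__":
    with Pool(processes=4) as pool:  # tune to your CPU and RAM budget
        pool.map(convert, get_psd_paths())  # hypothetical helper listing the PSD files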
Here's a link to a lighter sample file; unfortunately, I could not share the real one for professional reasons.
https://file.io/yxdDxlzMeMsA

OpenCV imread() Method returns None

I was playing around with the OpenCV library, and tried to open a small image, with the following code:
import cv2
img = cv2.imread(r'penguin.jpeg')
print(img)
I basically wanted to take a look at the array of pixels in the image; however, the print simply outputs None.
Both my .py file and the image are on my desktop, so I believe the problem is not the path.
I am also aware of some issues with imread() and JPEG images, however I get the same result with the PNG version of this image.
This had been working fine up until today, so I am kind of clueless.
Can anyone tell me what might be happening, or what I might be doing wrong?
Thank you so much in advance!
Consulting the OpenCV-Python Tutorials, we are warned that:
Even if the image path is wrong, it won’t throw any error, but print img will give you None
where print img is the Python 2 analog of your print(img).
The code you have written is correct: having replicated your setup, I can print an array representation of a test penguin.jpeg image locally.
As commented by Rashid Ladjouzi, the path is probably incorrect, especially given that you mention the script worked previously. I would test this with the following code, which should print True:
import os
print(r'penguin.jpeg' in os.listdir("."))
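If that prints False, the working directory is the likely culprit: imread resolves relative paths against the directory Python was launched from, not against the script's location. A minimal check, assuming the image really does live on the desktop:
import os
import cv2

print(os.getcwd())  # the directory relative paths are resolved against
path = os.path.join(os.path.expanduser("~"), "Desktop", "penguin.jpeg")
img = cv2.imread(path)
print(img is None)  # False once the path is right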

Why is skimage.io.ImageCollection returning every other image in a folder as an empty tuple?

I have followed the advanced scikit-image panoramic image tutorial (1) and gotten it to work properly. However, trying to use UAV images (Adobe Buttes Flight 1 Raw) downloaded from (2) results in an interesting problem when using the skimage ImageCollection. Printing the problem images gives me some sort of wrapper:
"PIL.MpoImagePlugin.MpoImageFile image mode=RGB size=4000x3000 at 0x1EB09B86D30"
If I use io.imread to read a problem image, it appears to work, printing a shape of "(2,)". However, attempting to print the individually read image gives a type error:
"unorderable types: int() >= MpoImageFile()"
followed by a system error:
"method-wrapper 'le' of MpoImageFile object at 0x000001EB09F20CC0 returned a result with an error set"
I'm really at a loss here. I'm relatively new to Python and don't understand why the program isn't working. The images are a bit large (5.65 MB each), but my main program handles the 'good' images, if slowly.
I've tried the following solutions to no avail:
1) uninstalling Pillow, installing both libjpeg & libz (the names seem to have changed), then reinstalling Pillow, as suggested in github.com/scikit-image/scikit-image/issues/2000;
2) confirming I'm not using a GPU, parallel processing, or TensorFlow (also from github.com/scikit-image/scikit-image/issues/2000);
3) making sure my Anaconda installation is fully updated.
This is the minimalist example I made to demonstrate the issue. It should run in any jupyter notebook.
import numpy as np
from skimage.color import rgb2gray
from skimage import io

imgs = io.ImageCollection(r'test\*')  # raw string so the backslash isn't treated as an escape
"""
print(imgs[0]) # Looks okay, numpy array
print(imgs[1]) # wrapper
print(imgs[2]) # Looks okay, numpy array
"""
for i in range(6):
    print(np.shape(imgs[i]))

individual = io.imread(r'test\DJI_0002.jpg')
print(np.shape(individual))
#print(individual)
After some more testing, this issue goes away when the images are resized to 50%. Is there a limit to the image size skimage can read? This is still not an acceptable solution; I would greatly prefer not having to resize all the images that get pieced together.
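For what it's worth, the printed wrapper suggests Pillow is opening these files as MPO containers (multi-picture JPEGs, which many drone cameras produce), so the entry is a multi-frame object rather than a plain array. A minimal sketch of a possible workaround under that assumption, reading just the first frame through Pillow directly:
from PIL import Image
import numpy as np

im = Image.open(r'test\DJI_0002.jpg')  # Pillow may detect this as an MPO container
im.seek(0)                             # select the first (full-resolution) frame
arr = np.asarray(im.convert('RGB'))    # plain height x width x 3 numpy array
print(arr.shape)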

PNG to PGM conversion without quality loss

So, I have a PNG image file like the following example, and I need it converted into PGM format.
I'm using Ubuntu and Python, so any terminal or Python tool would suit just fine. And there are certainly plenty of ways to do this: the ImageMagick convert command, the pngtopam package, the Python PIL library, etc.
But the point is, the quality of the image is essential in my case, and all of those failed to keep it, always ending up with a visibly degraded result.
No need to mention this is totally not what I want to see. And the interesting thing is that when I tried to convert the same image into PGM manually using GIMP, it turned out quite well, looking exactly the way I'd like it to, i.e. the same as the PNG one.
So, that means it is possible to get a PGM image of fine quality after all, and now I'd really appreciate it if someone could tell me how to do that using terminal/Python tools. I guess there should be some ImageMagick option that does the trick; it's just that I'm not aware of one.
You lost the antialiasing, which is conveyed via the alpha channel. To preserve it, use:
convert in.png -flatten out.pgm
Without -flatten, convert simply deletes the alpha channel; with -flatten it composites the input image against the background color, which is white by default.
Here are the results, magnified 10x so you can see what's going on (images omitted: not flattened vs. flattened).
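If you would rather stay in Python, the same flattening can be done with Pillow. A minimal sketch, assuming a white background like ImageMagick's default:
from PIL import Image

im = Image.open("in.png").convert("RGBA")
bg = Image.new("RGBA", im.size, "white")           # background to composite against
flat = Image.alpha_composite(bg, im).convert("L")  # single channel, as PGM requires
flat.save("out.pgm")                               # Pillow writes mode "L" as PGM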

Python OpenCv gives error 'cv2.cv.cvseq' object has no attribute 'total'

I was looking for some image edge detection code in Python on the web and found some interesting stuff that I wanted to take a look at. Unfortunately, I keep getting this error: 'cv2.cv.cvseq' object has no attribute 'total'
The line of code at fault is:
lines = HoughLines2( dst, storage, CV_HOUGH_STANDARD, 1, CV_PI/180, 100, 0, 0 );
The whole program has an option to toggle between Hough Standard and Hough Probabilistic; when I set it to use the probabilistic approach (which doesn't need the lines.total piece of code), it runs fine, so I'm fairly certain I have everything I need installed and imported.
I don't know why you are using the old 'cv' API, when the new 'cv2' API is quite simple and everything is returned as a Python list or NumPy array, which is easy to handle from the user's point of view.
The outputs of the HoughLines functions are NumPy arrays of shapes (1, number of lines, 2) and (1, number of lines, 4). You can do whatever you want with them, since you have all the NumPy functions at hand.
Here is a sample for detecting lines, which is same as you mentioned, ie toggling between hough standard and hough probabilistic: houghlines.py
Below are the results I obtained using that code (images omitted: Hough Standard vs. Hough Probabilistic).
Of course, the lines detected depend on the parameter values you use, so adjust them as you like and experiment.
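For reference, here is a minimal cv2 sketch of both variants; building.jpg and the Canny thresholds are placeholders, and in the OpenCV 2.x bindings the results have the shapes described above:
import cv2
import numpy as np

img = cv2.imread('building.jpg')  # placeholder input image
edges = cv2.Canny(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 50, 150)

# standard transform: each line is (rho, theta)
lines = cv2.HoughLines(edges, 1, np.pi / 180, 100)

# probabilistic transform: each line is (x1, y1, x2, y2)
linesP = cv2.HoughLinesP(edges, 1, np.pi / 180, 100, minLineLength=100, maxLineGap=10)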
They have discontinued cvseq in cv2; there's no module cv2.cv.cvseq in OpenCV 2.3.1.
You should use:
lines = cv2.HoughLines(dst, 1, np.pi / 180, 100)  # np is numpy; CV_PI does not exist in the Python bindings
http://opencv.itseez.com/modules/imgproc/doc/feature_detection.html?highlight=houghlines#cv2.HoughLines
The cv2 library is much more user-friendly, fast, and effective. You should move to OpenCV 2.3.1 or 2.4.0. If you have any problems installing OpenCV 2.3.1, see http://jayrambhia.wordpress.com/2012/05/02/install-opencv-2-3-1-and-simplecv-in-ubuntu-12-04-precise-pangolin-arch-linux/
