I am using the Python API of the openslide package to read some ndpi files. When I use the read_region function, it sometimes returns an odd image. What could the problem be?
I have tried reading the full image, and that works fine. Therefore, I think there is no problem with the original file.
from openslide import OpenSlide
import cv2
import numpy as np
slide = OpenSlide('/Users/xiaoying/django/ndpi-rest-api/slide/read/21814102D-PAS - 2018-05-28 17.18.24.ndpi')
image = slide.read_region((1, 0), 6, (780, 960))
image.save('image1.png')
The output image is strange.
As the read_region documentation says, the x and y parameters are always in the coordinate space of level 0. For the behavior you want, you'll need to multiply those parameters by the downsample of the level you're reading.
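For example, a minimal sketch (the slide path and the level-6 location are placeholders):
from openslide import OpenSlide

# Scale a location given in level-6 coordinates up to the level-0
# coordinate space that read_region expects.
slide = OpenSlide('slide.ndpi')  # placeholder path
level = 6
x6, y6 = 1, 0                    # location in level-6 coordinates
ds = slide.level_downsamples[level]
region = slide.read_region((int(x6 * ds), int(y6 * ds)), level, (780, 960))
region.save('image1.png')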
This appears to be a version-related bug; see also
https://github.com/openslide/openslide/issues/291#issuecomment-722935212
The problem seems to be related to libpixman versions 0.38.x. There is a Workaround section written by GunnarFarneback suggesting that you load a different version first, e.g.
export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libpixman-1.so.0.34.0
Update: an easier solution is:
We are using Python 3.6.8+ and this did the trick for us: conda install pixman=0.36.0
Related
I am processing an audio file with librosa as:
import librosa
import soundfile as sf
y, sr = librosa.load('test.wav', sr=22050)
y_processed = some_processing(y)
sf.write('test_processed.wav', y_processed, sr)
y_read, _ = librosa.load('test_processed.wav', sr=22050)
Now the issue is that y_processed and y_read do not match. My understanding is that this comes from some encoding done by the soundfile library. Why is this happening, and how can I get from y_processed to y_read without saving?
According to this article, librosa.load(), among other things, normalizes the bit depth to the range -1 to 1.
I experienced the same problem as you did, where the min and max values of the "loaded" signal were much closer to each other.
Since I don't know exactly how your data differ from each other, this may not help you, but it has helped me.
y_processed_buf = librosa.util.buf_to_float(y_processed)
This seems to be the culprit: it normalizes your values (source code). It is also called during librosa.load(), which is how I stumbled over it.
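As a rough sanity check, here is a sketch (untested, and it assumes some_processing returned 16-bit integer samples, which is the case where buf_to_float applies; y_processed and y_read are the arrays from the question):
import numpy as np
import librosa

# Rescale the processed buffer the same way librosa.load() does, then
# compare it with the signal that was read back from disk.
y_processed_buf = librosa.util.buf_to_float(y_processed, n_bytes=2)
print(np.allclose(y_processed_buf, y_read, atol=1e-4))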
I'm trying to get the coordinates of a number of photos, i.e. I'm trying to get the exif data using a python script. The goal is to georeference all the photos and display their locations on a map. I am encountering problems with exif, however. I'm on Windows (64bit) and installed the corresponding (Strawberry) Perl software and then the Exiftool module (version 12.30) using Anaconda (Navigator), but to no avail. It gives me the following error: ModuleNotFoundError: No module named 'exif'. If I use the command pip install exif it tells me that the requirements are already met. What am I missing here? I'll gladly provide more information if required.
... I also tried an alternative: the exifread module imports without problems but does not seem to have all the functionality I need (I can read the coordinates, but I can't extract them conveniently: it gives me an IfdTag object when I would like an array of the degrees, minutes and seconds that I can then process further).
There is a utility function exifread.utils.get_gps_coords() that provides a convenient way to access the coordinates as a tuple in the format (latitude, longitude). Note that a negative latitude means South and a negative longitude means West.
Example:
import exifread

path = 'image.jpg'
with open(path, 'rb') as f:
    tags = exifread.process_file(f, details=False)

coord = exifread.utils.get_gps_coords(tags)
print(coord)
For the sake of completeness, there are also other modules for working with exif:
Pillow - there is functionality to work with exif
piexif
Also, as mentioned in the comments, you can use ExifTool (the Perl program) via subprocess.
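For example, a minimal sketch (it assumes the exiftool executable is on your PATH and that image.jpg actually carries GPS tags):
import json
import subprocess

# Ask exiftool for the GPS tags as JSON; -n requests plain numeric values
# instead of degree/minute/second strings.
result = subprocess.run(
    ['exiftool', '-json', '-n', '-GPSLatitude', '-GPSLongitude', 'image.jpg'],
    capture_output=True, text=True, check=True,
)
metadata = json.loads(result.stdout)[0]
print(metadata.get('GPSLatitude'), metadata.get('GPSLongitude'))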
I'm trying to access a png file to add to the background of my matplotlib plot. I'm doing something like this:
fn = get_sample_data('Jupiterbackground.png', asfileobj=False)
img = read_png(fn)
but I'm receiving an error like this:
TypeError: Object does not appear to be a file-like object.
So I manually typed in the full path to this png file to see if it would work, but it still didn't, so I'm assuming there's something wrong with the type of file I've chosen. Or am I using a flawed method?
Please include your imports next time; I assume they are:
from matplotlib._png import read_png
from matplotlib.cbook import get_sample_data
fn = get_sample_data('Jupiterbackground.png', asfileobj=False)
img = read_png(fn)
fn is a string because you used asfileobj=False (you can check this with print(fn, type(fn)), which is often a good way to track down TypeErrors), while read_png expects a file object. You can either use asfileobj=True or call open on the string you get from get_sample_data.
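An untested sketch of the second option (keeping in mind the caveat below that matplotlib._png is a private module):
from matplotlib._png import read_png

# Open the file yourself and hand read_png a file object;
# 'Jupiterbackground.png' is assumed to be a path that actually exists.
with open('Jupiterbackground.png', 'rb') as f:
    img = read_png(f)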
I cannot get read_png to work, though; I get "OSError: read past end of file". But the method is undocumented (as far as I could google) and its module name starts with an underscore, which by Python convention means it is not part of the public API (that is, it is a function for internal use by matplotlib).
Like the other answer said, use a different function to accomplish your task.
I tried that too and got the same type of error. But exactly the same thing can be done with:
import matplotlib.image as mpimg
img = mpimg.imread('Jupiterbackground.png')
and this gives the image as a numpy array. Then, if you want, you can even change the datatype with img.astype(#dataType).
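A minimal sketch of that approach (it assumes Jupiterbackground.png is in the current working directory; otherwise pass the full path, as noted below):
import matplotlib.image as mpimg
import matplotlib.pyplot as plt

img = mpimg.imread('Jupiterbackground.png')  # img is a numpy array
img = img.astype('float32')                  # optional datatype change
plt.imshow(img)                              # e.g. draw it as the plot background
plt.show()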
I had a similar problem and solved it by writing the whole path of the image file I wanted to open, like: C:\\...\\Jupiterbackground.png.
I don't know why, but if I didn't, I saw that the program searches for the file where the matplotlib installation files are.
I'm new to Python and I can't seem to find a solution to my problem, even though it seems pretty simple. I have a geometry in ParaView, I'm saving it as a vtk file, and I'm trying to use Python to calculate its volume.
This is the code I'm using:
import vtk
reader = vtk.vtkPolyDataReader()
reader.SetFileName("C:\Users\Pauuu\Google Drive\2016-01\SURF\Sim Vascular\Modelos\apoE183 Day 14 3D\AAA.vtk")
reader.Update()
polydata = reader.GetOutput()
Mass = vtk.vtkMassProperties()
Mass.SetInputConnection(polydata.GetOutput())
Mass.Update()
print "Volume = ", Mass.GetVolume()
print "Surface = ", Mass.GetSurfaceArea()
I think there might be a problem with the way I'm loading the data, and I get AttributeError: GetOutput.
Do you know what might be happening or what I'm doing wrong?
Thank you in advance.
Depending on your version of the vtk package, you may want to test the following syntax if your version is <= 5:
Mass.SetInput(polydata)
Otherwise, the newer syntax is:
Mass.SetInputData(polydata)
PS: you can check the python-wrapped vtk version by running:
import vtk
print vtk.vtkVersion.GetVTKSourceVersion()
You have assigned reader.GetOutput() to polydata. From polydata, I believe you need to call polydata.GetOutputPort().
I guess you have VTK 6. You can provide as input to a filter either the output port of another filter or a vtkDataObject:
Mass.SetInputConnection(reader.GetOutputPort())
# or:
Mass.SetInputData(polydata)  # that is, Mass.SetInputData(reader.GetOutput())
To understand why these methods are not equivalent when updating a pipeline, and for a comparison with the previous version, see http://www.vtk.org/Wiki/VTK/VTK_6_Migration/Removal_of_GetProducerPort and http://www.vtk.org/Wiki/VTK/VTK_6_Migration/Replacement_of_SetInput
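Putting it together, a sketch of the corrected script for VTK 6+ (the file path is a placeholder; a raw string avoids backslash-escape problems on Windows):
import vtk

# Read the polydata and feed it to vtkMassProperties via its output port.
reader = vtk.vtkPolyDataReader()
reader.SetFileName(r"C:\path\to\AAA.vtk")  # placeholder path
reader.Update()

mass = vtk.vtkMassProperties()
mass.SetInputConnection(reader.GetOutputPort())
mass.Update()

print("Volume =", mass.GetVolume())
print("Surface =", mass.GetSurfaceArea())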
I am trying to read raw image data from a cr2 (Canon raw image) file. I want to read only the data (no header, etc.), as unprocessed as possible (i.e. pre-bayer, the most native unprocessed data), and store it in a numpy array. I have tried a bunch of libraries such as opencv, rawkit, and rawpy, but nothing seems to work correctly.
Any suggestions on how I should do this? What should I use? I have tried a bunch of things.
Thank you
Since libraw/dcraw can read cr2, it should be easy to do. With rawpy:
#!/usr/bin/env python
import rawpy
raw = rawpy.imread("/some/path.cr2")
bayer = raw.raw_image # with border
bayer_visible = raw.raw_image_visible # just visible area
Both bayer and bayer_visible are then 2D numpy arrays.
You can use rawkit to get this data; however, you won't be able to use the actual rawkit module (which provides higher-level APIs for dealing with raw images). Instead, you'll mostly want to use the libraw module, which allows you to access the underlying LibRaw APIs.
It's hard to tell exactly what you want from this question, but I'm going to assume the following: Raw bayer data, including the "masked" border pixels (which aren't displayed, but are used to calculate various things about the image). Something like the following (completely untested) script will allow you to get what you want:
#!/usr/bin/env python
import ctypes

from rawkit.raw import Raw

with Raw(filename="some_file.CR2") as raw:
    raw.unpack()

    # For more information, see the LibRaw docs:
    # http://www.libraw.org/docs/API-datastruct-eng.html#libraw_rawdata_t
    rawdata = raw.data.contents.rawdata

    data_size = rawdata.sizes.raw_height * rawdata.sizes.raw_width
    data_pointer = ctypes.cast(
        rawdata.raw_image,
        ctypes.POINTER(ctypes.c_ushort * data_size)
    )
    data = data_pointer.contents

    # Grab the first few pixels for demonstration purposes...
    for i in range(5):
        print('Pixel {}: {}'.format(i, data[i]))
There's a good chance that I'm misunderstanding something and the size is off, in which case this will segfault eventually, but this isn't something I've tried to make LibRaw do before.
More information can be found in this question on the LibRaw forums, or in the LibRaw struct docs.
Storing the result in a numpy array I leave as an exercise for the reader, or for a follow-up answer (I have no experience with numpy).
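For example, a possible follow-up (a sketch that assumes the data and rawdata variables from the script above are still in scope, i.e. it runs inside the with block):
import numpy as np

# Wrap the ctypes buffer in a numpy array and reshape it to the sensor dimensions.
bayer = np.ctypeslib.as_array(data).reshape(
    rawdata.sizes.raw_height, rawdata.sizes.raw_width
)
print(bayer.shape, bayer.dtype)  # 2D array of unsigned 16-bit samples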