open3d compute distance between mesh and point cloud - python

For a study project, I am trying to get into point cloud comparison.
To keep it short, I have a CAD file (.stl) and several point clouds created by a laser scanner.
Now I want to calculate the difference between the CAD file and each point cloud.
First I started with CloudCompare, which helped a lot to get a basic understanding (point reduction, removing duplicates, creating a mesh and comparing distances).
In Python, I was able to import the files and do some basic calculations. However, I am not able to calculate the distance.
Here is my code:
import numpy as np
import open3d as o3d
#read point cloud
dataname_pcd= "pcd.xyz"
point_cloud = np.loadtxt(input_path+dataname_pcd,skiprows=1)
#read mesh
dataname_mesh = "cad.stl"
mesh = o3d.io.read_triangle_mesh(input_path+dataname_mesh)
print (mesh)
#calculate the distance
mD = o3d.geometry.PointCloud.compute_point_cloud_distance([point_cloud],[mesh])
The distance calculation gives me this error:
"TypeError: compute_point_cloud_distance(): incompatible function arguments. The following argument types are supported:
1. (self: open3d.cpu.pybind.geometry.PointCloud, target: open3d.cpu.pybind.geometry.PointCloud) -> open3d.cpu.pybind.utility.DoubleVector"
Questions:
What pre-transformations of the mesh and point clouds are needed to calculate their distances?
Is there a recommended way to display the differences?
So far I have just used the visualization call below:
o3d.visualization.draw_geometries([pcd],
zoom=0.3412,
front=[0.4257, -0.2125, -0.8795],
lookat=[2.6172, 2.0475, 1.532],
up=[-0.0694, -0.9768, 0.2024])

You need two point clouds for compute_point_cloud_distance(), but one of your geometries is a mesh, which is made of polygons and vertices. Just convert it to a point cloud:
pcd = o3d.geometry.PointCloud() # create an empty point cloud
pcd.points = mesh.vertices # take the vertices of your mesh
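For your files, a minimal end-to-end sketch could look like this (the paths and the assumption that the .xyz file holds plain x, y, z columns come from your question; sample_points_uniformly is just an optional alternative to using the raw vertices). Note that the mesh and the scan must already be in the same coordinate frame (aligned/registered), otherwise the distances are meaningless:
import numpy as np
import open3d as o3d
# Mesh -> point cloud (vertices only; mesh.sample_points_uniformly(100000) gives a denser cloud)
mesh = o3d.io.read_triangle_mesh("cad.stl")
pcd_mesh = o3d.geometry.PointCloud()
pcd_mesh.points = mesh.vertices
# Scanner data -> point cloud
xyz = np.loadtxt("pcd.xyz", skiprows=1)
pcd_scan = o3d.geometry.PointCloud()
pcd_scan.points = o3d.utility.Vector3dVector(xyz[:, :3])
# Distance from every scan point to its nearest neighbour among the mesh vertices
dists = np.asarray(pcd_scan.compute_point_cloud_distance(pcd_mesh))
print(dists.mean(), dists.max())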
I'll illustrate how you can visualize the distances between two clouds, both captured on a moving robot (with a Velodyne LiDAR) and separated by about 1 meter on average. Consider the two clouds before and after registration: the distances between them should decrease, right? Here is some code:
import copy
import pandas as pd
import numpy as np
import open3d as o3d
from matplotlib import pyplot as plt
# Import 2 clouds, paint and show both
pc_1 = o3d.io.read_point_cloud("scan_0.pcd") # 18,421 points
pc_2 = o3d.io.read_point_cloud("scan_1.pcd") # 19,051 points
pc_1.paint_uniform_color([0,0,1])
pc_2.paint_uniform_color([0.5,0.5,0])
o3d.visualization.draw_geometries([pc_1,pc_2])
# Calculate distances of pc_1 to pc_2.
dist_pc1_pc2 = pc_1.compute_point_cloud_distance(pc_2)
# dist_pc1_pc2 is an Open3d object, we need to convert it to a numpy array to
# access the data
dist_pc1_pc2 = np.asarray(dist_pc1_pc2)
# We have 18,421 distances in dist_pc1_pc2, because cloud pc_1 has 18,421 pts.
# Let's make a boxplot, histogram and series plot to visualize it.
# We'll use matplotlib + pandas.
df = pd.DataFrame({"distances": dist_pc1_pc2}) # transform to a dataframe
# Some graphs
ax1 = df.boxplot(return_type="axes") # BOXPLOT
ax2 = df.plot(kind="hist", alpha=0.5, bins = 1000) # HISTOGRAM
ax3 = df.plot(kind="line") # SERIES
plt.show()
# Load a previous transformation to register pc_2 onto pc_1
# I found it with the Fast Global Registration algorithm in Open3D
T = np.array([[ 0.997, -0.062 , 0.038, 1.161],
[ 0.062, 0.9980, 0.002, 0.031],
[-0.038, 0.001, 0.999, 0.077],
[ 0.0, 0.0 , 0.0 , 1.0 ]])
# Make a copy of pc_2 to preserve the original cloud
pc_2_copy = copy.deepcopy(pc_2)
# Apply the transformation T to pc_2_copy
pc_2_copy.transform(T)
o3d.visualization.draw_geometries([pc_1,pc_2_copy]) # show again
# Calculate distances
dist_pc1_pc2_transformed = pc_1.compute_point_cloud_distance(pc_2_copy)
dist_pc1_pc2_transformed = np.asarray(dist_pc1_pc2_transformed)
# Do as before to show differences
df_2 = pd.DataFrame({"distances": dist_pc1_pc2_transformed})
# Some graphs (after registration)
ax1 = df_2.boxplot(return_type="axes") # BOXPLOT
ax2 = df_2.plot(kind="hist", alpha=0.5, bins = 1000) # HISTOGRAM
ax3 = df_2.plot(kind="line") # SERIES
plt.show()
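If you also want to see where the differences are on the cloud itself (your second question), one option (a sketch, not part of the code above) is to colour each point of pc_1 by its distance to pc_2:
from matplotlib import cm
d = dist_pc1_pc2 / dist_pc1_pc2.max()  # normalise distances to [0, 1]
colors = cm.viridis(d)[:, :3]          # map distances to RGB with a matplotlib colormap
pc_1.colors = o3d.utility.Vector3dVector(colors)
o3d.visualization.draw_geometries([pc_1])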

Related

Wrong points buffer using geopandas

Good evening,
I'm working on a product to detect local events (strikes) within subscription areas.
The yellow polygons should be 40 km (left) and 50 km (right) circles around the central red points. The green points are my strikes that should be detected by my process.
It appears that my current use of buffer() does not produce the expected 40/50 km buffer radii, so my process is missing my two events.
My code:
# Create my two events to detect
df_strike = pd.DataFrame(
{ 'Latitude': [27.0779, 31.9974],
'Longitude': [51.5144, 38.7078]})
gdf_events = gpd.GeoDataFrame(df_strike, geometry=gpd.points_from_xy(df_strike.Longitude, df_strike.Latitude),crs = {'init':'epsg:4326'})
# Get location to create buffer
SUB_LOCATION = pd.DataFrame(
{ 'perimeter_id': [1370, 13858],
'distance' : [40.0, 50.0],
'custom_lat': [31.6661, 26.6500],
'custom_lon': [38.6635, 51.5700]})
gdf_locations = gpd.GeoDataFrame(SUB_LOCATION, geometry=gpd.points_from_xy(SUB_LOCATION.custom_lon, SUB_LOCATION.custom_lat), crs = {'init':'epsg:4326'})
# Now reproject to a crs using meters
gdf_locations = gdf_locations.to_crs({'init':'epsg:3857'})
gdf_events = gdf_events.to_crs({'init':'epsg:3857'})
# Create buffer using distance (in meters) from locations
gdf_locations['geometry'] = gdf_locations['geometry'].buffer(gdf_locations['distance']*1000)
# Matching events within buffer
matching_entln = pd.DataFrame(gpd.sjoin(gdf_locations, gdf_events, how='inner'))
But my result is an empty DataFrame, and it should not be. If I compute the distances between events and locations (between the red and green points):
pnt1 = Point(27.0779, 51.5144)
pnt2 = Point(26.65, 51.57)
points_df = gpd.GeoDataFrame({'geometry': [pnt1, pnt2]}, crs='EPSG:4326')
points_df = points_df.to_crs('EPSG:3857')
points_df2 = points_df.shift() #We shift the dataframe by 1 to align pnt1 with pnt2
points_df.distance(points_df2)
Returns: 48662.078723 meters
and
pnt1 = Point(31.9974, 38.7078)
pnt2 = Point(31.6661, 38.6635)
points_df = gpd.GeoDataFrame({'geometry': [pnt1, pnt2]}, crs='EPSG:4326')
points_df = points_df.to_crs('EPSG:3857')
points_df2 = points_df.shift() #We shift the dataframe by 1 to align pnt1 with pnt2
points_df.distance(points_df2)
Returns: 37417.343796 meters
Then I was expecting to have this result:
>>> pd.DataFrame(gpd.sjoin(gdf_locations, gdf_events, how='inner'))
subscriber_id perimeter_id distance custom_lat custom_lon geometry index_right Latitude Longitude
0 19664 1370 40.0 31.6661 38.6635 POLYGON ((2230301.324 3642618.584, 2230089.452... 1 31.9974 38.7078
1 91201 13858 50.0 26.6500 51.5700 POLYGON ((3684499.890 3347425.378, 3684235.050... 0 27.0779 51.5144
I think my buffers are at ~47 km and ~38 km instead of the expected 50 km and 40 km. Am I missing something here that could explain the empty result?
Certain GeoDataFrame methods that involve distances, namely .distance() and .buffer() in this particular case, are based on Euclidean geometry in a map projection coordinate system. Their results are not always reliable; to get correct results you should avoid them and compute directly with geographic coordinates instead. With a proper module/library you get great-circle (geodesic) distances instead of erroneous Euclidean ones, and you avoid mysterious errors.
Here is runnable code that shows how to proceed along the lines I proposed:
import pandas as pd
import geopandas as gpd
from shapely.geometry import Polygon
import cartopy.crs as ccrs
import cartopy
import matplotlib.pyplot as plt
import numpy as np
from pyproj import Geod
# Create my two events to detect
df_strike = pd.DataFrame(
{ 'Latitude': [27.0779, 31.9974],
'Longitude': [51.5144, 38.7078]})
gdf_events = gpd.GeoDataFrame(df_strike, geometry=gpd.points_from_xy(df_strike.Longitude, df_strike.Latitude),crs = {'init':'epsg:4326'})
# Get location to create buffer
SUB_LOCATION = pd.DataFrame(
{ 'perimeter_id': [1370, 13858],
'distance' : [40.0, 50.0],
'custom_lat': [31.6661, 26.6500],
'custom_lon': [38.6635, 51.5700]})
gdf_locations = gpd.GeoDataFrame(SUB_LOCATION, geometry=gpd.points_from_xy(SUB_LOCATION.custom_lon, SUB_LOCATION.custom_lat), crs = {'init':'epsg:4326'})
# Begin: My code----------------
def point_buffer(lon, lat, radius_m):
    # Use this instead of `.buffer()` provided by geodataframe
    # Adapted from:
    # https://stackoverflow.com/questions/31492220/how-to-plot-a-tissot-with-cartopy-and-matplotlib
    geod = Geod(ellps='WGS84')
    num_vtxs = 64
    lons, lats, _ = geod.fwd(np.repeat(lon, num_vtxs),
                             np.repeat(lat, num_vtxs),
                             np.linspace(360, 0, num_vtxs),
                             np.repeat(radius_m, num_vtxs),
                             radians=False
                             )
    return Polygon(zip(lons, lats))
# Get location to create buffer
# Create buffer geometries from points' coordinates and distances using ...
# special function `point_buffer()` defined above
gdf_locations['geometry'] = gdf_locations.apply(lambda row : point_buffer(row.custom_lon, row.custom_lat, 1000*row.distance), axis=1)
# Convert CRS to Mercator (epsg:3395), it will match `ccrs.Mercator()`
# Do not use Web_Mercator (epsg:3857), it is crude approx of 3395
gdf_locations = gdf_locations.to_crs({'init':'epsg:3395'})
gdf_events = gdf_events.to_crs({'init':'epsg:3395'})
# Matching events within buffer
matching_entln = pd.DataFrame(gpd.sjoin(gdf_locations, gdf_events, how='inner'))
# Visualization
# Use cartopy for best result
fig = plt.figure(figsize=(9,8))
ax = fig.add_subplot(projection=ccrs.Mercator())
gdf_locations.plot(color="green", ax=ax, alpha=0.4)
gdf_events.plot(color="red", ax=ax, alpha=0.9, zorder=23)
ax.coastlines(lw=0.3, color="gray")
ax.add_feature(cartopy.feature.LAND)
ax.add_feature(cartopy.feature.OCEAN)
ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True)
# Other helpers
# Horiz/vert lines are plotted to mark the circles' centers
ax.hlines([31.6661,26.6500], 30, 60, transform=ccrs.PlateCarree(), lw=0.1)
ax.vlines([38.6635, 51.5700], 20, 35, transform=ccrs.PlateCarree(), lw=0.1)
ax.set_extent([35, 55, 25, 33], crs=ccrs.PlateCarree())
Spatial joining:
# Matching events within buffer
matching_entln = pd.DataFrame(gpd.sjoin(gdf_locations, gdf_events, how='inner'))
matching_entln[["perimeter_id", "distance", "index_right", "Latitude", "Longitude"]] #custom_lat custom_lon
Compute distances between points for checking.
This checks the result of the spatial join: the computed distances should be smaller than the corresponding buffer distances.
# Use greatcircle arc length
geod = Geod(ellps='WGS84')
# centers of buffered-circles
from_lon1, from_lon2 = [38.6635, 51.5700]
from_lat1, from_lat2 = [31.6661, 26.6500]
# event locations
to_lon1, to_lon2= [51.5144, 38.7078]
to_lat1, to_lat2 = [27.0779, 31.9974]
_,_, dist_m = geod.inv(from_lon1, from_lat1, to_lon2, to_lat2, radians=False)
print(dist_m) #smaller than 40 km == inside
# Get: 36974.419811328786 m.
_,_, dist_m = geod.inv(from_lon2, from_lat2, to_lon1, to_lat1, radians=False)
print(dist_m) #smaller than 50 km == inside
# Get: 47732.76744655724 m.
My notes
Serious geographic computation should be done with geodetic computation directly, without using a map projection of any kind.
Map projections are for graphic visualization; they work fine as long as correct geographic values are computed first and then transformed to the projected CRS correctly.
Computing with map projection (grid) coordinates beyond the projection's allowable limits, and getting bad results, is a common mistake of inexperienced users.
Euclidean computation with map/grid coordinates should be confined to a small extent of the projection area where all kinds of map distortion are very low.
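To make the distortion point concrete, here is a small sketch of my own (using the first location/event pair from the question) that compares the geodesic distance with the Euclidean distance of the same two points projected to Web Mercator:
from pyproj import Geod, Transformer
lon1, lat1 = 38.6635, 31.6661  # buffer centre (perimeter_id 1370)
lon2, lat2 = 38.7078, 31.9974  # event location
geod = Geod(ellps='WGS84')
_, _, dist_geodesic = geod.inv(lon1, lat1, lon2, lat2)
to_3857 = Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True)
x1, y1 = to_3857.transform(lon1, lat1)
x2, y2 = to_3857.transform(lon2, lat2)
dist_projected = ((x2 - x1)**2 + (y2 - y1)**2)**0.5
print(dist_geodesic)   # about 37 km on the ground
print(dist_projected)  # noticeably larger: Web Mercator stretches lengths by roughly 1/cos(latitude)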

Struggling to create watertight meshes out of point cloud data using Open3D in Python

I am trying to create a watertight mesh out of point cloud representing organ contour data from cone beam CT images. My goal is to take two meshes and calculate the volume of intersection between the two of them.
I have tried using each of the methods shown here
Poisson Reconstruction
point_cloud = np.genfromtxt('ct_prostate_contour_data.csv', delimiter=',')
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(point_cloud)
pcd.compute_convex_hull()
pcd.estimate_normals()
pcd.orient_normals_consistent_tangent_plane(10)
mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=10, width=0, scale=20, linear_fit=True)[0]
mesh.compute_vertex_normals()
mesh.paint_uniform_color([0.5, 0.5, 0.5])
mesh.remove_degenerate_triangles()
o3d.visualization.draw_geometries([pcd, mesh], mesh_show_back_face=True)
While this method seemingly leads to a watertight mesh to my eye, mesh.is_watertight() returns False (for the bladder data, however, it returns True). Furthermore, the algorithm extends the mesh above and below the vertical limits of the data. While this isn't a deal-breaking issue, if there were a way to minimize it that would be great.
Poisson Mesh Image
Ball Pivoting
point_cloud = np.genfromtxt('ct_prostate_contour_data.csv', delimiter=',')
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(point_cloud)
pcd.compute_convex_hull()
pcd.estimate_normals()
pcd.orient_normals_consistent_tangent_plane(30)
distances = pcd.compute_nearest_neighbor_distance()
avg_dist = np.mean(distances)
radii = [0.1*avg_dist, 0.5*avg_dist, 1*avg_dist, 2*avg_dist]
r = o3d.utility.DoubleVector(radii)
rec_mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(pcd, r)
o3d.visualization.draw_geometries([pcd, rec_mesh], mesh_show_back_face=True)
This would be my preferred method if I were able to fill the holes, as it simply connects vertices without interpolation. Perhaps if I were able to get this into a state where the only remaining holes were large, I could convert this mesh into a PyVista-compatible mesh and use PyMeshFix to patch the holes (a sketch of that idea follows the image below).
Ball Pivoting Mesh Image
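A rough sketch of that idea (untested on this data): PyMeshFix accepts raw vertex and face arrays, so the ball-pivoting result can be repaired and converted back to Open3D without going through a PyVista mesh:
import numpy as np
import open3d as o3d
import pymeshfix
# rec_mesh is the ball-pivoting result from above
v = np.asarray(rec_mesh.vertices)
f = np.asarray(rec_mesh.triangles)
mf = pymeshfix.MeshFix(v, f)  # (n, 3) float vertices, (m, 3) int faces
mf.repair()                   # fills holes and removes self-intersections
repaired = o3d.geometry.TriangleMesh(
    o3d.utility.Vector3dVector(mf.v),
    o3d.utility.Vector3iVector(mf.f))
repaired.compute_vertex_normals()
print(repaired.is_watertight())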
Alpha Shapes
point_cloud = np.genfromtxt('ct_prostate_contour_data.csv', delimiter=',')
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(point_cloud)
alpha = 8
tetra_mesh, pt_map = o3d.geometry.TetraMesh.create_from_point_cloud(pcd)
mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_alpha_shape(pcd, alpha, tetra_mesh, pt_map)
mesh.compute_vertex_normals()
mesh.paint_uniform_color([0.5, 0.5, 0.5])
mesh.remove_degenerate_triangles()
o3d.visualization.draw_geometries([pcd, mesh])
The results from this are similar to ball pivoting but worse.
Alpha Shapes Mesh Image
Sample Data
ct_prostate_contour_data.csv
ct_rectum_contour_data.csv
ct_bladder_contour_data.csv
I am one of the authors of the PyVista module. We've introduced the vtkSurfaceReconstructionFilter within PyVista in pull request #1617.
import pymeshfix
import numpy as np
import pyvista as pv
pv.set_plot_theme('document')
array = np.genfromtxt('ct_prostate_contour_data.csv', delimiter=',')
point_cloud = pv.PolyData(array)
surf = point_cloud.reconstruct_surface(nbr_sz=20, sample_spacing=2)
mf = pymeshfix.MeshFix(surf)
mf.repair()
repaired = mf.mesh
pl = pv.Plotter()
pl.add_mesh(point_cloud, color='k', point_size=10)
pl.add_mesh(repaired)
pl.add_title('Reconstructed Surface')
pl.show()
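For the original goal (the intersection volume of two organs), a hedged follow-up sketch along the same lines; the file names come from the question, the reconstruction parameters are copied from the answer above and may need tuning per organ, and both meshes need to be watertight, all-triangle surfaces for the boolean operation and the volumes to be meaningful:
import numpy as np
import pymeshfix
import pyvista as pv

def watertight_mesh(csv_path):
    # point cloud -> reconstructed surface -> hole-free mesh
    cloud = pv.PolyData(np.genfromtxt(csv_path, delimiter=','))
    surf = cloud.reconstruct_surface(nbr_sz=20, sample_spacing=2)
    fix = pymeshfix.MeshFix(surf)
    fix.repair()
    return fix.mesh

prostate = watertight_mesh('ct_prostate_contour_data.csv')
bladder = watertight_mesh('ct_bladder_contour_data.csv')
intersection = prostate.boolean_intersection(bladder)
print(prostate.volume, bladder.volume, intersection.volume)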

Optimise 3d surface recontruction from a point cloud

I am quite new to Open3D; I'd like someone to show me around based on my code :)
I am trying to reconstruct a surface from (few) experimental data points. I'd like to have as much flexibility/tunability as possible.
My code is something like this:
import sys
import numpy as np
import open3d as o3d
from matplotlib import pyplot as plt
from pyntcloud import PyntCloud
sys.path.append('..')
output_path=(r"C:\Users\Giammarco\Desktop\PYTHON_graphs\OUTPUTS\\")
poisson_mesh=[]
densities=[]
pcd = o3d.geometry.PointCloud()
pcd.normals = o3d.utility.Vector3dVector(np.zeros((1, 3))) # invalidate existing normals
#load the point cloud
point_cloud=np.array([x,y,z]).T
cloud = PyntCloud.from_instance("open3d", pcd)
pcd.points = o3d.utility.Vector3dVector(point_cloud)
#resize the scale of the sample
vox_grid = o3d.geometry.VoxelGrid.create_from_point_cloud(pcd, 1.)
#present in all point cloud approaches
kdtree = cloud.add_structure("kdtree")
testc = cloud.get_neighbors(k=5)
distances = pcd.compute_nearest_neighbor_distance()
avg_dist = np.mean(distances)
#compute the normals
pcd.estimate_normals(); #mandatory
#orient the normals
#Number of nearest neighbours: 5 is the minimum to have a closed surface with scale >= 2
pcd.orient_normals_consistent_tangent_plane(7)
#Poisson algorithm
poisson_mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9, width=0, scale=3.5, linear_fit=False)
bbox = pcd.get_axis_aligned_bounding_box()
p_mesh_crop = poisson_mesh.crop(bbox)
# cleaning
# p_mesh_crop =poisson_mesh.simplify_quadric_decimation(6000)
# p_mesh_crop.remove_unreferenced_vertices
# p_mesh_crop.remove_degenerate_triangles()
# p_mesh_crop.remove_duplicated_triangles()
# p_mesh_crop.remove_duplicated_vertices()
# p_mesh_crop.remove_non_manifold_edges()
#designing the surface colour
#densities are the real density of features
densities = np.asarray(densities)
density_colors = plt.get_cmap('viridis')((dgo - dgo.min()) / (dgo.max() - dgo.min()))
density_colors = density_colors[:, :3]
#works for the plotting in o3d
poisson_mesh.vertex_colors = o3d.utility.Vector3dVector(density_colors)
o3d.io.write_triangle_mesh(output_path+"bpa_mesh.ply", dec_mesh);
o3d.io.write_triangle_mesh(output_path+"p_mesh_c.ply", poisson_mesh);
# o3d.io.write_triangle_mesh(output_path+"p_mesh_c.ply", p_mesh_crop);
# my_lods = lod_mesh_export(p_mesh_crop, [100000,50000,10000,1000,100], ".ply", output_path)
my_lods = lod_mesh_export(poisson_mesh, [100000,50000,10000,1000,100], ".ply", output_path)
# o3d.visualization.draw_geometries([pcd, p_mesh_crop], mesh_show_back_face=True)
# o3d.visualization.draw_geometries([pcd, poisson_mesh],mesh_show_back_face=True)
# o3d.visualization.draw_geometries([pcd, poisson_mesh[100000]],point_show_normal=True)
# tri_mesh_pois.show()
#SHORTCUTS from keyboard: n = show normals, q = quit, w = mesh
o3d.visualization.draw_geometries([pcd, poisson_mesh],mesh_show_back_face=True)
Some outputs:
I am concerned about the create_from_point_cloud_poisson fit model options: is there a way to tune its parameters beyond depth and scale? Is there an iterative process that I should set up for better convergence (e.g. a threshold)? As you can see, the distance between the calculated surface and the experimental points is quite big.
Is the estimation of normals properly set up? In the second output, some directions are still quite random.
I also tried this syntax: pcd.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05,max_nn=20)); but it doesn't converge to a closed surface, just a plane (see below).
Please give me feedback on my code and suggestions on how to improve it.
Thank you for your support!

Gaussian Mixture Model with discrete data

I have 136 numbers which follow an overlapping mixture of 8 Gaussian distributions. I want to find the mean and variance of each Gaussian. Can you find any mistakes in my code?
file = open("1.txt",'r') #data is in 1.txt like 0,0,0,0,0,0,1,0,0,1,4,4,6,14,25,43,71,93,123,194...
y = [int(i) for i in file.read().split(',')] # make a list whose elements are the data above
x = list(range(1, len(y)+1)) # the x values
z = list(zip(x, y)) # z consists of pairs (1, 0), (2, 0), ...
Through the above process I obtain a list z of the 136 points (x, y) on the xy-plane, with the given data as the y values.
Now I want to obtain each Gaussian distribution's mean and variance, under the basic assumption that the given data consist of 8 overlapping Gaussian distributions.
import numpy as np
from sklearn.mixture import GaussianMixture
data = np.array(z).reshape(-1,1)
model = GaussianMixture(n_components=8).fit(data)
print(model.means_)
file.close()
Actually, I don't know how to make the code print the 8 means and variances... Can anyone help me?
You can use this; I have made some sample code for your visualization:
import numpy as np
from sklearn.mixture import GaussianMixture
import scipy.stats
import matplotlib.pyplot as plt
%matplotlib inline
#Sample data
x = [0,0,0,0,0,0,1,0,0,1,4,4,6,14,25,43,71,93,123,194]
num_components = 2
#Fit a model onto the data
data = np.array(x).reshape(-1,1)
model = GaussianMixture(n_components=num_components).fit(data)
#Get list of means and variances
mu = np.abs(model.means_.flatten())
sd = np.sqrt(np.abs(model.covariances_.flatten()))
#Plotting
extend_window = 50 # for zooming in or out of the graph; the higher it is, the more zoomed out
x_values = np.arange(data.min()-extend_window, data.max()+extend_window, 0.1) #For plotting smooth graphs
plt.plot(data, np.zeros(data.shape), linestyle='None', markersize = 10.0, marker='o') #plot the data on x axis
#plot the different distributions (in this case 2 of them)
for i in range(num_components):
    y_values = scipy.stats.norm(mu[i], sd[i])
    plt.plot(x_values, y_values.pdf(x_values))
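To answer the original question about printing the means and variances directly: they live on the fitted model (with the default covariance_type='full' the covariance array has extra dimensions, hence the flatten). This is a small addition of mine, not part of the plot above:
for k, (mean, var) in enumerate(zip(model.means_.flatten(), model.covariances_.flatten())):
    print(f"component {k}: mean = {mean:.3f}, variance = {var:.3f}")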

How to separate two regions of latlon data on python

I am currently working with BUFR files with wind data. When I read this file on python I get 4 large vectors, latitude vector, longitude vector, wind_direction vector, and wind_speed vector.
Both wind vectors are masked Python arrays because some of the data is invalid. This happens because the data comes from a non-geostationary satellite. In fact, I successfully generated the following image from this BUFR file to show you the general shape that the data takes.
In this image I have plotted a color field to represent the wind speed, while the arrows obviously represent the wind direction.
Please notice the two bands of actual data. Unfortunately, the way I am plotting the data generates a third band (where the color field is smooth) in between the actual data bands. This is an artefact of the function pcolormesh. If I could superimpose two pcolormesh plots, each one representing one of the bands, this problem would disappear.
Unfortunately, I do not know how I could separate the data "regions". I have thought about clustering techniques, but I do not know how to cluster the latlon data using ANOTHER array (the wind data) as the clustering rule.
This is my current code:
#!/usr/bin/python
import bufr
import numpy as np
import sys
import matplotlib
matplotlib.use('Agg')
from matplotlib import pyplot as plt
from matplotlib import mlab
WIND_DIR_INDEX = 97
WIND_SPEED_INDEX = 96
bfrfile = sys.argv[1]
print bfrfile
bfr = bufr.BUFRFile(bfrfile)
lon = []
lat = []
wind_d = []
wind_s = []
for record in bfr:
    for entry in record:
        if entry.index == WIND_DIR_INDEX:
            wind_d.append(entry.data)
        if entry.index == WIND_SPEED_INDEX:
            wind_s.append(entry.data)
        if entry.name.find("LONGITUDE") == 0:
            lon.append(entry.data)
        if entry.name.find("LATITUDE") == 0:
            lat.append(entry.data)
lons = np.concatenate(lon)
lats = np.concatenate(lat)
winds_d = np.concatenate(wind_d)
winds_s = np.concatenate(wind_s)
winds_d = np.ma.masked_greater(winds_d,1.0e+6)
winds_s = np.ma.masked_greater(winds_s,1.0e+6)
windu = np.cos((winds_d-180)*(np.pi/180))
windv = np.sin((winds_d-180)*(np.pi/180))
# Data interpolation for pcolormesh (needs gridded data)
xi = np.linspace(lons.min(),lons.max(),lons.size/10)
yi = np.linspace(lats.min(),lats.max(),lats.size/10)
Z = mlab.griddata(lons,lats,winds_s,xi,yi)
X,Y = np.meshgrid(xi,yi)
mydpi = 96
fig = plt.figure(frameon=True)
fig.set_size_inches(1600/mydpi,1200/mydpi)
ax = plt.Axes(fig,[0,0,1,1])
#ax.set_axis_off()
fig.add_axes(ax)
plt.hold(True);
plt.quiver(lons[::5],lats[::5],windu[::5],windv[::5],linewidths=0)
for method in (ax.set_xticks,ax.set_xticklabels,ax.set_yticks,ax.set_yticklabels):
    method([])
fig.savefig('/home/cendas/bin/python/bufr_ascat.png',bbox_inches=0,dpi=5*mydpi)
mydpi = 96
fig = plt.figure(frameon=True)
fig.set_size_inches(1600/mydpi,1200/mydpi)
ax = plt.Axes(fig,[0,0,1,1])
#ax.set_axis_off()
fig.add_axes(ax)
plt.hold(True);
try:
    plt.pcolormesh(X,Y,Z,alpha=None)
    plt.clim(0,10)
except ValueError:
    pass
    print "Warning: Empty data array."
for method in (ax.set_xticks,ax.set_xticklabels,ax.set_yticks,ax.set_yticklabels):
    method([])
fig.savefig('/home/cendas/bin/python/bufr_ascat_color.png',bbox_inches=0,dpi=5*mydpi)
I then usually follow this Python code with the following terminal commands to combine the images:
convert bufr_ascat.png -transparent white bufr_ascat.png
convert bufr_ascat_color.png -transparent white bufr_ascat_color.png
composite bufr_ascat.png bufr_ascat_color.png bufrascat.png
Don't abuse clustering for this.
What you need is a simple selection / filtering; not a structure discovery process.
Take the mean position of the non-masked data: all non-masked data to the left of that mean is one band, and all non-masked data to the right is the other.
Clustering is the wrong tool for this task.
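A minimal sketch of that selection, my own illustration using the variable names from the question; it splits on the mean longitude of the valid samples and reuses the question's (now-deprecated) mlab.griddata gridding for each band, so the two pcolormesh layers no longer bridge the gap:
valid = ~np.ma.getmaskarray(winds_s)  # True where the wind speed is valid
split_lon = lons[valid].mean()        # mean position of the valid data
for band in (valid & (lons < split_lon), valid & (lons >= split_lon)):
    xi = np.linspace(lons[band].min(), lons[band].max(), lons[band].size // 10)
    yi = np.linspace(lats[band].min(), lats[band].max(), lats[band].size // 10)
    Z = mlab.griddata(lons[band], lats[band], winds_s[band], xi, yi)
    X, Y = np.meshgrid(xi, yi)
    plt.pcolormesh(X, Y, Z, alpha=None)  # one pcolormesh per band, no artificial band in between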
