I have subsurface ocean temperature data down to 300 m depth, on irregularly spaced depth levels, and I want to calculate the ocean heat content (OHC) for 0-300 m in Python. The cell area is computed with the CDO tool.
The formula is:
OHC = sea water density * specific heat capacity * integral of temperature over the 0-300 m depth range.
I have written the following code:
# OHC calculation
def ocean_heat(temperature, cell_area):
    density = 1026   # sea water density, kg/m^3
    c_p = 3990       # specific heat capacity, J/(kg K)
    heat = temperature.sum(dim=['depth', 'lon', 'lat']) * density * c_p * cell_area
    return heat
However, the depth levels are not evenly spaced, so I think the temperature needs to be weighted by the layer thickness. Could anyone explain the proper procedure to compute OHC? If there are other sources or modules for this, please let me know.
Thank you.
If your dataset is a NetCDF file I suggest taking a look at the Xarray package. It is used to work with labeled multidimensional arrays. It is very popular in Earth Science.
Here is an example from Pangeo using Xarray to calculate ocean heat content:
https://gallery.pangeo.io/repos/NCAR/notebook-gallery/notebooks/Run-Anywhere/Ocean-Heat-Content/OHC_tutorial.html
The first part is about speeding up the computation with Dask. Task 8 is where they start calculating ocean heat content.
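For reference, here is a minimal sketch of such a depth-weighted integral with xarray (not the Pangeo notebook itself). It assumes a temperature DataArray `temp` with dims (time, depth, lat, lon), depth in metres increasing downward, and the 2-D `cell_area` from CDO in m^2; all names are placeholders.

import numpy as np
import xarray as xr

RHO = 1026.0   # sea water density, kg/m^3
C_P = 3990.0   # specific heat capacity, J/(kg K)

def ocean_heat_content(temp, cell_area, max_depth=300.0):
    # Restrict to the 0-300 m layer.
    layer = temp.sel(depth=slice(0, max_depth))
    depth = layer['depth']
    # Approximate layer thicknesses (m) from the irregular depth levels.
    dz = xr.DataArray(np.gradient(depth.values), dims='depth',
                      coords={'depth': depth.values})
    # Vertical integral of T * dz, scaled by rho * c_p -> J/m^2 per column.
    ohc_per_area = RHO * C_P * (layer * dz).sum(dim='depth')
    # Weight by the cell area and sum horizontally -> Joules.
    return (ohc_per_area * cell_area).sum(dim=['lat', 'lon'])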
I am trying to create a 3D surface plot like this one:
https://plotly.com/python/3d-surface-plots/
The problem is that I only have limited data: I know the peak locations and the peak heights, but the rest of the data is missing. In the example, the z-data needs 25 x 25 = 625 values to generate a valid surface plot.
My data looks something like this:
So my question is: is it possible to use some polynomial function, with the peak locations and heights as constraints, to generate the z-data from the information I have?
Open to any discussion. Any form of suggestion is appreciated.
Though I don't like this form of interpolation, which is pretty artificial, you can use the following trick:
F(P) = (Σ Fk / d(P, Pk)) / (Σ 1 / d(P, Pk))
P is the point where you interpolate and Pk are the known peak positions. d is the Euclidean distance. (This gives sharp peaks; the squared distance gives smooth ones.)
Unfortunately, far from the peaks this formula tends to the average of the Fk, producing a horizontal plateau that lies above some of the Fk and turns them into downward peaks. You can work around this by adding fake peaks of negative height around your data set to lower the average.
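A small Python sketch of that trick (the peak coordinates and heights below are made-up placeholders; power=2 gives the smoother squared-distance variant):

import numpy as np

peaks_xy = np.array([[5.0, 5.0], [15.0, 20.0], [20.0, 8.0]])  # known peak positions (x, y)
peaks_f = np.array([3.0, 5.0, 2.0])                           # known peak heights

def idw_surface(X, Y, power=1, eps=1e-9):
    # F(P) = sum_k(F_k / d(P, P_k)^power) / sum_k(1 / d(P, P_k)^power)
    d = np.sqrt((X[..., None] - peaks_xy[:, 0])**2 +
                (Y[..., None] - peaks_xy[:, 1])**2) + eps
    w = 1.0 / d**power
    return (w * peaks_f).sum(axis=-1) / w.sum(axis=-1)

x = np.linspace(0, 25, 25)
X, Y = np.meshgrid(x, x)
Z = idw_surface(X, Y)   # 25 x 25 grid, ready for a Plotly Surface plot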
Context: I am trying to model ocean surface waves. I have played with a linear superposition of sinusoids, and with the fact that phase speed (following a wave's crest) is twice the group speed (following a group of waves). More recently, I used such a random superposition of waves to model the wiggling lines formed by the refraction of light (the bending of a ray's trajectory at the interface between air and water), called caustics... So far so good...
Observing real-life ocean waves taught me that, while a single wave is well approximated by a sinusoidal waveform, each wave is qualitatively a bit different. Typical for these surface gravity waves are the sharp crests and flat troughs. As a matter of fact, modelling ocean waves is on the one hand very useful (ocean dynamics and its impact on climate, modelling tides, tsunamis, diffraction in a bay to predict coastline evolution, ...) but quite demanding, despite a well-known mathematical model. Starting from the Navier-Stokes equations for an incompressible fluid (water) in a gravitational field leads to Luke's variational principle under certain simplifying conditions. Further simplifications lead to the approximate solution given by Stokes, which gives the following shape as the sum of different harmonics:
This seems good enough for the moment, and I will capture this shape in this notebook, notably to check whether it also applies to a random mixture of such waves... However, a simple implementation such as
import numpy as np
import matplotlib.pyplot as plt

phi = (1 + np.sqrt(5)) / 2   # golden ratio, used for the figure aspect ratio
fig_width = 13               # placeholder value; set in the full notebook

def stokes(pos, a=.3/2/np.pi, k=2*np.pi):
    # k*a is a dimensionless expansion parameter
    elevation = (1 - 1/16*(k*a)**2) * np.cos(k*pos)
    elevation += 1/2*(k*a)**1 * np.cos(2*k*pos)
    elevation += 3/8*(k*a)**2 * np.cos(3*k*pos)
    return a * elevation

N_pos = 501
pos = np.linspace(0, 1, N_pos, endpoint=True)

fig, ax = plt.subplots(figsize=(fig_width, fig_width/phi**3))
ax.plot(pos, stokes(pos))
ax.set_xlim(np.min(pos), np.max(pos));

fig, ax = plt.subplots(figsize=(fig_width, fig_width/phi**3))
for a in np.linspace(.001, .2, 10, endpoint=True):
    elevation = stokes(pos, a=a)
    ax.plot(pos, elevation)
ax.set_xlim(np.min(pos), np.max(pos));
Gives something similar to:
(The full implementation gives more details.)
My present implementation does not seem to fit what is displayed on the Wikipedia page, and I cannot spot the bug I may have introduced... Does someone see it?
I have several equations as follows:
windGust8 = -53.3 + (28.3 * log(windSpeed))
windGust7to8 = -70.0 + (30.8 * log(windSpeed))
windGust6to7 = -29.2 + (17.7 * log(windSpeed))
windGust6 = -32.3 + (16.7 * log(windSpeed))
where windSpeed is the wind speed from a model at 850mb.
I then use lapse rates from a model to determine which equation to use as such:
windGustTemp = where(greater(lapseRate,8.00),windGust8,windGustTemp)
windGustTemp = where(logical_and(less_equal(lapseRate,8.00),greater(lapseRate,7.00)),windGust7to8,windGustTemp)
windGustTemp = where(logical_and(less_equal(lapseRate,7.00),greater(lapseRate,6.00)),windGust6to7,windGustTemp)
windGustTemp = where(less_equal(lapseRate,6.00),windGust6,windGustTemp)
return windGustTemp
When I plot these equations as filled contours on a map there are sharp gradients as you would expect. What I would like to do is interpolate between these equations and maybe smooth a little to give a clean look to the graphic. I assume I would use scipy.interpolate.interp1d, but I am not sure how I would apply that to this circumstance. Any help would be much appreciated! Thanks!
EDIT to add output image example
This is an example of the output image generated currently. You can see how abrupt the edges are. I want to interpolate and smooth out this data.
The model data has points every 1 km. For each point, the lapse rate is determined first...let's say 7.4. Based on the logic above, the code then knows to use the equation associated with a lapse rate between 7 and 8. It takes the model wind speed for that point, plugs it into that equation, and gets a number that it plots on the map. This is done for all the model points and generates the image above.
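Not part of the original post, but a small numpy sketch of the piecewise selection described above, followed by one possible way to soften the boundaries (a Gaussian spatial filter from scipy.ndimage rather than interp1d; sigma, in grid points, is an assumed tuning knob):

import numpy as np
from scipy.ndimage import gaussian_filter

def smoothed_wind_gust(wind_speed, lapse_rate, sigma=3):
    log_ws = np.log(wind_speed)
    # Hard selection of the regression equation by lapse-rate regime.
    gust = np.where(lapse_rate > 8.0, -53.3 + 28.3 * log_ws,
           np.where(lapse_rate > 7.0, -70.0 + 30.8 * log_ws,
           np.where(lapse_rate > 6.0, -29.2 + 17.7 * log_ws,
                                      -32.3 + 16.7 * log_ws)))
    # Soften the sharp regime boundaries for plotting.
    return gaussian_filter(gust, sigma=sigma)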
I'm looking at 2D (depth / RGB-D) to 3D point cloud projection. Having spent the past few days reading up, I have a vague understanding of projection matrices, and looking at the functions provided by Open3D I believe the 2D-to-3D projection can be done with the code below:
z = d / depth_scale
x = (u - cx) * z / fx
y = (v - cy) * z / fy
This works when depth is calculated from a solid plane. In my case depth is calculated from a central point (pinhole camera) and as such the projection extrudes away from the central point.
For example:
In image 1 the lengths of A,B,C may be 11m, 10m, 11m whilst in image 2 they will be all equal at 10m, 10m, 10m (these measurements are not accurate, just to prove the point).
The result is that using measurements from image 1, with the above formula, will give the perception that the object is curved (away from the center). How can I amend the formula to adjust for this?
If anyone has examples in Python that would be super useful.
Thank you
1) Check what your definition of 'depth' is. It would appear your depth is measured as the distance to the camera center, rather than the distance to the principal plane. This would explain why your reconstructions look 'curved'.
2) If you have read about projection matrices, then a good way to approach the problem is to use them. https://towardsdatascience.com/inverse-projection-transformation-c866ccedef1c
(Recommended book: Multiple view geometry, by Hartley & Zisserman)
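To make point 1 concrete, here is a hedged numpy sketch (not Open3D's API) that converts a ray-length depth, measured from the camera centre, into a planar depth before applying the formulas from the question; fx, fy, cx, cy are the pinhole intrinsics, and u, v, d can be scalars or arrays.

import numpy as np

def backproject(u, v, d, fx, fy, cx, cy, depth_scale=1.0):
    d = d / depth_scale
    # Direction of the pixel ray in camera coordinates (before normalisation).
    x_n = (u - cx) / fx
    y_n = (v - cy) / fy
    # If d is the Euclidean distance along the ray, divide by the ray length
    # to recover the planar depth z; if d is already planar, skip this step.
    z = d / np.sqrt(x_n**2 + y_n**2 + 1.0)
    return x_n * z, y_n * z, z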
I need to calculate the solar zenith angle for approximately 106,000,000 different coordinates. These coordinates correspond to the pixels of an image projected onto the Earth's surface, after the image was taken by a camera on an airplane.
I am using pvlib.solarposition.get_solarposition() to calculate the solar zenith angle. The returned values are correct (I compared some results with the NOAA website), but since I need the solar zenith angle for so many coordinate pairs, Python takes many days (about 5) to finish executing the function.
As I am a beginner in programming, I wonder whether there is any way to accelerate the solar zenith angle calculation.
Below found the part of the code implemented which calculate the solar zenith angle:
import numpy as np
import pvlib as pvl

sol_apar_zen = []
for i in range(size3):
    solar_position = np.array(
        pvl.solarposition.get_solarposition(Data_time_index, lat_long[i][0], lat_long[i][1]))
    sol_apar_zen.append(solar_position[0][0])  # apparent zenith for this coordinate
print(len(sol_apar_zen))
Technically, if you need to compute the solar zenith angle quickly for a large list (array), there are more efficient algorithms than pvlib's, for example the one described by Roberto Grena in 2012 (https://doi.org/10.1016/j.solener.2012.01.024).
I found a suitable implementation here: https://github.com/david-salac/Fast-SZA-and-SAA-computation (you might need some tweaks, but it is simple to use, and it is also implemented for languages other than Python, such as C/C++ and Go).
Example of how to use it:
from sza_saa_grena import solar_zenith_and_azimuth_angle
# ...
# A random time series:
time_array = pd.date_range("2020/1/1", periods=87_600, freq="10T", tz="UTC")
sza, saa = solar_zenith_and_azimuth_angle(longitude=-0.12435,  # London longitude
                                          latitude=51.48728,   # London latitude
                                          time_utc=time_array)
The unit test (in the project's folder) shows that within the normal latitude range the error is minimal.
Since your coordinates represent a grid, another option would be to calculate the zenith angle for a subset of your coordinates and then do a 2-D interpolation to obtain the remainder. Taking 1 point in 100 in both directions would reduce your calculation time by a factor of 10,000.
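A rough sketch of that subsampling idea, assuming the coordinates come from regular 2-D image arrays `lats` and `lons` (degrees) and a single timestamp `when`; the names here are placeholders, not from the original code.

import numpy as np
import pvlib
from scipy.interpolate import RegularGridInterpolator

def subsampled_sza(lats, lons, when, step=100):
    iy = np.arange(0, lats.shape[0], step)
    ix = np.arange(0, lats.shape[1], step)
    # Apparent zenith only on every `step`-th pixel (same pvlib call as in the question).
    coarse = np.empty((len(iy), len(ix)))
    for a, i in enumerate(iy):
        for b, j in enumerate(ix):
            pos = pvlib.solarposition.get_solarposition(when, lats[i, j], lons[i, j])
            coarse[a, b] = pos['apparent_zenith'].iloc[0]
    # 2-D interpolation back onto the full pixel grid.
    interp = RegularGridInterpolator((iy, ix), coarse,
                                     bounds_error=False, fill_value=None)
    yy, xx = np.meshgrid(np.arange(lats.shape[0]),
                         np.arange(lats.shape[1]), indexing='ij')
    return interp(np.stack([yy, xx], axis=-1))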
If you want to speed up this calculation you can use the numba core (if numba is installed):
location.get_solarposition(
datetimes,
method='nrel_numba'
)
Otherwise you would have to implement your own calculation based on vectorized numpy arrays. I know it is possible, but I am not allowed to share the code. You can find the formulation if you search for Spencer (1971) solar position.
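For completeness, here is a vectorized numpy sketch of the publicly documented Spencer (1971) formulation (coefficients as given in NOAA's solar calculation notes); this is not the code referred to above, and it is less accurate than pvlib's SPA-based methods.

import numpy as np
import pandas as pd

def spencer_zenith(lat_deg, lon_deg, times_utc):
    """Approximate solar zenith angle (degrees); times_utc is a pandas DatetimeIndex in UTC."""
    lat = np.radians(lat_deg)
    doy = times_utc.dayofyear.values
    hour = (times_utc.hour + times_utc.minute / 60 + times_utc.second / 3600).values
    # Fractional year (radians).
    gamma = 2 * np.pi / 365 * (doy - 1 + (hour - 12) / 24)
    # Equation of time (minutes) and solar declination (radians), Spencer (1971).
    eqtime = 229.18 * (0.000075 + 0.001868 * np.cos(gamma) - 0.032077 * np.sin(gamma)
                       - 0.014615 * np.cos(2 * gamma) - 0.040849 * np.sin(2 * gamma))
    decl = (0.006918 - 0.399912 * np.cos(gamma) + 0.070257 * np.sin(gamma)
            - 0.006758 * np.cos(2 * gamma) + 0.000907 * np.sin(2 * gamma)
            - 0.002697 * np.cos(3 * gamma) + 0.00148 * np.sin(3 * gamma))
    # True solar time (minutes) and hour angle (radians); longitude positive east.
    tst = hour * 60 + eqtime + 4 * lon_deg
    ha = np.radians(tst / 4 - 180)
    cos_zen = np.sin(lat) * np.sin(decl) + np.cos(lat) * np.cos(decl) * np.cos(ha)
    return np.degrees(np.arccos(np.clip(cos_zen, -1, 1)))

# Example: zenith angles for one location over a one-day hourly series.
times = pd.date_range("2020-01-01", periods=24, freq="H", tz="UTC")
print(spencer_zenith(51.48728, -0.12435, times))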