Fluid flow, heat transfer and Python

EDIT:
I will give some more information about the whole problem. The project is at an early stage, and my question actually concerns only a narrow portion of it.
the final goal:
I am currently trying to simulate the flow of hot air around a rigid obstacle in Python. I have a steady inflow of air; the flow in the bulk is transient and turbulent. The aim of the whole exercise is to understand how
- the air flow behaves
- the obstacle heats up
- the air cools down and the air pressure drops
done so far:
Not much, the project is at an early stage. I have a 2D rectangular domain and a circular obstacle. The mesh gets finer at the boundary between bulk and obstacle, since that is where the interesting behaviour happens. Currently I only consider the airflow, with no convection or heat transfer. I use the FEniCS software collection to solve the Navier-Stokes equations. FEniCS comes with an example of an N-S solver using the Chorin projection method, which I adapted to my setting. I model the rigid body as an area with a no-slip boundary condition (i.e. I set the velocity of the air flow to zero there). The solver still solves the N-S equations in that area; in particular, the pressure inside the obstacle changes over time. It would probably be better to avoid this and restrict the N-S solver to the bulk, but at the moment I don't think this affects the speed very much.
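For illustration, the geometry could instead be built with the obstacle subtracted from the domain, so the solver never sees the interior of the rigid body. A minimal sketch using legacy FEniCS with mshr; the channel dimensions and cylinder position are placeholders, not my actual values:
from mshr import Rectangle, Circle, generate_mesh
from dolfin import Point

# Hypothetical 2D channel with a circular obstacle cut out of the domain
channel = Rectangle(Point(0.0, 0.0), Point(2.2, 0.41))
obstacle = Circle(Point(0.2, 0.2), 0.05)
mesh = generate_mesh(channel - obstacle, 64)  # resolution is a tuning knob
With this geometry the no-slip condition lives on the obstacle boundary instead of inside it, and the pressure is only solved for in the bulk.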
problem:
The thing runs quite slowly. I do not mind if the final simulation takes a few days, but currently this is only 2D fluid flow around an obstacle, and the mesh is not yet as fine as I want it to be in the end. I hoped this would be faster, as the problem will become a lot more complicated once heat comes into play.
my question:
It boils down to one question:
What is a fast algorithm or method to solve the Navier-Stokes equation in Python?
I am perfectly fine with writing a solver from scratch, but that raises the same question. This morning it occurred to me that the projection method is maybe not the worst idea, because it decouples the pressure and velocity updates; I could try to assign these to different CPU cores.
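For reference, the decoupled structure of one time step in the Chorin demo looks schematically like this (the matrices A1-A3, right-hand sides L1-L3 and boundary condition lists bcu/bcp are assembled earlier in the demo; the Krylov solver and preconditioner names are choices, not requirements):
# Tentative velocity step
b1 = assemble(L1)
[bc.apply(A1, b1) for bc in bcu]
solve(A1, u1.vector(), b1, "gmres", "ilu")
# Pressure Poisson step -- usually the expensive one; an AMG-preconditioned
# CG solver here is often the single biggest speedup
b2 = assemble(L2)
[bc.apply(A2, b2) for bc in bcp]
solve(A2, p1.vector(), b2, "cg", "amg")
# Velocity correction step
b3 = assemble(L3)
[bc.apply(A3, b3) for bc in bcu]
solve(A3, u1.vector(), b3, "gmres", "ilu")
Since each sub-step is an independent linear solve, the practical speedups tend to come from solver and preconditioner choice, and from running FEniCS under MPI (mpirun), rather than from hand-rolled parallelism.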

Python would actually be a fine choice if you were writing it all from scratch. But you'll need a LOT of background to do it from scratch.
A coupled solution is a difficult problem.
It's been pointed out to me that you're using a package - FEniCS (thank you, Sven). My original answer needs some amendment. I'll start with a few questions about the physics, then turn to the package.
Incompressible Navier-Stokes applies to a gas like air if the Mach number at that temperature is less than about 0.1. Is that the case for your problem? It's probably true, but I thought I'd ask.
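For a quick sanity check (a back-of-the-envelope sketch; the speed and temperature are made-up values to substitute with yours):
import math

gamma, R = 1.4, 287.05        # dry air, J/(kg*K)
T = 350.0                     # K, hot inflow (assumption)
U_max = 10.0                  # m/s, peak flow speed (assumption)
c = math.sqrt(gamma * R * T)  # speed of sound, ~375 m/s at 350 K
print(f"Mach number: {U_max / c:.3f}")  # ~0.027 here, safely subsonic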
Navier-Stokes does NOT apply to your solid obstacle. If you model the whole thing with one mesh, how are you describing the solid? Is it a high-viscosity fluid? This could make the system of equations ill-conditioned and hard to solve. It would also affect the stable time step size if you're using explicit integration.
Is it a steady flow or a transient? (steady is easier) Is the flow laminar or turbulent? (laminar is easier)
It's conduction heat transfer in your solid obstacle and conduction/convection in your fluid. The fluid will have momentum and thermal boundary layers along the surface of the solid obstacle, and that is where the important heat transfer between the solid and fluid happens. Resolving the transition from the boundary condition to the far-field velocity and temperature will require a fine mesh local to the solid surface. Have you taken that into account in your mesh?
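As an illustration, legacy FEniCS can refine a mesh locally near the obstacle surface. A sketch, assuming a mesh and the hypothetical cylinder at (0.2, 0.2) with radius 0.05 from the question above:
from dolfin import MeshFunction, refine, cells

# Mark cells within a band around the obstacle surface and refine them;
# repeat the pass until the boundary layer is resolved
markers = MeshFunction("bool", mesh, mesh.topology().dim(), False)
for cell in cells(mesh):
    p = cell.midpoint()
    if ((p.x() - 0.2)**2 + (p.y() - 0.2)**2)**0.5 < 0.08:
        markers[cell] = True
mesh = refine(mesh, markers)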
It appears to me that FEniCS is using finite elements, but I don't see anything in the docs that tells me how you're supposed to couple the momentum and energy equations together.
You'll have to tell us a great deal more to get decent advice here. Is there a numerical-methods-in-physics Stack Overflow? You'll need it.

Related

Path detection and progress in a maze with a live stereo 3D image

I'm building a UGV (unmanned ground vehicle) prototype. The goal is to perform the desired actions on targets placed inside a maze. From what I've found on the Internet, maze navigation is usually done with just a distance sensor. I want to gather more ideas than that.
I want to navigate the maze by analyzing the images from a 3D stereo camera. Is there a resource or proven method you can suggest for this? As a secondary problem, the vehicle must start in front of the entrance of the maze, see the entrance and drive in, and then leave the maze after it completes its operations inside.
I would be glad if you could suggest a source for this problem. :)
The problem description is a bit vague, but I'll try to highlight some general ideas.
A useful assumption is that the labyrinth is a 2D environment that you want to explore. You need to know, at every moment, which part of the map has been explored, which part still needs exploring, and which part is accessible at all (in other words, where the walls are).
An easy initial data structure to help with this is a simple matrix, where each cell represents a square in the real world. Each cell can then be labelled according to its state, starting in an unexplored state. Then you start moving and exploring. Based on the distances reported by the camera, you can estimate the state of each cell. The exploration itself can be guided by something such as A* or Q-learning.
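A minimal sketch of such a grid map (the cell size and grid extent are arbitrary placeholder values):
import numpy as np

UNEXPLORED, FREE, WALL = 0, 1, 2
CELL_SIZE = 0.05  # metres per cell (assumption)
grid = np.full((200, 200), UNEXPLORED, dtype=np.uint8)

def update_cell(grid, x, y, hit_wall):
    # Label the cell containing world point (x, y) from one range reading
    i, j = int(y / CELL_SIZE), int(x / CELL_SIZE)
    if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
        grid[i, j] = WALL if hit_wall else FREE

update_cell(grid, 1.0, 2.5, hit_wall=True)  # e.g. one stereo depth return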
Now, a rather subtle issue is that you will have to deal with uncertainty and noise. Sometimes you can ignore it, sometimes you can't. The finer the resolution you need, the bigger the issue becomes. A probabilistic framework is most likely the best solution.
There is an entire field of research around so-called SLAM algorithms. SLAM stands for Simultaneous Localization And Mapping. These algorithms build a map using some sort of input from various types of cameras or sensors, and while building the map they also solve the localization problem within it. They are usually designed for 3D environments and are more demanding than the simpler solution indicated above, but you can find ready-to-use implementations. For exploration, something like Q-learning still has to be used.

Best approach to mapping interior point cloud with LIDAR

I recently started playing with LIDAR and built a 3D unit using an Arduino, 2 servos and a Garmin LIDAR-Lite v3. Stationary mapping works great, but now I would like to move into interior mapping with a handheld unit. With an exterior unit I would of course rely on GPS, but what is the best approach for obtaining a decent interior point cloud?
I could of course rely on additional sensors to "map" the movement of the unit (though I assume the result would not be that great), or (and this solution I personally would have a harder time implementing) plot points based on changes in the existing plot (i.e. the unit identifies that it is moving away from a corner of the room).
Any tips, example, etc. would be appreciated. Cheers!
Indoor mobile mapping is often done with Simultaneous Localization And Mapping (SLAM). SLAM algorithms and their implementations are an area of active research; one project to check out is OpenSLAM. They provide source code that could be used to build your own SLAM solution, and their paper (pdf) includes more background and the results of some real-world tests.
In terms of additional hardware you will need, an Inertial Measurement Unit (IMU) provides information about the attitude and acceleration of your system. These are more-or-less a requirement for all mobile systems, whether in a GNSS-denied environment or not.
Good luck!

Calculating a trajectory between two known points and an IMU

Query:
I want to estimate the trajectory of a person wearing an IMU between point a and point b. I know the exact location of point a and point b in an x,y,z space and the time it takes the person to walk between the points.
Is it possible to reconstruct the trajectory of the person moving from point a to point b using the data from an IMU and the time?
This question is too broad for SO. You could write a PhD thesis answering it, and I know people who have.
However, yes, it is theoretically possible.
However, there are a few things you'll have to deal with:
Your system is going to discretize time at some level. The result is that your estimate of position will be non-smooth. Increasing the sampling rate is one way to address this, but this frequently increases the noise of the measurement.
Possible paths are non-unique. Knowing the time it takes to travel from a to b slightly constrains the information from the IMU, but you are still left with an infinite family of possible routes between the two. Since you mention that you're considering a person walking between two points with z-components, perhaps you can constrain the route using knowledge of topography and roads?
IMUs function by integrating accelerations to velocities and velocities to positions. If the accelerations have measurement errors (and they always do), the error in your estimate of the position will grow over time: the longer you run the system, the more the results will diverge. However, if you're able to use roads/topography as a constraint, you may be able to restart the integration from known points in space; that is, if you can detect 90-degree turns on a street grid, each turn gives you the opportunity to tie the integrator back to a feasible initial condition.
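To get a feel for how fast this drift grows, here is a toy 1D dead-reckoning sketch with a made-up constant accelerometer bias (real error budgets are more complex, but the quadratic growth is representative):
dt, steps, bias = 0.01, 6000, 0.01  # 60 s at 100 Hz, bias in m/s^2
v = x = 0.0
for _ in range(steps):
    a = 0.0 + bias  # true acceleration is zero; only the bias remains
    v += a * dt
    x += v * dt
print(f"position error after 60 s: {x:.1f} m")  # roughly 18 m from bias alone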
Given the above, perhaps the most important question you have to ask yourself is how much error you can tolerate in your path reconstruction. Low-error estimates are going to require better (i.e. more expensive) sensors, higher sampling rates, and higher-order integrators.

Algorithm to smooth out noise in a running system while converging on an initially unknown constant

I'm trying to smooth out noise in a slightly unusual situation. There's probably a common algorithm to solve my problem, but I don't know what it is.
I'm currently building a robotic telescope mount. To track the movement of the sky, the mount takes a photo of the sky once per second and tracks changes in the X, Y, and rotation of the stars it can see.
If I just use the raw measurements to track rotation, the output is choppy and noisy, like this:
Guiding with raw rotation measurements:
If I use a lowpass filter, the mount overshoots and never completely settles down. A lower Beta value helps with this, but then the corrections are too slow and error accumulates.
Guiding with lowpass filter:
(In both graphs, purple is the difference between sky and mount rotation, red is the corrective rotations made by the mount.)
A moving average had the same problems as the lowpass filter.
More information about the problem:
For a given area of the sky, the rotation of the stars will be constant. However, we don't know where we are and the measurement of sky rotation is very noisy due to atmospheric jitter, so the algorithm has to work its way towards this initially unknown constant value while guiding.
The mount can move as far as necessary in one second, and has its own control system. So I don't think this is a PID loop control system problem.
It's OK to guide badly (or not at all) for the first 30 seconds or so.
I wrote a small Python program to simulate the problem; might as well include it here, I suppose. It currently uses a lowpass filter.
#!/usr/bin/env python3
import random

import matplotlib.pyplot as plt

ROTATION_CONSTANT = 0.1  # true (initially unknown) sky rotation rate, deg/s
TIME_WINDOW = 300        # simulated duration in seconds

skyRotation = 0.0
mountRotation = 0.0
errorList = []
rotationList = []
measurementList = []
smoothData = 0.0
LPF_Beta = 0.08          # lowpass filter coefficient

for step in range(TIME_WINDOW):
    skyRotation += ROTATION_CONSTANT
    # Difference of two uniforms: triangular noise in [-1, 1],
    # standing in for atmospheric jitter
    randomNoise = random.random() - random.random()
    rotationMeasurement = skyRotation - mountRotation + randomNoise
    # Lowpass filter
    smoothData = smoothData - (LPF_Beta * (smoothData - rotationMeasurement))
    mountRotation += smoothData
    rotationList.append(smoothData)
    errorList.append(skyRotation - mountRotation)
    measurementList.append(rotationMeasurement)

# Black line marks the true rotation rate
plt.plot([0, TIME_WINDOW], [ROTATION_CONSTANT, ROTATION_CONSTANT],
         color='black', linestyle='-', linewidth=2)
plt.plot(errorList, color="purple")                 # sky/mount difference
plt.plot(rotationList, color="red")                 # mount corrections
plt.plot(measurementList, color="blue", alpha=0.2)  # raw measurements
plt.axis([0, TIME_WINDOW, -1.5, 1.5])
plt.xlabel("Time (seconds)")
plt.ylabel("Rotation (degrees)")
plt.show()
If anyone knows how to make this converge smoothly (or could recommend relevant learning resources), I would be most grateful. I'd be happy to read up on the topic but so far haven't figured out what to look for!
I would first of all try to do this the easy way, by making your control outputs the result of a PID controller and then tuning the PID as described at e.g. https://robotics.stackexchange.com/questions/167/what-are-good-strategies-for-tuning-pid-loops or in your favourite web search results.
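A minimal PID sketch that could be dropped into the simulation loop from the question (the gains are illustrative starting points, not tuned values):
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        # Standard discrete PID: proportional + integral + derivative terms
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

pid = PID(kp=0.1, ki=0.05, kd=0.0, dt=1.0)
# inside the loop: mountRotation += pid.update(rotationMeasurement)
The integral term is what lets the controller converge on the unknown constant rate without a steady-state error.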
Most other approaches require you to have an accurate model of the situation, including the response of the hardware under control to your control inputs, so your next step might be experiments to work this out, e.g. by measuring the response to simple test inputs such as an impulse or a step. Once you have a simulator you can, at the very least, tune parameters for proposed approaches more quickly and safely on the simulator than on the real hardware.
If your simulator is accurate, and if you are seeing more problems in the first 30 seconds than afterwards, I suggest using a Kalman filter to estimate the current error, and then sending in the control that (according to the model that you have constructed) will minimise the mean squared error between the time the control is acted upon and the time of the next observation. Using a Kalman filter will at least take account of the increased observational error when the system starts up.
Warning: the above use of the Kalman filter is myopic, and will fail dramatically in some situations where there is something corresponding to momentum: it will over-correct and end up swinging wildly from one extreme to another. Better use of the Kalman filter results would be to compute a number of control inputs, minimizing the predicted error at the end of this sequence of inputs (e.g. with dynamic programming) and then revisit the problem after the first control input has been executed. In the simple example where I found over-correction you can get stable behavior if you calculate the single control action that minimizes the error if sustained for two time periods, but revisit the problem and recalculate the control action at the end of one time period. YMMV.
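For the narrower sub-problem of estimating the unknown constant rotation rate from the noisy once-per-second measurements, even a scalar Kalman filter illustrates the idea. A rough sketch, where the process noise q and measurement variance r are guesses to be tuned (r of about 1/6 matches the triangular noise in the simulation above):
class ScalarKalman:
    def __init__(self, q=1e-6, r=1.0 / 6.0, x0=0.0, p0=1.0):
        self.q, self.r = q, r
        self.x, self.p = x0, p0  # state estimate and its variance

    def update(self, z):
        self.p += self.q                # predict: the rate may drift slightly
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (z - self.x)      # correct towards the measurement
        self.p *= (1.0 - k)
        return self.x

kf = ScalarKalman()
# inside the loop: mountRotation += kf.update(rotationMeasurement)
The gain k shrinks as the estimate settles, which is exactly the "aggressive at first, gentle later" behaviour the question asks for.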
If that doesn't work, perhaps it is time to take your accurate simulation, linearize it to get differential equations, and apply classical control theory. If it won't linearize reasonably over the full range, you could try breaking that range down, perhaps using different strategies for large and small errors.
Such (little) experience as I have with control loops suggests that it is extremely important to minimize the delay and jitter in the loop between the sensor sensing and the control actuating. If there is any unnecessary source of jitter or delay between input and control, forget the control theory until you get that fixed.

Time-varying band-pass filter in Python

I am trying to solve a problem very similar to the one discussed in this post
I have a broadband signal, which contains a component with time-varying frequency. I need to monitor the phase of this component over time. I am able to track the frequency shifts by (a somewhat brute force method of) peak tracking in the spectrogram. I need to "clean up" the signal around this time varying peak to extract the Hilbert phase (or, alternatively, I need a method of tracking the phase that does not involve the Hilbert transform).
To summarize that previous post: varying the coefficients of an FIR/IIR filter over time causes bad things to happen (it does not just shift the passband; it also completely confuses the filter state in ways that cause surprising transients). However, there is probably some way to adjust filter coefficients over time, probably by jointly modifying the filter coefficients and the filter state in some intelligent way. This is beyond my expertise, but I'd be open to any solutions.
There were two classes of solutions that seem plausible. One is to use a resonator filter (basically a damped harmonic oscillator driven by the signal) with a time-varying frequency. This model is simple enough to avoid surprising filter transients. I will try this, but resonators have very poor attenuation in the stop band (if they can even be said to have a stop band?). This makes me nervous, as I'm not 100% sure how resonator filters will behave.
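For concreteness, a two-pole resonator with a time-varying centre frequency can be written directly as a difference equation (a sketch; the pole radius r sets the bandwidth, closer to 1 meaning a narrower band and longer ringing, and the signal here is a hypothetical noisy chirp):
import numpy as np

def resonator(x, w0, r=0.995):
    # y[n] = 2r*cos(w0[n])*y[n-1] - r^2*y[n-2] + x[n]
    y = np.zeros(len(x))
    for n in range(2, len(x)):
        y[n] = 2 * r * np.cos(w0[n]) * y[n-1] - r * r * y[n-2] + x[n]
    return y

fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
f_track = 50.0 + 10.0 * t                    # tracked frequency over time
phase = 2 * np.pi * np.cumsum(f_track) / fs  # integrate f(t) to get phase
sig = np.sin(phase) + 0.5 * np.random.randn(len(t))
clean = resonator(sig, 2 * np.pi * f_track / fs)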
The other suggestion was to use a filter bank and smoothly interpolate between various band-pass filtered signals according to the frequency. This approach seems appealing, but I suspect it has some hidden caveats. I imagine that linearly mixing two band-pass filtered signals might not always do what you would expect, and might cause weird things? But, this is not my area of expertise, so if mixing over a filter bank is considered a safe solution (one that has been analyzed and published before), I would use it.
Another potential class of solutions occurs to me, which is to just take the phase from the frequency peak in a sliding short-time Fourier transform (which could be windowed, multitaper, etc.). If anyone knows any prior literature on this I'd be very interested. Related would be to take the phase at the frequency power peak from a sliding complex Morlet wavelet transform over the band of interest.
So, I guess, basically I have three classes of solutions in mind:
1. Resonator filters with time-varying frequency.
2. Using a filter bank, possibly with mixing?
3. Pulling the phase from an STFT or CWT (these can be considered a subset of the filter bank approach).
My suspicion is that in (2) and (3) surprising things will happen to the phase from time to time, and that in (1) we may not be able to reject as much noise as we'd like. It's not clear to me that this problem even has a perfect solution (uncertainty principle in time-frequency resolution?).
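For what it's worth, option 3 can be sketched in a few lines with SciPy. The window length trades frequency resolution against time resolution (the uncertainty trade-off above), and phase unwrapping across frames where the peak bin jumps is where the surprises tend to show up; the chirp here is a made-up test signal:
import numpy as np
from scipy.signal import stft

fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
f_inst = 50.0 + 10.0 * t  # hypothetical time-varying component
x = np.sin(2 * np.pi * np.cumsum(f_inst) / fs) + 0.5 * np.random.randn(len(t))

f, frames, Z = stft(x, fs=fs, nperseg=256, noverlap=192)
peak = np.argmax(np.abs(Z), axis=0)               # frequency bin of the peak
phase = np.angle(Z[peak, np.arange(Z.shape[1])])  # phase at the tracked peak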
Anyway, if anyone has solved this before, and... even better, if anyone knows any papers that sound directly applicable here, I would be grateful.
Not sure if this will help, but googling "monitor phase of time varying component" resulted in this: Link
Hope that helps.
