I wanted to plot a graph of average power spectral density (in dBm) against frequency (2.4 GHz to 2.5 GHz).
The basic procedure I used earlier for the power vs. frequency plot was to store the data generated by "usrp_spectrum_sense.py" for some time period and then take the average.
Can I calculate the PSD from the power used in "usrp_spectrum_sense.py"?
Is there any way to calculate the PSD directly from USRP data?
Is there any other approach which can be used to calculate the PSD for the desired frequency range using a USRP?
PS: I recently found out about psd() in matplotlib – can it be used to solve my problem?
I wasn't 100% sure whether or not to mark this question as a duplicate of Retrieve data from USRP N210 device; however, since the poster of that question was very confused and so was his question, let's answer this in a concise way:
What an SDR device like the USRP does is give you digital samples. These are nothing more or less than what the ADC (Analog-to-Digital Converter) makes out of the voltages it sees. Those numbers are then subject to a DSP chain that does frequency shifting, decimation and appropriate filtering. In other words, the discrete complex signal's envelope coming from the USRP should be proportional to the voltages observed by the ADC. Thanks to physics, that means the magnitude squared of these samples should be proportional to the signal power as seen by the ADC.
Thus, the values you get are "dBFS" (dB relative to Full Scale), which is an arbitrary measure relative to the maximum value the signal processing chain might produce.
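As a concrete (made-up) illustration, this is how you'd get an average power figure in dBFS from a block of complex samples; the full-scale value and the signal itself are assumptions for the sketch:

```python
import numpy as np

# Hypothetical example: convert complex baseband samples (as delivered
# by a USRP) into average power in dBFS (dB relative to full scale).
rng = np.random.default_rng(0)
samples = 0.1 * (rng.standard_normal(4096) + 1j * rng.standard_normal(4096))

full_scale = 1.0                       # assumed maximum magnitude of the chain
power = np.mean(np.abs(samples) ** 2)  # mean magnitude-squared = average power (linear)
power_dbfs = 10 * np.log10(power / full_scale**2)

print(f"average power: {power_dbfs:.1f} dBFS")
```

Note that the result is only relative to full scale; without calibration it says nothing about dBm.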
Now, notice two things:
As seen by the ADC is important. Prior to the ADC there's
an unknown antenna with a) an unknown efficiency and b) unknown radiation pattern illuminated from an unknown direction,
connected to a cable that might or might not perfectly match the antenna's impedance, and that might or might not perfectly match the USRP's RF front-end's impedance,
potentially a bank of preselection filters with different attenuations,
a low-noise front-end amplifier (with adjustable gain, depending on the device/daughterboard) whose gain is not perfectly flat over frequency,
a mixer with frequency-dependent gain,
baseband and/or IF gain stages and attenuators, adjustable,
baseband filters, which might be adjustable,
component variances in PCBs, connectors, passives and active components, temperature-dependent gain and intermodulation, as well as
ADC non-linearity, frequency-dependent behaviour.
proportional is important here, since after sampling, there will be
I/Q imbalance correction,
DC/LO leakage cancellation,
anti-aliasing filtering prior to
decimation,
and bit-width and numerical type changing operations.
All in all, the USRPs are not calibrated measurement devices. They are pretty nice, and if you chose the right one for your specific application, you might just need to calibrate once with a known external power source feeding exactly your system, from the antenna all the way to the samples coming out at the end, at exactly the frequency you want to observe. Once you know "ok, when I feed in x dBm of power, I see y dBFS, so there's an offset of (x-y) dB between dBFS and dBm", you have calibrated your device for exactly one configuration consisting of:
hardware models and individual units used, including antennas and cables,
center frequency,
gain,
filter settings,
decimation/sampling rate
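A one-point calibration like the one described above boils down to a single additive constant; all the numbers below are made-up assumptions for illustration:

```python
# Sketch of applying a single-point calibration: feeding x dBm from a
# reference source produced a reading of y dBFS, so later dBFS readings
# can be converted to dBm -- valid only for the exact configuration
# (hardware, frequency, gain, filters, rate) the calibration was done in.
x_dbm_reference = -30.0   # known injected power (assumption)
y_dbfs_measured = -47.5   # what the device reported for it (assumption)
offset_db = x_dbm_reference - y_dbfs_measured   # calibration constant

def dbfs_to_dbm(reading_dbfs):
    """Convert a dBFS reading to absolute dBm via the one-point calibration."""
    return reading_dbfs + offset_db

print(dbfs_to_dbm(-50.0))  # -> -32.5 dBm
```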
Note that doing such calibrations, especially in the 2.4 GHz ISM band, will require an "RF-silent" room – it'll be hard to find an office or lab with no 2.4 GHz devices these days, and the reason these frequencies are free for usage is that microwave ovens interfere there. Then there's the fact that these frequencies tend to diffract and reflect off building structures, PC cases, furniture with metal parts... In other words: get access to an anechoic chamber, a reference transmit antenna and transmit power source, and do the whole antenna system calibration dance that normally results in a directivity diagram, but instead generate a "digital value relative to transmit power" measurement. Whether or not that measurement is really representative of how you'll be using your USRP in a lab environment is very much up for your consideration.
That is a problem with any microwave equipment, not only the USRPs – RF propagation isn't easy to predict in complex environments, and the power characteristics of a receiving system aren't determined by a single component, but by the system as a whole in exactly its intended operational environment. Thus, calibration requires that you either know your antenna, cable, measurement front-end, digitizer and DSP exactly and can do the math including error margins, or that you calibrate the system as a whole and change as little as possible afterwards.
So: no. No matplotlib (or Matlab) function in this world can give meaning to numbers that isn't in those numbers – for absolute power, you'll need to calibrate against a reference.
Another word on linearity: a USRP's analog hardware at full gain is pretty sensitive – so sensitive that operating e.g. a WiFi device in the same room would be like screaming in its ear, blanking out weaker signals and driving the analog signal chain into non-linearity. In that case, not only do the voltages observed by the ADC lose their linear relation to the voltages present at the antenna port, but also, and that is usually worse, amplifiers become mixers, so unwanted intermodulation introduces energy in spectral places where there was none. So make sure you operate your device in a regime where you make the most of your signal's dynamic range without running into nonlinearities.
I would like to calculate daily PV system energy output in Python and PVLIB from Daily global solar exposure data so I can monitor the performance of a solar PV system i.e. I want to be able to compare the actual energy output of the PV system with a value calculated from the solar exposure data to make sure it is performing well.
Here is an example of the data:
http://www.bom.gov.au/jsp/ncc/cdio/wData/wdata?p_nccObsCode=193&p_display_type=dailyDataFile&p_stn_num=056037&p_startYear=
I've been playing around with PVlib but I can't work out which functions to use.
I was hoping I would be able to enter in all the parameters of the PV system into a function along with the solar exposure data and it would give me the predicted energy output.
Well, it really depends on how precise you want to be. It's a full-time job to design a good algorithm to estimate produced power from only the Global Horizontal Irradiance (GHI – the data you gave).
It depends on the geographical position, the temperature, the angle of your panels, their orientation, their type, the time of day/year, even the electrical installation etc...
However, if you're not looking for too much precision, a simple model would be:
Find an estimate of the efficiency of your panels (say 10%)
Get the area coverage of your panels (in m2), projected horizontally.
Then simply compute Power = Efficiency * GHI * Area
You could then refine the model by accounting for the angle between your panels and the sun, for example using pvlib.solarposition.get_solarposition().
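The simple model above fits in a few lines; the efficiency, area and exposure values below are made-up assumptions, and since the BOM data is daily exposure in kWh/m², the result is daily energy rather than instantaneous power:

```python
# Minimal sketch of the simple model: Energy = Efficiency * GHI * Area.
# All numbers are illustrative assumptions, not real system parameters.
efficiency = 0.10        # assumed panel efficiency (10 %)
area_m2 = 20.0           # assumed horizontally-projected panel area in m^2
daily_ghi_kwh_m2 = 5.5   # one day's global exposure (kWh/m^2) from the data file

daily_energy_kwh = efficiency * daily_ghi_kwh_m2 * area_m2
print(f"estimated output: {daily_energy_kwh:.1f} kWh")  # -> 11.0 kWh
```

Comparing this estimate against the meter reading day by day already gives a rough performance check.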
A few references:
Computing global tilted irradiance from GHI
PvLib's cool ModelChain to model your installation precisely
A good overview of most factors impacting your efficiency
PvLib's doc is pretty good, have a look at the PvSystem, ModelChain etc...
PS: not enough reputation to add a comment, though I know this doesn't really deserve an answer :/
I have some audio files recorded from wind turbines, and I'm trying to do anomaly detection. The general idea is that if a blade has a fault (e.g. cracking), the sound of this blade will differ from the other two blades, so we can basically find a way to extract each blade's sound signal and compare the similarity / distance between them; if one of the signals has a significant difference, we can say the turbine is going to fail.
I only have some faulty samples, labels are lacking.
However, there seems to be no one doing this kind of work, and I ran into lots of trouble while attempting it.
I've tried using an STFT to convert the signal to a power spectrum, and some spikes show up. How can I identify each blade from the raw data? (Some related work uses AutoEncoders to detect anomalies from audio, but in this task we want to use some similarity-based method.)
Does anyone have a good idea? Any related work / papers to recommend?
Well...
If your shaft is rotating at, say 1200 RPM or 20 Hz, then all the significant sound produced by that rotation should be at harmonics of 20Hz.
If the turbine has 3 perfect blades, however, then it will be in exactly the same configuration 3 times for every rotation, so all of the sound produced by the rotation should be confined to multiples of 60 Hz.
Energy at the other harmonics of 20 Hz -- 20, 40, 80, 100, etc. -- that is above the noise floor would generally result from differences between the blades.
This of course ignores noise from other sources that are also synchronized to the shaft, which can mess up the analysis.
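The harmonic argument above can be checked on synthetic data; the shaft rate, amplitudes and the fault component below are all made-up assumptions:

```python
import numpy as np

# Sketch: a 20 Hz shaft with 3 blades. With identical blades, energy sits
# only at multiples of the 60 Hz blade-pass frequency; a blade fault breaks
# the 3-fold symmetry and puts energy at the other 20 Hz harmonics.
fs = 8000
t = np.arange(fs) / fs                       # 1 second of signal

# Three identical blades -> energy only at 60, 120, 180 Hz
healthy = sum(np.sin(2 * np.pi * 60.0 * k * t) for k in range(1, 4))
# A faulty blade adds a 20 Hz component (assumed fault signature)
faulty = healthy + 0.3 * np.sin(2 * np.pi * 20.0 * t)

def energy_at(signal, freq_hz):
    spectrum = np.abs(np.fft.rfft(signal))
    return spectrum[int(round(freq_hz))]     # 1 Hz bins for a 1 s window

print(energy_at(healthy, 20.0), energy_at(faulty, 20.0))
```

The healthy signal shows essentially nothing at 20 Hz, while the faulty one shows a clear line there, which is exactly the diagnostic described above.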
Assuming that the audio you got is from a location where one can hear individual blades as they pass by, there are two subproblems:
1) Estimate each blade's position, and extract the audio for each blade.
2) Compare the signal from each blade to each other, and determine if one of them is different enough to be considered an anomaly.
Estimating the blade position can be done with a sensor that detects the rotation directly, for example based on the magnetic field of the generator. Ideally you would have this kind of known-good sensor data, at least while developing your system. It may be possible to estimate it using only audio, via some sort of periodicity detection; autocorrelation is a commonly used technique for that.
To detect differences between blades, you can try to use a standard distance function on a standard feature description, like Euclidean on MFCC. You will still need to have some samples for both known faulty examples and known good/acceptable examples, to evaluate your solution.
There is however a risk that this will not be good enough. Then try to compute some better features as basis for the distance computation. Perhaps using an AutoEncoder. You can also try some sort of Similarity Learning.
If you have a good amount of both good and faulty data, you may be able to use a triplet loss setup to learn the similarity metric. Feed in data for two good blades as objects that should be similar, and the known-bad as something that should be dissimilar.
I’ve made quite a bad mistake in ca. 14 EEG recordings – I recorded at 10 uV resolution @ 5000 Hz instead of 0.1 uV @ 500 Hz. I’m conducting an ERP experiment, and the signals of interest are on the order of ~5 uV. I’m wondering if there is any way to up-sample the voltage, given I have way more time-series data points than I need? Some sort of interpolation?
I’ve seen a number of posts on up-sampling from, say, 500 Hz to 1000 Hz, but I'm not sure if the principle is the same?
This represents about 42 hours of recording time and I’m anxious to know if I can recover any usable data from these recordings, or if I have to try to get participants back in (there is a treatment enrollment deadline which means I can’t simply acquire more data).
Thanks very much,
pk
One way to increase the resolution of your signal is signal averaging, i.e. applying a sliding-window filter. In your case, this would be a filter of length 10, since your signal is oversampled by a factor of 10. You can't use a filter longer than 10 because you might filter out information you are interested in. Now, here is the catch: whatever improvement this averaging gives you (depending on whether the noise is zero-mean Gaussian or not) won't be enough to get you to your desired resolution of 0.1 uV. In my opinion you will need to redo the experiment. Here is a post you might find interesting: https://dsp.stackexchange.com/questions/48205/why-does-signal-averaging-reduces-noise-levels-by-more-than-sqrtn?rq=1
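The averaging-and-decimation step described above looks like this in practice; the fake noisy signal is an assumption for illustration:

```python
import numpy as np

# Sketch: a length-10 sliding-mean filter on the 5000 Hz recording,
# then keep every 10th sample to get back to the intended 500 Hz rate.
rng = np.random.default_rng(0)
fs_recorded = 5000
signal_5khz = rng.normal(0.0, 10.0, fs_recorded)   # fake 1 s of noisy data (assumption)

window = np.ones(10) / 10.0
averaged = np.convolve(signal_5khz, window, mode="valid")
signal_500hz = averaged[::10]                      # decimate by the oversampling factor

print(signal_500hz.size)                           # 500 samples, one per 2 ms
```

For zero-mean Gaussian noise this reduces the noise standard deviation by roughly sqrt(10), which, as noted above, still falls far short of bridging a 100x resolution gap.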
You will have to redo the experiments.
Though you do have sufficient temporal resolution, this will in no way help to increase your voltage resolution, since the recording equipment performs analog-to-digital quantization at the specified resolution, meaning you will have an uncertainty of half your resolution (10 uV / 2 = 5 uV). This is in the range of the signals that you want to measure, so the signal you want to see will maybe jump across a couple of different levels (perhaps 2, based on the info you provide).
I have streamed power data coming in real time from my electric meter, and when I look at the load I can tell which kind of appliance is on.
Currently I'm using a sliding window of ten points and calculating the standard deviation to detect appliances turning on or off. The aim is to know how much each appliance is consuming, via an integral calculation. I need help performing signal disaggregation in real time so I can calculate the integral for each appliance and avoid cross-calculated consumption values like in this img.
Thanks in advance for any help you can provide!
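The sliding-window standard-deviation detection described above can be sketched as follows; the load levels, noise and threshold are all made-up assumptions:

```python
import numpy as np

# Flag points where the standard deviation over the last 10 samples jumps,
# which typically marks an appliance switching on or off.
power = np.concatenate([np.full(50, 100.0),    # assumed base load (W)
                        np.full(50, 2100.0)])  # an appliance turns on
power += np.random.default_rng(0).normal(0, 5, power.size)

window = 10
stds = np.array([power[i - window:i].std() for i in range(window, power.size)])
events = np.flatnonzero(stds > 100) + window   # threshold is an assumption

print(events[:3])   # indices around the switch-on near sample 50
```

Each detected event brackets a step in the load, and the step height between events is what would feed the per-appliance integral.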
If it's just about distinguishing between on and off states, naive Bayes classification might do the job (https://machinelearningmastery.com/naive-bayes-classifier-scratch-python/) – there are several interesting links at the end.
If you want to disaggregate various consumers, an artificial neural network might be a possible solution, using TensorFlow: https://www.tensorflow.org/tutorials/
An issue here is to generate the labeled training data from scratch.
A fast Fourier analysis is used, e.g., for detecting hi-fi equipment, as each device has a specific spectrum.
Query:
I want to estimate the trajectory of a person wearing an IMU between point a and point b. I know the exact location of point a and point b in an x,y,z space and the time it takes the person to walk between the points.
Is it possible to reconstruct the trajectory of the person moving from point a to point b using the data from an IMU and the time?
This question is too broad for SO. You could write a PhD thesis answering it, and I know people who have.
However, yes, it is theoretically possible.
However, there are a few things you'll have to deal with:
Your system is going to discretize time on some level. The result is that your estimate of position will be non-smooth. Increasing sampling rates is one way to address this, but this frequently increases the noise of the measurement.
Possible paths are non-unique. Knowing the time it takes to travel from a-b constrains slightly the information from the IMUs, but you are still left with an infinite family of possible routes between the two. Since you mention that you're considering a person walking between two points with z-components, perhaps you can constrain the route using knowledge of topography and roads?
IMUs function by integrating accelerations to velocities and velocities to positions. If the accelerations have measurement errors, and they always do, then the error in your estimate of the position will grow over time. The longer you run the system for, the more the results will diverge. However, if you're able to use roads/topography as a constraint, you may be able to restart the integration from known points in space; that is, if you can detect 90 degree turns on a street grid, each turn gives you the opportunity to tie the integrator back to a feasible initial condition.
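The drift point above is easy to demonstrate numerically: double-integrating even a small constant accelerometer bias makes the position error grow quadratically with time (all numbers below are illustrative assumptions):

```python
import numpy as np

# Double-integrate a constant accelerometer bias and watch the position
# error grow over a one-minute walk.
dt = 0.01                  # assume 100 Hz sampling
t = np.arange(0, 60, dt)   # one minute of data
bias = 0.05                # assumed constant accelerometer bias in m/s^2

accel_error = np.full(t.size, bias)
vel_error = np.cumsum(accel_error) * dt        # first integration -> velocity error
pos_error = np.cumsum(vel_error) * dt          # second integration -> position error

print(f"position error after 60 s: {pos_error[-1]:.0f} m")
```

Even this tiny bias produces tens of meters of error after a minute, which is why tying the integrator back to known points (turns on a street grid, topography) matters so much.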
Given the above, perhaps the most important question you have to ask yourself is how much error you can tolerate in your path reconstruction. Low-error estimates are going to require better (i.e. more expensive) sensors, higher sampling rates, and higher-order integrators.