Calculating PV system output from daily global solar exposure - Python

I would like to calculate daily PV system energy output in Python with pvlib from daily global solar exposure data, so I can monitor the performance of a solar PV system. That is, I want to compare the actual energy output of the PV system with a value calculated from the solar exposure data, to make sure it is performing well.
Here is an example of the data:
http://www.bom.gov.au/jsp/ncc/cdio/wData/wdata?p_nccObsCode=193&p_display_type=dailyDataFile&p_stn_num=056037&p_startYear=
I've been playing around with PVlib but I can't work out which functions to use.
I was hoping I would be able to enter in all the parameters of the PV system into a function along with the solar exposure data and it would give me the predicted energy output.

Well, it really depends on how precise you want to be. It's a full-time job to design a good algorithm that estimates produced power from only the Global Horizontal Irradiance (GHI, the data you linked).
It depends on the geographical position, the temperature, the angle of your panels, their orientation, their type, the time of day/year, even the electrical installation, etc.
However, if you're not looking for too much precision, a simple model would be:
- Find an estimate of the efficiency of your panels (say 10%).
- Get the area your panels cover (in m²), projected horizontally.
- Then simply compute Power = Efficiency * GHI * Area.
You could then refine the model by accounting for the angle between your panels and the sun, for example using pvlib.solarposition.get_solarposition().
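As a rough sketch of the simple model above (the efficiency and area values are illustrative, not from the post; the BoM exposure data is in MJ/m² per day, so it converts to kWh by dividing by 3.6):

```python
# Rough daily-energy estimate from daily global solar exposure.
# Assumptions (illustrative): exposure in MJ/m^2/day, 10% panel
# efficiency, 20 m^2 of horizontally projected panel area.

MJ_PER_KWH = 3.6  # 1 kWh = 3.6 MJ

def daily_energy_kwh(exposure_mj_per_m2, efficiency=0.10, area_m2=20.0):
    """Estimated daily PV energy output in kWh."""
    return exposure_mj_per_m2 / MJ_PER_KWH * efficiency * area_m2

print(daily_energy_kwh(20.0))  # a 20 MJ/m^2 day -> ~11.1 kWh
```

Comparing this estimate against the inverter's reported daily kWh would already flag gross underperformance, before refining with tilt and temperature.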
A few references:
- Computing global tilted irradiance from GHI
- pvlib's cool ModelChain, to model your installation precisely
- A good overview of most factors impacting your efficiency
pvlib's documentation is pretty good; have a look at PVSystem, ModelChain, etc.
PS : not enough reputation to add a comment, though I know it doesn't really deserve an answer :/

Related

Model for predicting temperature data of a fridge

I set up a sensor which measures temperature every 3 seconds. I collected data for 3 days and have 60,000 rows in my CSV export. Now I would like to forecast the next few days. Looking at the data, you can already see a "seasonality" reflecting the fridge's heating and cooling cycle, so I guess it shouldn't be too difficult to predict. I am not really sure if my data is too granular and whether I should do some kind of downsampling. I thought about using a seasonal ARIMA model, but I am having difficulties picking parameters. As the seasonality in the data is pretty obvious, is there maybe a model that fits better? Please bear with me, I'm pretty new to machine learning.
When the goal is to forecast rising temperatures, you can forecast the lower and upper peaks, i.e., their height and spacing. Assuming (as a simplified model) that the temperature change in between is linear, we can model each complete peak, starting from a lower peak of the temperature curve, up to the next upper peak and down to the next lower peak. A complete peak can then be seen as a triangle, which we can easily integrate (calculate its area plus the area of the rectangle below it). The estimate can now be obtained by integrating a number of complete peaks we have already measured. Repeating this procedure, we can run a linear regression on the resulting average temperatures and alert when the slope is above a defined threshold.
As this only catches a certain kind of fault, one can do the same for the average distances between the upper peaks, and likewise for the lower peaks. That is, take the times between them over a certain period, fit a curve (a linear regression may well be sufficient) and alert when the slope of the curve indicates the distances are growing too long.
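A minimal sketch of that drift check, assuming SciPy is available; the sampling period and slope threshold are made-up values (the 3-second raw data could first be downsampled to something like this):

```python
import numpy as np
from scipy.signal import find_peaks

def drift_alert(temps, sample_period_s=30.0, slope_threshold=1e-4):
    """Fit a line through the upper peaks; alert if temperature drifts upward."""
    temps = np.asarray(temps, dtype=float)
    peaks, _ = find_peaks(temps)                 # indices of the upper peaks
    if len(peaks) < 2:
        return False
    t = peaks * sample_period_s                  # peak times in seconds
    slope, _ = np.polyfit(t, temps[peaks], 1)    # degrees per second
    return bool(slope > slope_threshold)
```

The same function applied to the lower peaks (by negating the series) would cover the second check described above.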
Exact prediction is mission impossible. If the fridge runs without interference, the graph always looks the same. A change can be caused, for example, by opening the door, a breakdown, or a major change in external conditions, but you cannot predict such events. Instead, you can try to warn about the possibility of problems in the near future, for example based on a constant increase in the average temperature; such a situation may indicate a leak in the cooling system.
By the way, do you really need to log the temperature every 3 seconds? This is usually unjustified, because it is physically impossible for the temperature to change by a measurable amount in such a short interval. Our team usually sets the logging interval to 30 or 60 seconds in such cases, sometimes even longer, depending on the size of the chamber, the way the air is circulated, the ratio of volume to power of the refrigeration unit, etc.

How can I find out if a satellite has performed a maneuver?

I have recently become acquainted with orbital mechanics and am trying to do some analysis on the subject. Since I don't have subject-matter expertise, I was at a crossroads trying to decide how one would determine whether a satellite has performed a maneuver/rendezvous operation, given the historical TLE data of that satellite, from which we extract the orbital elements. To drill down further, I am approaching the problem like this:
1) I take my satellite of interest and collect the historical TLE data for it.
2) Once I have the data, I extract and calculate all the orbital parameters from the TLE.
3) From the list of orbital parameters, I choose a subset of those parameters and calculate long-term standardized anomalies for each of them.
4) Once I have the anomalies, I filter out those dates where any one parameter has an anomaly greater than 1.5 or less than -1.5.
But the thing is, I am not too sure of my subset. As of now, I have inclination, RAAN, argument of perigee and longitude.
Is there any other factor that I should add or remove from this subset in order to nail this analysis the right way? Or is there altogether any other approach that I can use?
What I'm interested in, is to find out the days when a satellite has performed maneuvers.
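The filtering step described above (a sketch, not the poster's actual code; the column names are hypothetical) could look like this:

```python
import pandas as pd

def flag_anomalous_epochs(elements: pd.DataFrame, threshold: float = 1.5):
    """Return the index entries (epochs) where any orbital element's
    long-term standardized anomaly (z-score) exceeds the threshold."""
    z = (elements - elements.mean()) / elements.std()
    return elements.index[(z.abs() > threshold).any(axis=1)]
```

Dates flagged this way are candidates, not confirmed maneuvers; natural perturbations and TLE fitting noise also move the elements.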
You should add the semi-major and semi-minor axis sizes (i.e. the min and max altitude). These change after any burn along the trajectory or perpendicular to it, and decrease due to drag for orbits that are too low.
Analyzing that can hint at what kind of maneuver was performed. Also, such changes are usually made on the opposite side of the orbit, so once you find a bump in the periapsis or apoapsis, the burn most likely occurred half an orbit before reaching it.
Another quantity I would check is speed. Compute the local speed as a finite difference of consecutive data points (distance/time) and compare that with what Kepler's laws predict. If they do not match, some kind of burn, collision or ejection occurred, and from the difference you can also infer what was done.
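For that speed comparison, the relevant relation is the vis-viva equation, v² = μ(2/r − 1/a). A small sketch (μ is Earth's standard gravitational parameter; the 7000 km circular orbit is an illustrative example):

```python
import numpy as np

MU_EARTH = 398600.4418  # km^3/s^2, Earth's gravitational parameter

def visviva_speed(r_km, a_km):
    """Predicted orbital speed at radius r for an orbit with semi-major axis a."""
    return np.sqrt(MU_EARTH * (2.0 / r_km - 1.0 / a_km))

# For a circular orbit, r == a:
print(visviva_speed(7000.0, 7000.0))  # ~7.55 km/s
```

A measured speed that departs from this prediction by more than the TLE noise level suggests a burn near that epoch.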
For more info see:
Solving Kepler's equation
Is it possible to make realistic n-body solar system simulation in matter of size and mass?

How to Identify Each Component from an Audio Signal?

I have some audio files recorded from wind turbines, and I'm trying to do anomaly detection. The general idea is that if a blade has a fault (e.g. cracking), its sound will differ from that of the other two blades, so we can try to extract each blade's sound signal and compare the similarity/distance between them; if one of the signals differs significantly, we can say the turbine is going to fail.
I only have a few faulty samples, and labels are lacking.
However, there seems to be no one doing this kind of work, and I ran into a lot of trouble while attempting it.
I've tried using an STFT to convert the signal to a power spectrum, and some spikes show up. How can I identify each blade from the raw data? (Some related work uses autoencoders to detect anomalies in audio, but in this task we want to use a similarity-based method.)
Does anyone have a good idea? Any related work or papers to recommend?
Well...
If your shaft is rotating at, say 1200 RPM or 20 Hz, then all the significant sound produced by that rotation should be at harmonics of 20Hz.
If the turbine has 3 perfect blades, however, then it will be in exactly the same configuration 3 times for every rotation, so all of the sound produced by the rotation should be confined to multiples of 60 Hz.
Energy at the other harmonics of 20 Hz (20, 40, 80, 100, etc.) that is above the noise floor would generally result from differences between the blades.
This of course ignores noise from other sources that are also synchronized to the shaft, which can mess up the analysis.
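A toy numerical illustration of that argument (the tone frequencies and amplitudes are made up): with identical blades, energy sits only at multiples of 3x the shaft rate, while a mismatched blade puts energy back at the other shaft harmonics.

```python
import numpy as np

fs, shaft_hz = 8000, 20.0
t = np.arange(fs) / fs                                        # one second of samples
blade_pass = np.sin(2 * np.pi * 3 * shaft_hz * t)             # 60 Hz: identical blades
faulty = blade_pass + 0.3 * np.sin(2 * np.pi * shaft_hz * t)  # extra 20 Hz line

amps = 2 * np.abs(np.fft.rfft(faulty)) / fs                   # per-bin amplitudes
print(round(amps[20], 2), round(amps[60], 2))                 # 0.3 1.0
```

In real recordings the lines would be smeared by shaft-speed variation, so you would track the shaft rate first and inspect the harmonics relative to it.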
Assuming that the audio was recorded at a location where one can hear individual blades as they pass by, there are two subproblems:
1) Estimate each blade's position, and extract the audio for each blade.
2) Compare the signal from each blade to the others, and determine if one of them is different enough to be considered an anomaly.
Estimating the blade position can be done with a sensor that detects the rotation directly, for example based on the magnetic field of the generator. Ideally you would have this kind of known-good sensor data, at least while developing your system. It may also be possible to estimate it from audio alone, using some sort of periodicity detection; autocorrelation is a commonly used technique for that.
To detect differences between blades, you can try to use a standard distance function on a standard feature description, like Euclidean on MFCC. You will still need to have some samples for both known faulty examples and known good/acceptable examples, to evaluate your solution.
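A minimal sketch of that comparison step, using Euclidean distance on per-blade feature vectors (the vectors below are stand-ins; real ones might be mean MFCCs over each blade's audio segment):

```python
import numpy as np

def most_anomalous_blade(features):
    """features: (n_blades, n_features) array. Return the index of the blade
    whose summed distance to the other blades is largest."""
    diffs = features[:, None, :] - features[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)       # pairwise Euclidean distances
    return int(dists.sum(axis=1).argmax())

feats = np.array([[1.0, 2.0], [1.1, 2.1], [4.0, 0.0]])
print(most_anomalous_blade(feats))  # 2
```

Whether the largest distance counts as a fault still needs a threshold tuned on the known-good and known-faulty examples.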
There is however a risk that this will not be good enough. Then try to compute some better features as basis for the distance computation. Perhaps using an AutoEncoder. You can also try some sort of Similarity Learning.
If you have a good amount of both good and faulty data, you may be able to use a triplet loss setup to learn the similarity metric. Feed in data for two good blades as objects that should be similar, and the known-bad as something that should be dissimilar.

signal disaggregation in real time

I have streamed power data arriving in real time from my electric meter, and when I look at the load with my own eyes I can tell which kind of appliance is on.
Currently I'm using a sliding window of ten points and calculating the standard deviation to detect appliances turning on or off. The aim is to know how much each appliance is consuming, via an integral calculation. I need help performing the signal disaggregation in real time so I can calculate the integral for each appliance and avoid cross-calculated consumption values, which can happen like in this img.
Thanks in advance for any help you can provide!
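The rolling-standard-deviation detector described in the question could be sketched like this (the window and threshold values are illustrative):

```python
import numpy as np

def switch_events(power, window=10, threshold=5.0):
    """Indices of windows where the rolling std-dev jumps, i.e. likely
    on/off switching events in the power signal."""
    p = np.asarray(power, dtype=float)
    stds = np.array([p[i:i + window].std() for i in range(len(p) - window + 1)])
    return np.flatnonzero(stds > threshold)

signal = [100.0] * 20 + [300.0] * 20   # an appliance turning on at sample 20
print(switch_events(signal))           # every window straddling the step fires
```

Grouping the consecutive firing windows into a single event, and pairing on-events with off-events of matching step size, is the part that prevents the cross-calculated consumption the question mentions.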
If it's just about distinguishing between the on and off states, naive Bayes classification might do the job (https://machinelearningmastery.com/naive-bayes-classifier-scratch-python/); there are several interesting links at the end.
If you want to disaggregate various consumers, an artificial neural network might be a possible solution, e.g. using TensorFlow: https://www.tensorflow.org/tutorials/
An issue here is generating the labeled training data from scratch.
A fast Fourier analysis can also be used, e.g. for detecting hi-fi equipment, as each device has a specific spectrum.

How to Calculate power spectral density using USRP data?

I want to plot a graph of average power spectral density (in dBm) against frequency (2.4 GHz to 2.5 GHz).
The basic procedure I used earlier for a power vs. frequency plot was to store the data generated by "usrp_spectrum_sense.py" for some time period and then take the average.
Can I calculate the PSD from the power values used in "usrp_spectrum_sense.py"?
Is there any way to calculate the PSD directly from USRP data?
Is there any other approach that can be used to calculate the PSD for the desired frequency range using a USRP?
PS: I recently found out about psd() in matplotlib; can it be used to solve my problem?
I wasn't 100% sure whether or not to mark this question as a duplicate of Retrieve data from USRP N210 device; however, since the poster of that question was very confused, and so was his question, let's answer this one concisely:
What an SDR device like the USRP does is give you digital samples. These are nothing more or less than what the ADC (Analog-to-Digital Converter) makes out of the voltages it sees. Those numbers are then subject to a DSP chain that does frequency shifting, decimation and appropriate filtering. In other words, the discrete complex signal's envelope coming from the USRP should be proportional to the voltage observed by the ADC. Thanks to physics, that means the magnitude squared of these samples should be proportional to the signal power as seen by the ADC.
Thus, the values you get are "dBFS" (dB relative to Full Scale), which is an arbitrary measure relative to the maximum value the signal processing chain might produce.
Now, notice two things:
"As seen by the ADC" is important. Prior to the ADC there's
- an unknown antenna with a) an unknown efficiency and b) an unknown radiation pattern, illuminated from an unknown direction,
- connected to a cable that might or might not perfectly match the antenna's impedance, and that might or might not perfectly match the USRP's RF front-end's impedance,
- potentially a bank of preselection filters with different attenuations,
- a low-noise front-end amplifier, with adjustable gain depending on the device/daughterboard, and non-perfectly flat gain over frequency,
- a mixer with frequency-dependent gain,
- baseband and/or IF gain stages and attenuators, adjustable,
- baseband filters, possibly adjustable,
- component variances in PCBs, connectors, passives and active components, temperature-dependent gain and intermodulation, as well as
- ADC non-linearity and frequency-dependent behaviour.
"Proportional" is important here, since after sampling there will be
- I/Q imbalance correction,
- DC/LO leakage cancellation,
- anti-aliasing filtering prior to decimation,
- and bit-width and numerical-type changing operations.
All in all, the USRPs are not calibrated measurement devices. They are pretty nice, and if you chose the right one for your specific application, you might just need to calibrate once, with a known external power source feeding exactly your system, from the antenna through to the sample stream coming out at the end, at exactly the frequency you want to observe. After finding "OK, when I feed in x dBm of power, I see y dBFS, so there's a factor of (x - y) dB between dBFS and dBm", you have calibrated your device for exactly one configuration, consisting of
- the hardware models and individual units used, including antennas and cables,
- center frequency,
- gain,
- filter settings,
- decimation/sampling rate.
Note that doing such calibrations, especially in the 2.4 GHz ISM band, will require an "RF-silent" room – it'll be hard to find an office or lab with no 2.4 GHz devices these days, and one reason these frequencies are free for use is that microwave ovens interfere; and then there's the fact that these frequencies tend to diffract off and reflect on building structures, PC cases, furniture with metal parts... In other words: get access to an anechoic chamber, a reference transmit antenna and transmit power source, and do the whole antenna-system calibration dance that normally results in a directivity diagram, but instead generate a "digital value relative to transmit power" measurement. Whether or not that measurement is really representative of how you'll be using your USRP in a lab environment is very much up for your consideration.
That is a problem with any microwave equipment, not only the USRPs – RF propagation isn't easy to predict in complex environments, and the power characteristics of a receiving system aren't determined by a single component, but by the system as a whole in exactly its intended operational environment. Thus, calibration requires that you either know your antenna, cable, measurement front-end, digitizer and DSP exactly and can do the math including error margins, or that you calibrate the system as a whole, and change as little as possible afterwards.
So: no. No matplotlib function in this world can give meaning to numbers that isn't in those numbers; for absolute power, you'll need to calibrate against a reference.
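To make that concrete: once you have measured the dBFS-to-dBm offset for one fixed configuration, applying it is just a constant shift on an averaged periodogram. A sketch (the offset value and parameters are illustrative, not from any datasheet):

```python
import numpy as np

def calibrated_psd_dbm(samples, fs, cal_offset_db=-47.0, nfft=1024):
    """Averaged periodogram of complex baseband samples, in dB, shifted by a
    previously measured dBFS-to-dBm calibration offset for this exact setup."""
    segs = len(samples) // nfft
    psd = np.zeros(nfft)
    for k in range(segs):
        seg = samples[k * nfft:(k + 1) * nfft]
        psd += np.abs(np.fft.fft(seg)) ** 2 / (nfft * fs)
    psd /= segs
    # small floor avoids log10(0); fftshift puts DC in the middle of the plot
    return 10 * np.log10(np.fft.fftshift(psd) + 1e-30) + cal_offset_db
```

The offset is only valid for the one configuration (gain, frequency, antenna, cabling) it was measured with, as discussed above.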
Another word on linearity: a USRP's analog hardware at full gain is pretty sensitive – so sensitive that operating e.g. a WiFi device in the same room would be like screaming in its ear, blanking out weaker signals and driving the analog signal chain into non-linearity. In that case, not only do the voltages observed by the ADC lose their linear relation to the voltages presented at the antenna port, but also, and that is usually worse, amplifiers become mixers, so unwanted intermodulation introduces energy in spectral places where there was none. So make sure you operate your device in a regime where you make the most of your signal's dynamic range without running into non-linearities.
