I am trying to plot 96 spectra (a 12x8 grid) using the Origin module. Rather than copy-pasting the same lines and modifying a few variables each time, I would like something more code-effective, but I have no idea how to achieve this. This is the code that I need to repeat:
gl_I = gr1[I]
pI = gl_I.add_plot(wks, coly='I', colx='A', type=202)
where I needs to be looped from 0 to 95.
I couldn't find anything similar to help me solve the problem. This is what I have tried:
for x in range(0, 96):
    globals()['gl_%s' % x] = gr1[x]
    'p%s' % x = 'gl_%s' % x.add_plot(wks, coly=f'{x+1}', colx='A', type=202)
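To make the goal concrete, this is an untested sketch of the kind of loop I have in mind (assuming gr1 exposes the 96 layers by index and wks is the worksheet, as in the snippet above, and reusing the coly=f'{x+1}' guess from my attempt), storing the plots in a list instead of separate pI variables:
plots = []
for i in range(96):
    layer = gr1[i]  # layer i of the 12x8 grid
    # same add_plot call as above, with the column index following the layer index
    plots.append(layer.add_plot(wks, coly=f'{i+1}', colx='A', type=202))
But I don't know whether this is the right way to drive add_plot from the Origin module.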
I'm trying to plot a demand profile of heating energy for a specific building with Python and matplotlib.
But instead of a single line per building, the plot comes out as a mess of many partially overlapping lines.
Has anyone ever had plotting results like this, or does anyone have an idea what's going on here?
The corresponding code fragment is:
for b in list_of_buildings:
    print(b.label, b.Q_Heiz_a, b.Q_Heiz_TT, len(b.lp.heating_list))
    heating_datalist = []
    for d in range(timesteps):
        b.lp.heating_list[d] = b.lp.heating_list[d]*b.Q_Heiz_TT
        heating_datalist.append((d, b.lp.heating_list[d]))
        xs_heat = [x[0] for x in heating_datalist]
        ys_heat = [x[1] for x in heating_datalist]
        pyplot.plot(xs_heat, ys_heat, lw=0.5)

pyplot.title(TT)

# get legend entries from list_of_buildings
list_of_entries = []
for b in list_of_buildings:
    list_of_entries.append(b.label)
pyplot.legend(list_of_entries)

pyplot.xlabel("[min]")
pyplot.ylabel("[kWh]")
Additional info:
timesteps is a list like [0.00, 0.01, 0.02, ... , 23.59] - the minutes of the day (24*60 values)
b.lp.heating_list is a list containing some float values
b.Q_Heiz_TT is a constant
Based on your information, I have created a minimal example that should reproduce your problem (if not, you may not have explained the problem/parameters in sufficient detail). I'd urge you to create such an example yourself next time, as your question is likely to get ignored without one. The example looks like this:
import numpy as np
import matplotlib.pyplot as plt

N = 24*60
Q_Heiz_TT = 0.5
lp_heating_list = np.random.rand(N)
lp_heating_list = lp_heating_list*Q_Heiz_TT

heating_datalist = []
for d in range(N):
    heating_datalist.append((d, lp_heating_list[d]))
    xs_heat = [x[0] for x in heating_datalist]
    ys_heat = [x[1] for x in heating_datalist]
    plt.plot(xs_heat, ys_heat)

plt.show()
What is going on here? For each d in range(N) (with N = 24*60, i.e. each minute of the day), you plot all values up to and including lp_heating_list[d] versus d. This is because heating_datalist is appended with the current value of d and the corresponding value of lp_heating_list on every iteration, and the plot call sits inside that same loop. What you get is 24*60 = 1440 lines that partially overlap one another. Depending on how your backend handles this, it may be very slow and start to look messy.
A much better approach is to simply use
plt.plot(range(N), lp_heating_list)
plt.show()
which plots only one line instead of 1440 of them.
I suspect there is an indentation problem in your code.
Try this:
heating_datalist=[]
for d in range(timesteps):
    b.lp.heating_list[d] = b.lp.heating_list[d]*b.Q_Heiz_TT
    heating_datalist.append((d, b.lp.heating_list[d]))

xs_heat = [x[0] for x in heating_datalist]   # <<<<<<<<
ys_heat = [x[1] for x in heating_datalist]   # <<<<<<<<
pyplot.plot(xs_heat, ys_heat, lw=0.5)        # <<<<<<<<
That way you'll plot only one line per building, which is probably what you want.
Besides, you can use zip to generate the x values and y values like this:
xs_heat, ys_heat = zip(*heating_datalist)
This works because zip is its own inverse!
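For example, on a short hand-made list of pairs:
pairs = [(0, 1.2), (1, 3.4), (2, 5.6)]
xs, ys = zip(*pairs)
# xs == (0, 1, 2) and ys == (1.2, 3.4, 5.6): the pairs are split back into columns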
I'm taking my first course in programming; the course is meant for physics applications and the like. We have an exam coming up, and my professor published a practice exam with the following question.
The Maxwell distribution in speed v for an ideal gas consisting of particles of mass m at Kelvin temperature T is given by
f(v) = 4π (m / (2πkT))^(3/2) v^2 exp(-m v^2 / (2kT))
where k is Boltzmann's constant, k = 1.3806503 × 10^-23 J/K.
Write a Python script called maxwell.py which prints v and f(v) to standard output in two-column format. For the particle mass, choose the mass of a proton, m = 1.67262158 × 10^-27 kg. For the gas temperature, choose the temperature at the surface of the Sun, T = 5778 K.
Your output should consist of 300 data points, ranging from v = 100 m/s to v = 30,000 m/s in steps of size dv = 100 m/s.
So, here is my attempt at the code.
import math as m
import sys

def f(v):
    n = 1.67262158e-27  # kg
    k = 1.3806503e-23   # J/K
    T = 5778            # Kelvin
    return (4*m.pi)*((n/(2*m.pi*k*T))**(3/2))*(v**2)*m.exp((-n*v**2)/(2*k*T))

v = 100  # m/s
for i in range(300):
    a = float(f(v))
    print(v, a)
    v = v + 100
But my professor's solution is:
import numpy as np

def f(v):
    m = 1.67262158e-27  # kg
    T = 5778.           # K
    k = 1.3806503e-23   # J/K
    return 4.*np.pi * (m/(2.*np.pi*k*T))**1.5 * v**2 * np.exp(-m*v**2/(2.*k*T))

v = np.linspace(100., 30000., 300)
fv = f(v)
vfv = zip(v, fv)
for x in vfv:
    print "%5.0f %.3e" % x

# print np.sum(fv*100.)
So, these are two very different pieces of code. From what I can tell, they produce the same result. I guess my question is, simply: why is my code incorrect?
Thank you!
EDIT:
So, I asked my professor about it and this was his response.
I think your code is fine. It would run much faster using numpy, but the problem didn't specify that you needed numpy. I might have docked one point for not looping through a list of v's (your variable i doesn't do anything). Also, you should have used v += 100. At most, these two things together would have been one point out of 10.
1st: Is there any better syntax for doing the range in my code, since my variable i doesn't do anything?
2nd: What is the purpose of v += 100?
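To make the two questions concrete, this is my guess (not his actual code) at the pattern he is hinting at, reusing the f(v) I defined above:
for v in range(100, 30001, 100):   # loop over the speeds directly: 100, 200, ..., 30000 m/s (300 values)
    print(v, f(v))                 # no unused counter i, no manual v = v + 100
# and v += 100 is just the augmented-assignment spelling of v = v + 100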
One thing to be careful about when dealing with numbers is implicit conversion between floats and ints.
One instance I can see in your code is that you use (3/2), which evaluates to 1 under Python 2's integer division, while the other code uses 1.5 directly.
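A quick way to see the difference (which behaviour you get depends on the interpreter you run):
print(3 / 2)    # 1 in Python 2 (integer division), 1.5 in Python 3
print(3 // 2)   # 1 in both: explicit floor division
print(3 / 2.)   # 1.5 in both: making one operand a float forces float division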
I have a question about file output in Python.
I am designing software that reads values from 3 sensors. Each sensor reads 100 values per second, and after each one-second batch I have to write them to a file.
time_memory = [k + i/100 for i in range(100)] # dividing 1 second into 100 intervals
x = [100 elements]
y = [100 elements]
z = [100 elements]
Below is the code that writes to the file.
for i in range(self.samples):
    self.time_memory[i] = file_time + self.time_index[i]
    f.write("{0} {1} {2} {3}\n".format(self.time_memory[i], x[i], y[i], z[i]))
So the result in the file will look like
time_value, x, y, z
time_value, x, y, z
...
However, when the measurement time goes beyond 8000 seconds, the software stops.
I think it is because of the amount of data the device has to handle, since the device I am using is rather old. (I cannot change the device because the computer is connected to an NI DAQ device.)
I have tried to find alternative ways to write the code above, but couldn't find any. Is there anyone who can help me with this problem?
One suggestion would be to write the data in binary mode. This should be faster than text mode (it also needs less space). To do that, open the file in binary mode like this:
f = open('filename.data', 'wb')
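As a minimal sketch of what the write loop could then look like (assuming the per-second lists time_memory, x, y and z from the question; struct is in the standard library):
import struct

with open('filename.data', 'ab') as f:               # append one 1-second batch per call
    for i in range(len(time_memory)):
        # pack each sample as four little-endian doubles: time, x, y, z
        f.write(struct.pack('<4d', time_memory[i], x[i], y[i], z[i]))

The rows can later be read back with struct.unpack('<4d', ...) or numpy.fromfile with dtype=numpy.float64.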
If I have a list of phases for sinusoidal data points and make a plot of time vs. phase, the plot drops back to 0 every time the data passes 2π. Is there a way I can manipulate the data so that it continues past 2π?
I'm currently using phase = [i % 2*np.pi for i in phase], but this doesn't work.
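(As an aside, that expression doesn't even compute the modulo it looks like: % and * share precedence and evaluate left to right, so it wraps i into [0, 2) and then scales by π. For example:
import numpy as np
i = 7.0
print(i % 2*np.pi)    # (i % 2) * np.pi  -> 3.14159...
print(i % (2*np.pi))  # 0.71681..., the wrap into [0, 2*pi) the expression was meant to do
Either way, wrapping is the opposite of what I actually want here.)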
This being about phase isn't important though. Let's say I had a list of data:
data = [0,1,2,0,1,2,0,1,2]
But I didn't want the data to reset to 0 after 2, so I want the data to be:
data = [0,1,2,3,4,5,6,7,8]
There are a few ways.
If you use numpy, the de facto standard math and array-manipulation library for Python, then just use numpy.unwrap.
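For instance, on phases that wrap at 2π (a small self-contained example, not from the original post):
import numpy as np

t = np.linspace(0, 4*np.pi, 200)
wrapped = t % (2*np.pi)          # drops back to 0 after every full turn
unwrapped = np.unwrap(wrapped)   # jumps larger than pi are undone, so it keeps increasing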
If you want to do it yourself, or for some reason don't want to use numpy, you can do this:
from math import pi

def my_unwrap(phase):
    phase_diffs = [phase[i+1] - phase[i] for i in range(len(phase)-1)]
    unwrapped_phases = [phase[0]]
    previous_phase = phase[0]
    for phase_diff in phase_diffs:
        # a jump of more than pi between samples is taken to be a wrap-around
        if phase_diff > pi:
            phase_diff -= 2*pi
        elif phase_diff < -pi:
            phase_diff += 2*pi
        previous_phase += phase_diff
        unwrapped_phases.append(previous_phase)
    return unwrapped_phases
Seems to work in basic test cases.
I am working on some very large scale linear programming problems. (Matrices are currently roughly 1000x1000 and these are the 'mini' ones.)
I thought I had the program running successfully, but I have realized that I am getting some very unintuitive answers. For example, say I maximize x+y+z subject to the constraints x+y<10 and y+z<5. I run this and get an optimal solution. Then I run the same problem with relaxed constraints: x+y<20 and y+z<5. Yet in the second run, my maximum decreases!
I have painstakingly gone through and assured myself that the constraints are loading correctly.
Does anyone know what the problem might be?
I found something in the documentation about lpx_check_kkt, which seems to tell you how much confidence you can place in a solution, but I don't know how to use it.
When I made an attempt, I got an error saying lpx_check_kkt is not defined.
I am adding some code as an addendum in the hope that someone can spot an error.
The result is that it claims an optimal solution has been found, and yet every time I raise an upper bound, the objective gets worse.
I have confirmed that my bounds are going up, not down.
size = 10000000 + 1
ia = intArray(size)
ja = intArray(size)
ar = doubleArray(size)

prob = glp_create_prob()
glp_set_prob_name(prob, "sample")
glp_set_obj_dir(prob, GLP_MAX)

# one row per constraint, each with an upper bound
glp_add_rows(prob, Num_constraints)
for x in range(Num_constraints):
    Variables.add_variables(Constraints_for_simplex)
    glp_set_row_name(prob, x+1, Variables.variers[x])
    glp_set_row_bnds(prob, x+1, GLP_UP, 0, Constraints_for_simplex[x][1])
    print 'we set the row_bnd for', x+1, ' to ', Constraints_for_simplex[x][1]

# one column per loop variable, non-negative, with objective coefficient 1
glp_add_cols(prob, len(All_Loops))
for x in range(len(All_Loops)):
    glp_set_col_name(prob, x+1, "".join(["x", str(x)]))
    glp_set_col_bnds(prob, x+1, GLP_LO, 0, 0)
    glp_set_obj_coef(prob, x+1, 1)

# fill the sparse constraint matrix in (ia, ja, ar) triplet form:
# first row
for x in range(1, len(All_Loops)+1):
    z = Constraints_for_simplex[0][0][x-1]
    ia[x] = 1; ja[x] = x; ar[x] = z

# first column of the remaining rows
x = len(All_Loops)+1
while x < Num_constraints + len(All_Loops):
    for y in range(2, Num_constraints+1):
        z = Constraints_for_simplex[y-1][0][0]
        ia[x] = y; ja[x] = 1; ar[x] = z
        x += 1

# the rest of the matrix
x = Num_constraints + len(All_Loops)
while x < len(All_Loops)*(Num_constraints-1):
    for z in range(2, len(All_Loops)+1):
        for y in range(2, Num_constraints+1):
            if x < len(All_Loops)*Num_constraints+1:
                q = Constraints_for_simplex[y-1][0][z-1]
                ia[x] = y; ja[x] = z; ar[x] = q
                x += 1

glp_load_matrix(prob, len(All_Loops)*Num_constraints, ia, ja, ar)
glp_exact(prob, None)
Z = glp_get_obj_val(prob)
Start by solving your problematic instances with different solvers and checking the objective function value. If you can export your model to .mps format (I don't know how to do this with GLPK, sorry), you can upload the mps file to http://www.neos-server.org/neos/solvers/index.html and solve it with several different LP solvers.
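If the bindings used in the question expose the full C API, the export should be possible with glp_write_mps (I haven't verified this with these particular bindings, but the function and the GLP_MPS_FILE constant exist in GLPK's C API):
# assumption: the glp_* binding above also wraps glp_write_mps
glp_write_mps(prob, GLP_MPS_FILE, None, "model.mps")
The resulting model.mps can then be uploaded to NEOS as suggested above.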