How to specify a scalar multiplier for units when using Quantities? (Python)

The objective is to handle cell densities expressed as "1000/mm^3", i.e. thousands per cubic millimeter.
Currently I do this to handle "1/mm^3":
import quantities as pq
d1 = pq.Quantity(500000, "1/mm**3")
which gives:
array(500000) * 1/mm**3
But what I really need to do is to accept the values with units of "1000/mm^3". This should also be the form in which values are printed. When I try something like:
d1 = pq.Quantity(5, 1000/pq.mm**3)
I get the following error:
ValueError: units must be a scalar Quantity with unit magnitude, got 1000.0 1/mm**3
And if I try:
a = pq.Quantity(500, "1000/mm**3")
The output is:
array(500) * 1/mm**3
i.e. the 1000 just gets ignored.
Any idea how I can fix this? Any workaround?
(The requirement arises from the standard practice followed in the domain.)

One possible solution I have found is to create new units such as this:
k_per_mm3 = pq.UnitQuantity('1000/mm3', 1e3/pq.mm**3, symbol='1000/mm3')
d1 = pq.Quantity(500, k_per_mm3)
Then on printing 'd1', I get:
array(500) * 1000/mm3
which is what I required.
Is this the only way to do this? Or can the same be achieved with existing units (which is preferable)?
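For what it's worth, quantities does not seem to offer a built-in way to attach a plain scalar multiplier to existing units, so a UnitQuantity like the one above appears to be the usual workaround. Here is a small sketch (assuming rescale accepts the same unit strings as the Quantity constructor) showing that the custom unit stays consistent with the base units:
import quantities as pq
# custom unit: thousands per cubic millimeter (from the workaround above)
k_per_mm3 = pq.UnitQuantity('1000/mm3', 1e3 / pq.mm**3, symbol='1000/mm3')
d1 = pq.Quantity(500, k_per_mm3)   # prints as: array(500) * 1000/mm3
d2 = d1.rescale("1/mm**3")         # convert back to base units for checks
print(d1)
print(d2)                          # expected: 500000.0 1/mm**3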


How can I convert zeros in Abaqus field output to very low numbers using Python script?

I need to convert all zero values of U2 (displacement in Y direction) to very low but non-zero values, so that another output can later be divided by U2 without a division-by-zero issue.
Here's my attempt to do this:
from abaqusConstants import *
from odbAccess import *
# ***********************************************
odbPath="path_to_odb_file"
stepName="Step-1"
frameNumber=-1 #last frame in the stepName
sourceOutputFieldName='U' #displacement field
newOutputFieldName='U2_no_zeros'
# ************************************************
odb=session.openOdb(name=odbPath,readOnly=FALSE)
step=odb.steps[stepName]
frame=step.frames[frameNumber]
AllInstances=(odb.rootAssembly.instances.keys())
MyInstance=(AllInstances[-1])
instance1=odb.rootAssembly.instances[MyInstance]
sourceField=frame.fieldOutputs[sourceOutputFieldName]
subField=sourceField.getScalarField(componentLabel="U2")
Values=subField.bulkDataBlocks[0].data
NodeLabels=subField.bulkDataBlocks[0].nodeLabels
for value in Values:
    if value==0:
        value=1e-9
newField=frame.FieldOutput(name=newOutputFieldName, type=SCALAR, description="field")
newField.addData(position=NODAL, instance=instance1, labels=NodeLabels, data=Values)
odb.save()
odb.close()
The script runs without errors and the "U2_no_zeros" field is created, but it contains the same values as the original U2 field, so the loop doesn't work. In fact, this loop is just my loose idea, since I don't know exactly how it should be realized. I was expecting some errors leading me to the right solution, but for some reason the script runs with no error messages.
You are not changing the data inside the Values variable. Also, use the concatenate method from numpy to shape the data correctly.
Lastly, the addData method expects the data to be in tuple format.
import numpy
Values = numpy.concatenate(subField.bulkDataBlocks[0].data)
# concatenate flattens [[..],[..],...] into [......]
NodeLabels=subField.bulkDataBlocks[0].nodeLabels
for i,value in enumerate(Values):
    if value==0:
        Values[i] = 1e-9
newField=frame.FieldOutput(name=newOutputFieldName, type=SCALAR, description="field")
newField.addData(position=NODAL, instance=instance1, labels=NodeLabels, data=tuple(Values))
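If you prefer to avoid the explicit loop, the same fix can be written in a vectorized form with numpy.where. This is only a sketch that slots into the script above (it reuses subField, frame, instance1 and newOutputFieldName from there; behaviour inside Abaqus not re-verified):
import numpy
# flatten the nested bulk data, then replace exact zeros in one vectorized step
Values = numpy.concatenate(subField.bulkDataBlocks[0].data)
Values = numpy.where(Values == 0, 1e-9, Values)
NodeLabels = subField.bulkDataBlocks[0].nodeLabels
newField = frame.FieldOutput(name=newOutputFieldName, type=SCALAR, description="field")
newField.addData(position=NODAL, instance=instance1, labels=NodeLabels, data=tuple(Values))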

In `pint`, set per-unit `default_format`

Basically the title. In pint, is there a way to define the default string formatting per dimension or per unit, instead of 'across the board'?
Stated more precisely: I want to format a quantity's numerical value (i.e., magnitude), based on its physical unit.
Here is what I tried, based on the code shown in the docs:
from pint import UnitRegistry, Unit
ureg = UnitRegistry()
# Specific format for km won't stick...
ureg.default_format = ".0f~"
ureg.km.default_format = ".2fP"
ureg.km.default_format # '.0f~'
# ...as also seen here:
dist = 3 * ureg("km")
time = 500 * ureg("s")
print(f"{dist}, {time}")
# wanted: 3.00 kilometer, 500 s
# got: 3 km, 500 s
Especially when dealing with prices, it's practical to be able to set a 2-digit default, with all other units keeping a different default format.
PS: I know it's possible to set a default formatting on an individual quantity (e.g. dist.default_format = '.2f~'), but that's too specific for my use case. I want all quantities with the unit 'km' to be displayed with 2 decimals.
I have constructed quite the hacky solution:
from pint import UnitRegistry, Quantity
ureg = UnitRegistry()
# Setting specific formats:
formatdict = {ureg.km: '.2fP'} # extend as required
Quantity.default_format = property(lambda self: formatdict.get(self.u, ".0f~"))
# Works:
dist = 3 * ureg("km")
time = 500 * ureg("s")
print(f"{dist}, {time}") # 3.00 kilometer, 500 s
This works, but I'd be surprised if there isn't a better solution.
EDIT
It only works in a limited sense. ureg.default_format gets changed as well, which prohibits its use in e.g. a pandas.DataFrame:
ureg.default_format # <property at 0x21148ed5ae0>
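A possibly less intrusive workaround (just a sketch, not a pint feature): keep a plain dict of per-unit format strings and wrap the standard format() call in a small helper, so neither the Quantity class nor ureg.default_format is touched:
from pint import UnitRegistry

ureg = UnitRegistry()
ureg.default_format = ".0f~"

# per-unit format specs; anything not listed falls back to the given default
unit_formats = {ureg.km: ".2fP"}  # extend as required

def fmt(q, fallback=".0f~"):
    # look up the quantity's units, then delegate to pint's normal formatting
    return format(q, unit_formats.get(q.units, fallback))

dist = 3 * ureg("km")
time = 500 * ureg("s")
print(f"{fmt(dist)}, {fmt(time)}")  # expected: 3.00 kilometer, 500 s
The obvious downside is that the helper has to be called explicitly, so it does not help inside e.g. a pandas.DataFrame either.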

Convert TradingView's Pine Script to Python in VS Code

I want to convert the following Pine Script to Python to calculate the value for the vwap1 variable without plotting the results; I just want to calculate that value:
...
vwapScore(pds) =>
    mean = sum(volume*close,pds)/sum(volume,pds)
    vwapsd = sqrt(sma(pow(close-mean, 2), pds))
    (close-mean)/vwapsd
vwap1 = input(48)
plot(vwapScore(vwap1),title="ZVWAP2-48",color=#35e8ff, linewidth=2,transp=0.75)
...
I have tried the following:
...
def calculate_SMA(ser, days):
    sma = ser.rolling(window=days).mean()
    return sma

def calculate_Zscore(pds, volume, close):
    mean = sum(volume * close, pds) / sum(volume, pds)
    vwapsd = np.sqrt(calculate_SMA(pow(close - mean, 2), pds))
    return (close - mean) / vwapsd
...
I am using the calculate_Zscore function to calculate the value and add it to a pandas DataFrame, but it gives me different values than the ones shown on TradingView.
I just wanted to comment, but my reputation doesn't allow me :)
Pine Script's (TradingView's) sum surely has a different signature than Python's sum.
In particular, the second parameter has a totally different meaning.
In Pine Script, sum(x,y) gives you the sliding sum of the last y values of x (the sum of x for y bars back).
In Python, sum(x,y) sums the iterable x, and if y is passed (it's optional), that value is added to the sum of the items of the iterable. So if sum(x) == 4.5, then sum(x,10) == 14.5.
So your code surely needs to be changed, at least in the use of this method.
Hope this is helpful.
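A minimal sketch of how the function from the question could be adapted along these lines, assuming close and volume are pandas Series indexed by bar (I have not checked it against TradingView's output):
import numpy as np

def calculate_SMA(ser, days):
    return ser.rolling(window=days).mean()

def calculate_Zscore(pds, volume, close):
    # Pine's sum(x, pds) is a sliding sum over the last pds bars,
    # so use a rolling sum instead of Python's built-in sum()
    mean = (volume * close).rolling(window=pds).sum() / volume.rolling(window=pds).sum()
    vwapsd = np.sqrt(calculate_SMA((close - mean) ** 2, pds))
    return (close - mean) / vwapsd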

PYOMO Constraints - setting constraints over indexed variables

I have been trying to get into python optimization, and I have found that pyomo is probably the way to go; I had some experience with GUROBI as a student, but of course that is no longer possible, so I have to look into the open source options.
I basically want to solve a non-linear mixed integer problem in which I will minimize a certain ratio. The problem itself is setting up a power purchase agreement (PPA) in a renewable energy scenario. Depending on the electricity generated, you will have to either buy or sell electricity according to the PPA.
The only starting data is the generation; the PPA is the main decision variable, but I will need others. "buy", "sell", "b1" and "b2" are unknown without the PPA value. These are the equations:
[Image: equations that rule the problem (by hand).]
Using pyomo, I was trying to set up the problem as:
# Dataframe with my Generation information:
January = Data['Full_Data'][(Data['Full_Data']['Month'] == 1) & (Data['Full_Data']['Year'] == 2011)]
Gen = January['Producible (MWh)']
Time = len(Gen)
M=100
# Model variables and definition:
m = ConcreteModel()
m.IDX = range(Time)
m.PPA = Var(initialize = 2.0, bounds =(1,7))
m.compra = Var(m.IDX, bounds = (0, None))
m.venta = Var(m.IDX, bounds = (0, None))
m.b1 = Var(m.IDX, within = Binary)
m.b2 = Var(m.IDX, within = Binary)
And then, the constraint; only the first one, as I was already getting errors:
m.b1_rule = Constraint(
    expr = (((Gen[i] - PPA)/M for i in m.IDX) <= m.b1[i])
)
which gives me the error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-5-5d5f5584ebca> in <module>
1 m.b1_rule = Constraint(
----> 2 expr = (((Generacion[i] - PPA)/M for i in m.IDX) <= m.b1[i])
3 )
pyomo\core\expr\numvalue.pyx in pyomo.core.expr.numvalue.NumericValue.__ge__()
pyomo\core\expr\logical_expr.pyx in pyomo.core.expr.logical_expr._generate_relational_expression()
AttributeError: 'generator' object has no attribute 'is_expression_type'
I honestly have no idea what this means. I feel like this should be a simple problem, but I am struggling with the syntax. I basically have to apply a constraint to each individual value from "Generation"; there is no sum involved. All constraints are 1-to-1 constraints set so that the physical energy requirements make sense.
How do I set up the constraints like this?
Thank you very much
You have a couple of things to fix. First, the error you are getting is because you have "extra parentheses" around an expression, which Python turns into a generator. So, step 1 is to remove the outer parentheses, but that alone will not solve your issue.
You said you want to generate this constraint "for each" value of your index. Any time you want to generate copies of a constraint "for each" you will need to either do that by making a constraint list and adding to it with some kind of loop, or use a function-rule combination. There are examples of each in the pyomo documentation and plenty on this site (I have posted a ton if you look at some of my posts.) I would suggest the function-rule combo and you should end up with something like:
def my_constr(m, i):
    return m.Gen[i] - m.PPA <= m.b1[i] * M

m.C1 = Constraint(m.IDX, rule=my_constr)
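Since your current model does not define m.Gen yet, here is a hypothetical minimal setup around that rule, with the generation data copied into a Pyomo Param (the placeholder numbers just stand in for January['Producible (MWh)']):
import pyomo.environ as pyo

Gen = [3.2, 5.1, 0.0, 4.4]   # placeholder for January['Producible (MWh)']
M = 100

m = pyo.ConcreteModel()
m.IDX = pyo.RangeSet(0, len(Gen) - 1)
m.Gen = pyo.Param(m.IDX, initialize=dict(enumerate(Gen)))
m.PPA = pyo.Var(initialize=2.0, bounds=(1, 7))
m.b1 = pyo.Var(m.IDX, within=pyo.Binary)

def my_constr(m, i):
    # one copy of the constraint is generated for every index i in m.IDX
    return m.Gen[i] - m.PPA <= m.b1[i] * M

m.C1 = pyo.Constraint(m.IDX, rule=my_constr)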

How to create stiffness (nodal force/displacement) contour plot using Python script in Abaqus?

I would like to create a contour plot of stiffness from field output results. For this purpose I have to divide NFORC2 (Y component of nodal force vector) by U2 (displacement in Y direction).
I tried to use Create Field Output --> From Fields, where we can define our own output variables. Scalars can be extracted from vectors using getScalarField(), but Abaqus shows an error because NFORC is a whole-element variable while U is a nodal variable.
Thus it seems that the only way is to write a script that will convert NFORC. I've found a script that can save a variable as element nodal output. It was designed for stress, which is calculated at integration points, so I'm not sure it will convert NFORC properly, but it's the only option I have so far. It saves some output, so I assume it's correct.
However, Abaqus still doesn't let me divide this new variable by U2. It seems that I have to convert it to unique nodal first. For this I've found another Python script and added it to the first one. Apparently I did something wrong, because an error shows up.
Here are the two scripts combined into one:
from abaqusConstants import *
from odbAccess import *
import numpy as np
# ***********************************************
odbPath="path_to_odb_file"
stepName="Step-1"
frameNumber=-1 #last frame in the stepName
sourceOutputFieldName='NFORC2' #element forces field
newOutputFieldName='NFORC2_at_NODES_UNIQUE'
# ************************************************
odb=session.openOdb(name=odbPath,readOnly=FALSE)
step=odb.steps[stepName]
frame=step.frames[frameNumber]
sourceField=frame.fieldOutputs[sourceOutputFieldName]
subField=sourceField.getSubset(position=ELEMENT_NODAL)
Values=subField.bulkDataBlocks[0].data
NodeLabels=subField.bulkDataBlocks[0].nodeLabels
NodeLabels_unique, unq_idx = np.unique(NodeLabels, return_inverse=True)
Values_Averaged=np.zeros((NodeLabels_unique.size,Values.shape[1]))
unq_counts = np.bincount(unq_idx)
for i in xrange(0,Values.shape[1]):
    ValuesTemp = [item[i] for item in Values]
    unq_sum = np.bincount(unq_idx, weights=ValuesTemp)
    Values_Averaged[:,i] = unq_sum / unq_counts
newField=frame.FieldOutput(name=newOutputFieldName, field=Values_Averaged)
odb.save()
odb.close()
The error I get points to the following line of code:
newField=frame.FieldOutput(name=newOutputFieldName, field=Values_Averaged)
The error message is: TypeError: keyword error on field
Do you know what may be causing the error and how the script should look like?
UPDATE: I finally got this working. Here's the fixed script (odb has to be opened without "Read only" option):
from abaqusConstants import *
from odbAccess import *
import numpy as np
# ***********************************************
odbPath="path_to_odb_file"
stepName="Step-1"
frameNumber=-1 #last frame in the stepName
sourceOutputFieldName='NFORC2' #element forces field
newOutputFieldName='NFORC2_at_NODES_UNIQUE'
# ************************************************
odb=session.openOdb(name=odbPath,readOnly=FALSE)
step=odb.steps[stepName]
frame=step.frames[frameNumber]
AllInstances=(odb.rootAssembly.instances.keys())
MyInstance=(AllInstances[-1])
instance1=odb.rootAssembly.instances[MyInstance]
sourceField=frame.fieldOutputs[sourceOutputFieldName]
subField=sourceField.getSubset(position=ELEMENT_NODAL)
Values=subField.bulkDataBlocks[0].data
NodeLabels=subField.bulkDataBlocks[0].nodeLabels
NodeLabels_unique, unq_idx = np.unique(NodeLabels, return_inverse=True)
Values_Averaged=np.zeros((NodeLabels_unique.size,Values.shape[1]))
unq_counts = np.bincount(unq_idx)
for i in xrange(0,Values.shape[1]):
    ValuesTemp = [item[i] for item in Values]
    unq_sum = np.bincount(unq_idx, weights=ValuesTemp)
    Values_Averaged[:,i] = unq_sum / unq_counts
newField=frame.FieldOutput(name=newOutputFieldName,type=SCALAR, description="field")
newField.addData(position=NODAL, instance=instance1, labels=NodeLabels_unique, data=Values_Averaged)
odb.save()
odb.close()
The new "NFORC2_at_NODES_UNIQUE" field is correctly created. The only problem now is that when I divide this field by U2 displacement using Create Field Output --> From Fields, I get division by zero error. It's understandable because displacement can be zero in some locations. But what can be done in this case? Perhaps I should convert zeros to some very low values in my script. How can I do this?
Or maybe a better way would be to force the script to return 0 for each encountered division by zero.
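In plain numpy terms, either idea could look like this small standalone sketch (dummy arrays stand in for the Abaqus data; the names nforc2 and U2 are only illustrative, and this has not been run inside Abaqus):
import numpy as np

nforc2 = np.array([0.0, 2.5, -1.0])   # placeholder for the averaged NFORC2 values
U2 = np.array([0.0, 0.5, 0.0])        # placeholder for the U2 displacements

# Option 1: replace exact zeros with a tiny value before dividing
U2_no_zeros = np.where(U2 == 0, 1e-9, U2)
stiffness = nforc2 / U2_no_zeros

# Option 2: divide only where U2 is non-zero and return 0 everywhere else
stiffness = np.divide(nforc2, U2, out=np.zeros_like(nforc2), where=(U2 != 0))
print(stiffness)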
