I'm having some issues with SymPy's current assumptions.
Look at this thread. One of the hints said to use the assume module (reference here).
I tried the following computation: $\lim_{x \to \infty} \frac{\ln x}{x^k}$. I want to evaluate this limit for $k > 0$.
So I tried this:
with assuming(k > 0):
    limit(log(x) / (x**k), x, oo)
I also tried this:
eval(limit((log(x))/(x**k),x,oo),k>0)
But regardless, I get this error:
NotImplementedError: Result depends on the sign of -sign(k)
In the case of
with assume(k > 0):
    limit(log(x) / (x**k), x, oo)
I get this error:
TypeError: 'module' object is not callable
Any ideas what I'm doing wrong?
This seems to work. The first answer in the thread you linked says that "The assumption system of SymPy is kind of a mess right now". I'm not sure whether that has changed since then.
k = Symbol('k', positive=True)
print(limit(log(x) / (x**k), x, oo))
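The fix above can be sketched as a complete, self-contained snippet (assuming a recent SymPy; the assumption is attached to the symbol itself rather than supplied via the assume module):

```python
from sympy import Symbol, log, limit, oo

x = Symbol('x')
k = Symbol('k', positive=True)  # the k > 0 assumption lives on the symbol

# With k known to be positive, the limit evaluates cleanly.
print(limit(log(x) / x**k, x, oo))  # -> 0
```

Because `x**k` is known to grow without bound for positive `k`, SymPy no longer needs to branch on the sign of `k`, so the `NotImplementedError` disappears.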
I'm attempting to build an XML document out of request.POST data in a Django app:
ElementTree.Element("occasion", text=request.POST["occasion"])
PyCharm is giving me an error on the text parameter saying Expected type 'str', got 'Type[QueryDict]' instead. I only bring up PyCharm because I know its type checker can be overzealous sometimes. However, I haven't been able to find anything about this issue specifically.
Am I doing something wrong? Or should I try to silence this error?
Assuming you're not posting anything unusual (like JSON), request.POST['occasion'] should return a string: either the single value submitted under 'occasion', or the last value if several were submitted under that name. (If the key is missing it raises an error; use request.POST.get('occasion') to avoid that.)
There are apparently some HttpRequest-related issues with PyCharm, but the way to double-check whether that is happening here is to print out and/or type-check request.POST['occasion'] before that line, to confirm what it returns, e.g.:
occasion = request.POST['occasion']
print(type(occasion), occasion)
ElementTree.Element("occasion", text=occasion)
In the last line, assigning to a variable ahead of time may be a simple way to remove the PyCharm error without turning off warnings, depending on your tolerance for extra code.
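A runnable sketch of that variable-first approach, using a plain dict to stand in for request.POST since a real request object isn't available here:

```python
import xml.etree.ElementTree as ET

post_data = {'occasion': 'birthday'}  # stand-in for request.POST

occasion = post_data['occasion']
print(type(occasion), occasion)  # <class 'str'> birthday

# With a plain str in a local variable, PyCharm has nothing to flag.
elem = ET.Element('occasion', text=occasion)
print(ET.tostring(elem))  # b'<occasion text="birthday" />'
```

One side note: keyword arguments to `ET.Element` become XML *attributes*, so `text=...` here produces `<occasion text="...">` rather than setting the element's text content; if you wanted the latter, assign `elem.text = occasion` after construction.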
I wrote the code below, but I get the following message in PyCharm (2019.1):
"Parameterized generics cannot be used with class or instance checks"
def data_is_valid(data):
    keys_and_types = {
        'comment': (str, type(None)),
        'from_budget': (bool, type(None)),
        'to_member': (int, type(None)),
        'survey_request': (int, type(None)),
    }

    def type_is_valid(test_key, test_value):
        return isinstance(test_value, keys_and_types[test_key])

    type_is_valid('comment', 3)
I really do not understand this message well. Did I do something wrong or is it a bug in pycharm?
The error disappears if I explicitly typecast to tuple.
def type_is_valid(test_key, test_value):
    return isinstance(test_value, tuple(keys_and_types[test_key]))
That looks like a bug in PyCharm: it is a bit overeager in assuming that you're using the typing module in an unintended way. See this example where that assumption would have been correct:
The classes in the typing module are only useful in a type-annotation context, not for inspecting or comparing against actual classes, which is what isinstance tries to do. Since PyCharm sees a simple object subscripted with square brackets that do not contain a literal, it jumps to the wrong conclusion you are seeing.
Your code is fine, you can use it exactly as it is.
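To see the distinction PyCharm is confused about, compare a plain tuple of classes (which isinstance accepts) against a subscripted typing generic (which is rejected at runtime); a minimal sketch:

```python
import typing

# A plain tuple of classes is exactly what isinstance expects:
assert isinstance(3, (int, type(None)))
assert isinstance(None, (int, type(None)))

# A subscripted generic from the typing module is not -- this is what
# PyCharm mistakenly assumes the dict lookup produces:
try:
    isinstance([1, 2], typing.List[int])
except TypeError as exc:
    print(exc)  # Subscripted generics cannot be used with class and instance checks
```

The dict in the question stores plain tuples of classes, so the warning does not apply.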
I won't repeat what others have already said: this is a PyCharm bug. But if you are a perfectionist and the error hurts your eyes, add the comment
# noqa
to the line where the "error" is reported.
This was a known bug in PyCharm 2018, reported here.
There are some related bugs still in more recent PyCharm versions, e.g. PyCharm 2021.2.2, here.
In general, when you find that a PyCharm warning is incorrect, first isolate a simple test case that makes clear what PyCharm is actually wrong about. Once it is clear the warning is wrong, you should file a bug report (or search for existing reports first). Here that is clear because PyCharm says you cannot do something that you in fact can, so something is wrong.
Since it's agreed to be a bug, you can suppress it in PyCharm with the line:
# noinspection PyTypeHints
I was studying the AdaDelta optimization algorithm so I tried to implement it in Python, but there is something wrong with my code, since I get the following error:
AttributeError: 'numpy.ndarray' object has no attribute 'sqrt'
I did not find anything about what is causing that error. According to the message, it comes from this line of code:
rms_grad = np.sqrt(self.e_grad + epsilon)
This line is similar to this equation:
$RMS[g]_t = \sqrt{E[g^2]_t + \epsilon}$
I got the core equations of the algorithm in this article: http://ruder.io/optimizing-gradient-descent/index.html#adadelta
Just one more detail: I'm initializing the $E[g^2]_t$ matrix like this:
self.e_grad = (1 - mu)*np.square(nabla)
Where nabla is the gradient, similar to this equation:
$E[g^2]_t = \gamma E[g^2]_{t-1} + (1-\gamma)\, g_t^2$
(the first term is equal to zero in the first iteration, just like the line of code above)
So I want to know whether I'm initializing the E matrix the wrong way or taking the square root inappropriately. I tried the pow() function, but it doesn't work either. I would be very grateful for any help; I've been trying to fix this for weeks.
Additional details requested by andersource:
Here is the entire source code on github: https://github.com/pedrovbeltran/neural-networks-and-deep-learning/blob/experimental/modified-networks/network2_with_adadelta.py .
I think the problem is that self.e_grad_w is an ndarray of shape (2,) which further contains two additional ndarrays with 2d shapes, instead of directly containing data. This seems to be initialized in e_grad_initializer, in which nabla_w has the same structure. I didn't track where this comes from all the way back, but I believe once you fix this issue the problem will be resolved.
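The error can be reproduced without the full network code. When the accumulator is an object-dtype array of per-layer arrays (the (2,)-shaped structure described above), NumPy cannot apply the sqrt ufunc directly and falls back to calling a .sqrt() method on each element, which ndarray does not have. A minimal sketch:

```python
import numpy as np

# On a plain float array, np.sqrt works elementwise as expected:
print(np.sqrt(np.array([4.0, 9.0])))  # [2. 3.]

# An object-dtype array of differently-shaped arrays (e.g. one gradient
# array per layer) breaks the ufunc: NumPy tries element.sqrt() instead.
e_grad = np.array([np.ones((2, 3)), np.ones((3, 1))], dtype=object)
try:
    np.sqrt(e_grad + 1e-8)
except (AttributeError, TypeError) as exc:  # exact exception varies by NumPy version
    print(exc)

# One fix: apply the operation per layer instead of on the object array.
rms_grad = [np.sqrt(g + 1e-8) for g in e_grad]
print(rms_grad[0].shape)  # (2, 3)
```

(The shapes here are made up for illustration; in the linked code the elements would be the per-layer weight-gradient accumulators.)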
After copying retrain.py from the TensorFlow GitHub repository and opening it in PyCharm, on lines 794 and 802 PyCharm shows the following warning:
Type 'Variable' doesn't have expected attribute '__sub__'
Here is a screenshot if that helps:
Can somebody please explain:
What does this mean?
How can this be resolved or the warning suppressed?
Clearly PyCharm thinks that layer_weights does not have an attribute "__sub__", but what does this mean and why would a __sub__ attribute be necessary? The function variable_summaries() does not refer to an attribute __sub__ (copied/pasted starting at line 735):
def variable_summaries(var):
    """Attach a lot of summaries to a Tensor (for TensorBoard visualization)."""
    with tf.name_scope('summaries'):
        mean = tf.reduce_mean(var)
        tf.summary.scalar('mean', mean)
        with tf.name_scope('stddev'):
            stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
        tf.summary.scalar('stddev', stddev)
        tf.summary.scalar('max', tf.reduce_max(var))
        tf.summary.scalar('min', tf.reduce_min(var))
        tf.summary.histogram('histogram', var)
# end of function
Can somebody explain why an attribute __sub__ would be necessary?
After reading this post, I'm under the impression that a comment could be added to suppress this warning, possibly something like:
#type whatGoesHere: ??
#attribute __sub__: comment here?? # is this correct?
#param whatGoesHere: ??
Is something like this doable, and what should the comment be?
I prefer to not disable PyCharm's warnings as I find them helpful in many cases. Can somebody please provide some enlightenment on the above so as to avoid disabling this warning in PyCharm?
- Edit:
Thanks for the explanation Shu. For the moment this seems to be the best way to deal with this in PyCharm without disabling that inspection entirely:
# this comment is necessary to suppress an unnecessary PyCharm warning
# noinspection PyTypeChecker
variable_summaries(layer_weights)
If eventually somebody could inform me of a better option that would be great.
The - operator is called on var inside variable_summaries:
stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
Normally Python would look for the __sub__ method of var when evaluating the expression var - mean. However, a tf.Variable instance wraps state_ops.variable_op_v2 in order to support ops on GPU/CPU, and it does not have the __sub__ method Python would normally expect.
Therefore, this warning is inherent to the way TensorFlow heavily customizes standard Python operators on TensorFlow objects, supporting the expressions we are used to while enabling GPU/CPU computation with TensorFlow ops.
I'd say you can safely ignore this warning on any Tensorflow object. Unfortunately I don't know how you can suppress this warning in PyCharm.
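For background on why __sub__ matters at all, here is a minimal, TensorFlow-free sketch of how Python resolves the - operator (the class name is made up for illustration):

```python
class Tensorish:
    """Toy object that supports `-` by defining __sub__."""

    def __init__(self, value):
        self.value = value

    def __sub__(self, other):
        # Python rewrites `a - b` as `type(a).__sub__(a, b)`,
        # falling back to `type(b).__rsub__(b, a)` if that returns
        # NotImplemented. Without either, `a - b` raises TypeError.
        return Tensorish(self.value - other.value)

a = Tensorish(5.0)
b = Tensorish(2.0)
print((a - b).value)  # 3.0
```

PyCharm's warning says it cannot statically find such a method on tf.Variable; at runtime TensorFlow provides the operator anyway, which is why the code still works.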
I'm a beginning programmer and I would like to integrate a function using ode with 'dopri5', but I don't think I'm doing it correctly. The reference wasn't much help, and I'm getting an error I don't recognize. Originally I was using odeint, and it was working fine. Here is that chunk of code:
Itmp = odeint(te.rhs, Itmp, [xLim[i], xLim[i+1]], mxstep=10000,
atol=1e-11, rtol=1e-11, args=(f,))[1]
And my attempt to integrate using dopri5 is this:
Itmp = ode(te.rhs).set_integrator('dopri5', max_step=10000,atol=1e-11, rtol=1e-11)
The error I get is saying that Itmp is type 'ode' while I need it to be a float, like the odeint gives me.
Here is the specific error, (I try to subtract Itmp from a float):
unsupported operand type(s) for -: 'ode' and 'float'
And when I use the python debugger and try to print out Itmp, it gives me
<scipy.integrate._ode.ode object at 0x10d6ab410>
And after I continue it stops with the above error. I'm guessing I don't have the ode command written out correctly. Any help would be greatly appreciated!
The return value of the constructor of the ode class is an instance object of type ode. At this point, no integration has taken place. For that you need to call the step functions of the integrator. After the step, the new state is in the y field of the ode object.
Consult the documentation of the ode class for further details.
Note also that you passed neither the initial conditions nor the end of the integration interval to the integrator.
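A sketch of the full dopri5 workflow, with a made-up right-hand side standing in for te.rhs (note one pitfall when switching: odeint calls rhs(y, t, ...) while the ode class calls rhs(t, y, ...)):

```python
from scipy.integrate import ode

def rhs(t, y, f):
    """Stand-in for te.rhs; f mimics the extra parameter from args=(f,)."""
    return -f * y  # dy/dt = -f*y, so y(t) = y0 * exp(-f*t)

solver = ode(rhs).set_integrator('dopri5', atol=1e-11, rtol=1e-11)
solver.set_initial_value(1.0, 0.0)  # y(0) = 1.0 -- the old Itmp
solver.set_f_params(2.0)            # the extra argument f

Itmp = solver.integrate(1.0)[0]     # step the integrator up to t = 1
print(Itmp)                         # ~ exp(-2) = 0.13533...
```

After integrate() returns, the new state is also available as solver.y, and solver.successful() reports whether the step converged.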