I'm trying to learn programming on quantum computers.
I have installed Qiskit in VS Code (all the Qiskit extensions available on the VS Code marketplace) and a Python interpreter (the "Python" and "Python for VSCode" extensions from the marketplace). I have also set up my Qiskit API token so that it works correctly.
When I run the example I get the error: "Instance of 'QuantumCircuit' has no 'h' member"
What should I do?
The code:
from qiskit import ClassicalRegister, QuantumRegister
from qiskit import QuantumCircuit, execute
q = QuantumRegister(2)
c = ClassicalRegister(2)
qc = QuantumCircuit(q, c)
qc.h(q[0])
qc.cx(q[0], q[1])
qc.measure(q, c)
job_sim = execute(qc, 'local_qasm_simulator')
sim_result = job_sim.result()
print(sim_result.get_counts(qc))
========================
The same error appears after adding the comment # pylint: disable=no-member
The errors in question are coming from pylint, a linter, not from Python itself. While pylint is pretty clever, some constructs (particularly those involving dynamically-added properties) are beyond its ability to understand. When you encounter situations like this, the best course of action is twofold:
1. Check the docs, code, etc. to make absolutely sure the code you've written is right (i.e. verify that the linter result is a false positive).
2. Tell the linter that you know what you're doing and that it should ignore the false positive.
user2357112 took care of the first step in the comments above, demonstrating that the property gets dynamically set by another part of the library.
The second step can be accomplished for pylint by adding a comment after each of the offending lines telling it to turn off that particular check for that particular line:
qc.h(q[0]) # pylint: disable=no-member
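If many lines trigger the same false positive, the same comment can instead be placed once at the top of the file, which disables the check for the whole module (at the cost of also hiding genuine no-member errors in that file); a minimal sketch:
# pylint: disable=no-member  (module scope: applies to the rest of the file)
from qiskit import ClassicalRegister, QuantumRegister
from qiskit import QuantumCircuit, execute
# ...every qc.h / qc.cx / qc.measure line below is now exempt from the check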
I am trying to change the user of a print job in the queue, as I want to create it on a service account but send the job to another user's follow-me printing queue. I'm using the win32 module in Python. Here is an example of my code:
from win32 import win32print

JOB_INFO_LEVEL = 2

pclExample = open("sample.pcl", "rb").read()  # WritePrinter expects raw bytes

printer_name = win32print.GetDefaultPrinter()
hPrinter = win32print.OpenPrinter(printer_name)
try:
    jobID = win32print.StartDocPrinter(hPrinter, 1, ("PCL Data test", None, "RAW"))
    # Here we try to change the user by extracting the job and then setting it again
    jobInfoDict = win32print.GetJob(hPrinter, jobID, JOB_INFO_LEVEL)
    jobInfoDict["pUserName"] = "exampleUser"
    win32print.SetJob(hPrinter, jobID, JOB_INFO_LEVEL, jobInfoDict, win32print.JOB_CONTROL_RESUME)
    try:
        win32print.StartPagePrinter(hPrinter)
        win32print.WritePrinter(hPrinter, pclExample)
        win32print.EndPagePrinter(hPrinter)
    finally:
        win32print.EndDocPrinter(hPrinter)
finally:
    win32print.ClosePrinter(hPrinter)
The problem is I get an error at the win32print.SetJob() line. If JOB_INFO_LEVEL is set to 1, then I get the following error:
(1804, 'SetJob', 'The specified datatype is invalid.')
This is a known bug to do with how the C++ works in the background (Issue here).
If JOB_INFO_LEVEL is set to 2, then I get the following error:
(1798, 'SetJob', 'The print processor is unknown.')
However, this is the print processor that came from win32print.GetJob(). Without trying to change the user, this prints fine, so I'm not sure what is wrong.
Any help would be hugely appreciated! :)
EDIT:
Using Python 3.8.5 and Pywin32 303
At the beginning I thought it was a misunderstanding (I was also a bit skeptical about the bug report), mainly because of the following paragraph (which apparently is wrong) from [MS.Docs]: SetJob function (emphasis is mine):
The following members of a JOB_INFO_1, JOB_INFO_2, or JOB_INFO_4 structure are ignored on a call to SetJob: JobId, pPrinterName, pMachineName, pUserName, pDrivername, Size, Submitted, Time, and TotalPages.
But I did some tests and ran into the problem. The problem is as described in the bug: filling JOB_INFO_* string members (which are LPTSTRs) with char* data.
I submitted [GitHub]: mhammond/pywin32 - Fix: win32print.SetJob sending ANSI to UNICODE API (with the fix applied, neither of the two errors pops up). It was merged to main on 2022-03-31.
When testing the fix, I was able to change various properties of an existing job. I was amazed that the data didn't even have to be valid (like below); I'm a bit curious what would happen when the job actually executes (right now I don't have a connection to a printer).
I changed pUserName to str(random.randint(0, 10000)) to make sure it changes on each script run (the screenshots, taken separately and assembled in Paint, are not reproduced here).
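In code, that test amounts to something like this (a sketch reusing the handles from the question's snippet):
import random

# overwrite the user name with a throwaway value; any string works here
jobInfoDict["pUserName"] = str(random.randint(0, 10000))
win32print.SetJob(hPrinter, jobID, JOB_INFO_LEVEL, jobInfoDict, win32print.JOB_CONTROL_RESUME)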
Ways to go further:
Wait for a new PyWin32 version (containing this fix) to be released. This is the recommended approach, but it will also take more time (and it's unclear when it will happen)
Get the sources, either:
from main
from b303 (the last stable branch), and apply the (above) patch (1)
then build the module (.pyd) and copy it into PyWin32's site-packages directory on your Python installation(s). This is faster, but it requires some deeper knowledge, and maintenance might become a nightmare
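To tell at run time whether an installed PyWin32 predates the fix, you can query the package version (a sketch; any build newer than 303 was cut after the 2022-03-31 merge):
from importlib.metadata import version  # Python 3.8+

print(version("pywin32"))  # 303 or lower predates the SetJob fix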
Footnotes
#1: Check [SO]: Run / Debug a Django application's UnitTests from the mouse right click context menu in PyCharm Community Edition? (#CristiFati's answer) (Patching UTRunner section) for how to apply patches (on Win).
It seems as though some scipy modules are messing with my warning filters. Consider the following code. My understanding is that it should only throw one warning because of the "once" filter I supplied to my custom Warning class. However, the warning after the scipy import gets shown as well.
This is with python 3.7 and scipy 1.6.3.
import warnings
class W(DeprecationWarning): pass
warnings.simplefilter("once", W)
warnings.warn('warning!', W)
warnings.warn('warning!', W)
from scipy import interpolate
warnings.warn('warning!', W)
This only seems to happen when I import certain scipy modules. A generic "import scipy" doesn't do this.
I've narrowed it down to the filters set in scipy.special.sf_error.py and scipy.sparse.__init__.py. I don't see how that code would cause the problem, but it does. When I comment those filters out, my code works as expected.
Am I misunderstanding something? Is there a workaround that doesn't involve overwriting warnings.filterwarnings/warnings.simplefilter?
This is an open Python bug: https://bugs.python.org/issue29672.
Note, in particular, the last part of the comment by Tom Aldcroft:
Even a documentation update would be useful. This could explain not only catch_warnings(), but in general the unexpected feature that if any package anywhere in the stack sets a warning filter, then that globally resets whether a warning has been seen before (via the call to _filters_mutated()).
The code in scipy/special/sf_error.py sets a warning filter, and that causes a global reset of which warnings have been seen before. (If you add another call of warnings.warn('warning!', W) to the end of your sample code, you should see that it does not raise a warning.)
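One workaround that doesn't touch warnings.filterwarnings/warnings.simplefilter is to deduplicate yourself rather than relying on the registry that gets reset; a minimal sketch (warn_once is a made-up helper):
import warnings

class W(DeprecationWarning): pass

_seen = set()

def warn_once(message, category=W):
    # manual "once" semantics: immune to other packages mutating the filters,
    # which resets warnings' internal record of what has been seen
    key = (message, category)
    if key not in _seen:
        _seen.add(key)
        warnings.warn(message, category, stacklevel=2)

warn_once('warning!')  # shown
warn_once('warning!')  # suppressed
from scipy import interpolate
warn_once('warning!')  # still suppressed, despite scipy's filter changes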
I am a beginner in FEniCS and I am trying to solve the Poisson equation with a boundary condition that is Perlin noise generated by opensimplex, a Python library.
I'm trying to define f, the boundary condition, with Expression().
I tried Expression('function(x[0],x[1],x[2])'), where function(x,y,z) = opensimplex.tmp.noise3d(x,y,z). However, as this opensimplex function is not handled by C++, I got a compilation error: Compilation failed!
Is there any solution to overcome this error?
I had a similar problem when starting to work with transient flows in FEniCS.
Defining a subclass of UserExpression before defining your variational form should enable the compilation. Something along these lines (I kept the noise call from your question; eval is evaluated in Python, so no C++ compilation of the noise function is needed):
from dolfin import *
import opensimplex

parameters["reorder_dofs_serial"] = True

### (Here you add your domain generation and FunctionSpace definition)

class MyExpression(UserExpression):
    def eval(self, values, x):
        values[0] = opensimplex.tmp.noise3d(x[0], x[1], x[2])  # the question's noise call
    def value_shape(self):
        return ()

f = MyExpression(degree=2)
print(assemble(f*dx(domain=UnitIntervalMesh(1))))
If this still doesn't enable compilation, please attach the relevant portions of your code, and we can try to work through them.
If you have a fixed dimension order (e.g. 2-D), you might also have to add this after reordering dofs:
parameters["form_compiler"]["quadrature_degree"]=2
Good luck!
I just put together the following "minimum" repro case (minimum in quotes because I wanted to ensure pylint threw no other errors, warnings, hints, or suggestions - meaning there's a bit of boilerplate):
pylint_error.py:
"""
Docstring
"""
import numpy as np
def main():
"""
Main entrypoint
"""
test = np.array([1])
print(test.shape[0])
if __name__ == "__main__":
main()
When I run pylint on this code (pylint pylint_error.py) I get the following output:
$> pylint pylint_error.py
************* Module pylint_error
pylint_error.py:13:10: E1136: Value 'test.shape' is unsubscriptable (unsubscriptable-object)
------------------------------------------------------------------
Your code has been rated at 1.67/10 (previous run: 1.67/10, +0.00)
It claims that test.shape is not subscriptable, even though it quite clearly is. When I run the code it works just fine:
$> python pylint_error.py
1
So what's causing pylint to become confused, and how can I fix it?
Some additional notes:
If I declare test as np.arange(1) the error goes away
If I declare test as np.zeros(1), np.zeros((1)), np.ones(1), or np.ones((1)) the error does not go away
If I declare test as np.full((1), 1) the error goes away
Specifying the type (test: np.ndarray = np.array([1])) does not fix the error
Specifying a dtype (np.array([1], dtype=np.uint8)) does not fix the error
Taking a slice of test (test[:].shape) makes the error go away
My first instinct says that the inconsistent behavior across the various NumPy methods (arange vs zeros vs full, etc.) suggests it's just a bug in NumPy. However, it's possible there's some underlying NumPy concept that I'm misunderstanding. I'd like to be sure I'm not writing code with undefined behavior that only works by accident.
I don't have enough reputation to comment, but it looks like this is an open issue: https://github.com/PyCQA/pylint/issues/3139
Until the issue is resolved on their end, I would just change the line to
print(test.shape[0])  # pylint: disable=E1136  # pylint/issues/3139
or add E1136 to the disable list in my pylintrc file.
As of November 2019:
As mentioned by one of the users in the discussion on GitHub, you could resolve the problem by downgrading both pylint and astroid, e.g. in requirements.txt:
astroid>=2.0, <2.3
pylint>=2.3, <2.4
or
pip install astroid==2.2.5 & pip install pylint==2.3.1
This was finally fixed with the release of astroid 2.4.0 in May 2020.
https://github.com/PyCQA/pylint/issues/3139
I'm working on some machine learning code, and today I lost about 6 hours because of a simple typo.
It was this:
numpy.empty(100, 100)
instead of
numpy.empty([100, 100])
As I'm not really used to numpy, I forgot the brackets. The code happily crunched the numbers and at the end, just before saving the results to disk, it crashed on that line.
Just to put things in perspective, I code on a remote machine in a shell, so an IDE is not really an option. Also, I doubt an IDE would catch this.
Here's what I've already tried:
Running pylint - well, pylint kinda works. After I disabled everything apart from errors and warnings, it even seems to be useful. But pylint has a serious issue with imported modules. As seen on the official bug tracker, the devs know about it, but cannot/won't do anything about it. There is a suggested workaround, but ignoring a whole module would not help in my case.
Running pychecker - if I create a code snippet with the mistake I made, pychecker reports the error - the same error as the Python interpreter. However, if I run pychecker on the actual source file (~100 LOC), it reports other errors (unused vars, unused imports, etc.), but the faulty numpy line is skipped.
At last I tried pyflakes, but it does even less checking than the pychecker/pylint combo.
So is there any reliable method to check code in advance, without actually running it?
A language with stronger type checking would have been able to save you from this particular error, but not from errors in general. There are plenty of ways to go wrong that pass static type checking. So if you have computations that take a long time, it makes sense to adopt the following strategies:
Test the code from end to end on small examples (that run in a few seconds or minutes) before running it on big data that will consume hours.
Structure long-running computations so that intermediate results are saved to files on disk at appropriate points in the computation. This means that when something breaks, you can fix the problem and restart the computation from the last save point (a minimal checkpointing sketch follows the debugging example below).
Run the code from the interactive interpreter, so that in the event of an exception you are returned to the interactive session, giving you a chance of being able to recover the data using a post-mortem debugging session. For example, suppose I have some long-running computation:
import numpy
import scipy.linalg

def work(A, C):
    B = scipy.linalg.inv(A)  # takes a long time when A is big
    return B.dot(C)
I run this from the interactive interpreter and it raises an exception:
>>> D = work(A, C)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "q22080243.py", line 6, in work
return B.dot(C)
ValueError: matrices are not aligned
Oh no! I forgot to transpose C! Do I have to do the inversion of A again? Not if I call pdb.pm():
>>> import pdb
>>> pdb.pm()
> q22080243.py(6)work()
-> return B.dot(C)
(Pdb) B
array([[-0.01129249, 0.06886091, ..., 0.08530621, -0.03698717],
[ 0.02586344, -0.04872148, ..., -0.04853373, 0.01089163],
...,
[-0.11463087, 0.15048804, ..., 0.0722889 , -0.12388141],
[-0.00467437, -0.13650975, ..., -0.13894875, 0.02823997]])
Now, unlike in Lisp, I can't just set things right and continue the execution. But at least I can recover the intermediate results:
(Pdb) D = B.dot(C.T)
(Pdb) numpy.savetxt('result.txt', D)
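As for strategy 2 above (saving intermediate results), the checkpointing can be as simple as dumping the expensive intermediate to disk and reloading it on the next run; a minimal sketch (the file name is illustrative):
import os
import numpy
import scipy.linalg

def work(A, C, checkpoint='B_inverse.npy'):
    if os.path.exists(checkpoint):
        B = numpy.load(checkpoint)  # reuse the slow inversion from a previous run
    else:
        B = scipy.linalg.inv(A)  # takes a long time when A is big
        numpy.save(checkpoint, B)  # save before the step that might fail
    return B.dot(C.T)  # with the transpose fix from the pdb session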
Do you use unit tests? There is really no better way.