I'm trying to get the name of a WMI Win32 class, but the __name__ attribute is not defined for it:
>>> import wmi
>>> machine = wmi.WMI()
>>> machine.Win32_ComputerSystem.__name__
I get the following error:
Traceback (most recent call last):
File "<pyshell#21>", line 1, in <module>
machine.Win32_ComputerSystem.__name__
File "C:\Python27\lib\site-packages\wmi.py", line 796, in __getattr__
return _wmi_object.__getattr__ (self, attribute)
File "C:\Python27\lib\site-packages\wmi.py", line 561, in __getattr__
return getattr (self.ole_object, attribute)
File "C:\Python27\lib\site-packages\win32com\client\dynamic.py", line 457, in __getattr__
raise AttributeError(attr)
AttributeError: __name__
I thought that the __name__ attribute is defined for all Python functions, so I don't know what the problem is here. How is it possible that this function doesn't have that attribute?
OK, the reason I thought it was a method is that machine.Win32_ComputerSystem() is callable, but I guess that isn't enough for something to be a method. I realise that it isn't a method.
However, this doesn't work:
>>> machine.Win32_ComputerSystem.__class__.__name__
'_wmi_class'
I want it to return 'Win32_ComputerSystem'. How can I do this?
From what I can tell looking at the documentation (specifically, based on this snippet), wmi.Win32_ComputerSystem is a class, not a method. If you want to get its name you could try:
machine.Win32_ComputerSystem.__class__.__name__
I've found a way to get the output that I want; however, it doesn't satisfy me.
repr(machine.Win32_ComputerSystem).split(':')[-1][:-1]
returns: 'Win32_ComputerSystem'
There must be a more Pythonic way to do this.
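A less string-hacky possibility (a sketch, not verified here): the wmi wrapper forwards unknown attributes to the underlying COM object, and WMI objects carry a Path_ system property whose Class field holds the class name:

>>> machine.Win32_ComputerSystem.Path_.Class
'Win32_ComputerSystem'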
I am really new to ontologies, especially owlready2. I loaded the basic Pizza example ontology and, I think, imported it successfully into Python (I checked whether I can see the classes, and I can, so I assume the import worked).
Then I used the following code to search for one specific class with the search_one() method:
from owlready2 import *

onto_path.append(r"C:/Users/AyselenKuru/Desktop/owl_docs/owlpizza.owl")
onto = get_ontology(r"C:/Users/AyselenKuru/Desktop/owl_docs/owlpizza.owl")
onto.load()

am = onto.search_one(is_a=onto.American)
for x in onto.classes():
    print(x)
I want to know how I can search for/get one specific class and an attribute. I get the following error message:
Traceback (most recent call last):
File "c:\Users\AyselenKuru\Desktop\pizza_ex1.py", line 6, in <module>
am= onto.search_one(is_a = onto.American)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\AyselenKuru\AppData\Local\Programs\Python\Python311\Lib\site-packages\owlready2\namespace.py", line 395, in search_one
def search_one(self, **kargs): return self.search(**kargs).first()
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\AyselenKuru\AppData\Local\Programs\Python\Python311\Lib\site-packages\owlready2\namespace.py", line 364, in search
else: v2 = v.storid
^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'storid'
The problem solved itself: the example OWL file had errors in its IRIs that broke the search.
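For what it's worth, the traceback also shows the mechanics of the failure: onto.American evaluated to None because its IRI could not be resolved, so search_one(is_a=None) blows up inside owlready2 when it reads .storid. A defensive sketch (the class name American comes from the question):

american = onto.American
if american is None:
    raise ValueError("Class 'American' was not found in the ontology; check its IRI")
am = onto.search_one(is_a=american)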
I placed a ClientConnectionError exception, generated by asyncio, on a multiprocessing.Queue. I did this to pass an exception generated in asyncio land back to a client in another thread/process.
My assumption is that this exception occurred during the deserialization process reading the exception out of the queue. It looks pretty much impossible to reach otherwise.
Traceback (most recent call last):
File "model_neural_simplified.py", line 318, in <module>
main(**arg_parser())
File "model_neural_simplified.py", line 314, in main
globals()[command](**kwargs)
File "model_neural_simplified.py", line 304, in predict
next_neural_data, next_sample = reader.get_next_result()
File "/project_neural_mouse/src/asyncs3/s3reader.py", line 174, in get_next_result
result = future.result()
File "/usr/lib/python3.6/concurrent/futures/_base.py", line 432, in result
return self.__get_result()
File "/usr/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56, in run
result = self.fn(*self.args, **self.kwargs)
File "model_neural_simplified.py", line 245, in read_sample
f_bytes = s3f.read(read_size)
File "/project_neural_mouse/src/asyncs3/s3reader.py", line 374, in read
size, b = self._issue_request(S3Reader.READ, (self.url, size, self.position))
File "/project_neural_mouse/src/asyncs3/s3reader.py", line 389, in _issue_request
response = self.communication_channels[uuid].get()
File "/usr/lib/python3.6/multiprocessing/queues.py", line 113, in get
return _ForkingPickler.loads(res)
File "/usr/local/lib/python3.6/dist-packages/aiohttp/client_exceptions.py", line 133, in __init__
super().__init__(os_error.errno, os_error.strerror)
AttributeError: 'str' object has no attribute 'errno'
I figure it's a long shot to ask, but does anyone know anything about this issue?
Python 3.6.8, aiohttp.__version__ == 3.6.0
Update:
I managed to reproduce the issue (credit to Samuel in the comments for improving the minimal reproducible test case, and later to xtreak at bugs.python.org for further distilling it to a pickle-only test case):
import pickle

ose = OSError(1, 'unittest')

class SubOSError(OSError):
    def __init__(self, foo, os_error):
        super().__init__(os_error.errno, os_error.strerror)

cce = SubOSError(1, ose)
cce_pickled = pickle.dumps(cce)
pickle.loads(cce_pickled)
./python.exe ../backups/bpo38254.py
Traceback (most recent call last):
File "/Users/karthikeyansingaravelan/stuff/python/cpython/../backups/bpo38254.py", line 12, in <module>
pickle.loads(cce_pickled)
File "/Users/karthikeyansingaravelan/stuff/python/cpython/../backups/bpo38254.py", line 8, in __init__
super().__init__(os_error.errno, os_error.strerror)
AttributeError: 'str' object has no attribute 'errno'
References:
https://github.com/aio-libs/aiohttp/issues/4077
https://bugs.python.org/issue38254
OSError has a custom __reduce__ implementation; unfortunately, it's not subclass friendly for subclasses that don't match the expected arguments. You can see the intermediate state of the pickling by calling __reduce__ manually:
>>> SubOSError.__reduce__(cce)
(modulename.SubOSError, (1, 'unittest'))
The first element of the tuple is the callable to call, the second is the tuple of arguments to pass. So when it tries to recreate your class, it does:
modulename.SubOSError(1, 'unittest')
having lost the information about the OSError the instance was originally created with.
If you must accept arguments that don't match what OSError.__reduce__/OSError.__init__ expects, you're going to need to write your own __reduce__ override to ensure the correct information is pickled. A simple version might be:
class SubOSError(OSError):
    def __init__(self, foo, os_error):
        self.foo = foo  # Must preserve information for pickling later
        super().__init__(os_error.errno, os_error.strerror)

    def __reduce__(self):
        # Pickle as type plus tuple of args expected by type
        return type(self), (self.foo, OSError(*self.args))
With that design, SubOSError.__reduce__(cce) would now return:
(modulename.SubOSError, (1, PermissionError(1, 'unittest')))
where the second element of the tuple is the correct arguments needed to recreate the instance (the change from OSError to PermissionError is expected; OSError actually returns its own subclasses based on the errno).
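With that override in place, a quick round-trip check (a sketch; the class must live in a module importable by name for pickle to find it):

>>> cce = SubOSError(1, OSError(1, 'unittest'))
>>> restored = pickle.loads(pickle.dumps(cce))
>>> restored.foo, restored.args
(1, (1, 'unittest'))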
This issue was fixed and merged to aiohttp's master branch on 25 Sep 2019. I'll update this answer if I note a release that contains the fix (feel free to edit this answer to note one).
GitHub issue with the fix:
https://github.com/aio-libs/aiohttp/issues/4077
Please help me write a custom partitioner function in Python for Spark.
I have a file describing the mapping between entry data keys and partition IDs. I first load it into a dict variable data_to_partition_map in main.py, then in Spark:
sc.parallelize(input_lines).partitionBy(numPartitions=xx, partitionFunc=lambda x : data_to_partition_map[x])
When I run this code locally, it gives the following error:
Traceback (most recent call last):
File "/home/weiyu/workspace/dice/process_platform_spark/process/roadCompile/main.py", line 111, in <module>
.partitionBy(numPartitions=tile_partitioner.num_partitions, partitionFunc=lambda x: tile_tasks_in_partitions[x])
File "/home/weiyu/app/odps-spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1785, in partitionBy
File "/home/weiyu/app/odps-spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1392, in __call__
File "/home/weiyu/app/odps-spark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 289, in get_command_part
AttributeError: 'function' object has no attribute '_get_object_id'
It seems Spark cannot serialize the lambda object. Does anyone know what causes this error and how to fix it? Thanks very much.
Have you tried using a named function that simply returns the dict item, and passing it as the partition function?
def return_key(x):
    return your_dict[x]
Pass it as partitionFunc.
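A minimal sketch of wiring that into the question's code (data_to_partition_map and input_lines are the names from the question; num_partitions is assumed to be defined):

def partition_func(key):
    # Look up the precomputed partition id for this key
    return data_to_partition_map[key]

sc.parallelize(input_lines).partitionBy(numPartitions=num_partitions,
                                        partitionFunc=partition_func)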
I was coding something at work and it seems that some C API functions provided by Python are not working. I tried mainly the functions that check types, for example:
import ctypes
python33_dll = ctypes.CDLL('python33.dll')
a_float = python33_dll.PyFloat_FromDouble(ctypes.c_double(2.0))
python33_dll.PyFloat_Check(a_float)
Traceback (most recent call last):
File "<pyshell#6>", line 1, in <module>
python33_dll.PyFloat_Check(a_float)
File "C:\Python33\lib\ctypes\__init__.py", line 366, in __getattr__
func = self.__getitem__(name)
File "C:\Python33\lib\ctypes\__init__.py", line 371, in __getitem__
func = self._FuncPtr((name_or_ordinal, self))
AttributeError: function 'PyFloat_Check' not found
Is there anything specific I need to do to use this function, or is it a bug?
docs.python.org/3.3/c-api/float.html?highlight=double#PyFloat_Check
PyFloat_Check() is a macro. You will need to expand it manually and call the correct function instead.
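A sketch of one workaround: PyFloat_Check(op) expands (roughly) to PyObject_TypeCheck(op, &PyFloat_Type), an exact-type or subtype test. You can approximate that using only symbols the DLL actually exports, namely the PyObject_IsInstance function and the PyFloat_Type data object:

import ctypes

# PyDLL keeps the GIL held while calling into the interpreter's C API
dll = ctypes.PyDLL('python33.dll')

dll.PyFloat_FromDouble.restype = ctypes.py_object
dll.PyFloat_FromDouble.argtypes = [ctypes.c_double]

dll.PyObject_IsInstance.restype = ctypes.c_int
dll.PyObject_IsInstance.argtypes = [ctypes.py_object, ctypes.c_void_p]

# Address of the exported PyFloat_Type type object
float_type = ctypes.addressof(ctypes.c_int.in_dll(dll, 'PyFloat_Type'))

a_float = dll.PyFloat_FromDouble(2.0)
print(dll.PyObject_IsInstance(a_float, float_type))  # expected: 1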
I want to put an instance of scapy.layers.dhcp.BOOTP on a multiprocessing.Queue. Every time I call put() the following exception occurs:
Traceback (most recent call last):
File "/usr/lib/python2.6/multiprocessing/queues.py", line 242, in _feed
send(obj)
PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed
Of course, trying to pickle the instance directly using pickle.dumps() also fails. But why is this class not picklable?
For all those who don't have scapy installed:
class BOOTP(Packet):
    name = "BOOTP"
    fields_desc = [ ByteEnumField("op", 1, {1: "BOOTREQUEST", 2: "BOOTREPLY"}),
                    ByteField("htype", 1),
                    ByteField("hlen", 6),
                    ByteField("hops", 0),
                    IntField("xid", 0),
                    ShortField("secs", 0),
                    FlagsField("flags", 0, 16, "???????????????B"),
                    IPField("ciaddr", "0.0.0.0"),
                    IPField("yiaddr", "0.0.0.0"),
                    IPField("siaddr", "0.0.0.0"),
                    IPField("giaddr", "0.0.0.0"),
                    Field("chaddr", "", "16s"),
                    Field("sname", "", "64s"),
                    Field("file", "", "128s"),
                    StrField("options", "") ]

    def guess_payload_class(self, payload):
        if self.options[:len(dhcpmagic)] == dhcpmagic:
            return DHCP
        else:
            return Packet.guess_payload_class(self, payload)

    def extract_padding(self, s):
        if self.options[:len(dhcpmagic)] == dhcpmagic:
            # set BOOTP options to DHCP magic cookie and make rest a payload of DHCP options
            payload = self.options[len(dhcpmagic):]
            self.options = self.options[:len(dhcpmagic)]
            return payload, None
        else:
            return "", None

    def hashret(self):
        return struct.pack("L", self.xid)

    def answers(self, other):
        if not isinstance(other, BOOTP):
            return 0
        return self.xid == other.xid
Are there any other ways to "transport" this instance to another subprocess?
Well, the problem is that you can't pickle the function type. It's what you get when you do type(some_user_function). See this:
>>> import types
>>> import pickle
>>> pickle.dumps(types.FunctionType)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python26\lib\pickle.py", line 1366, in dumps
Pickler(file, protocol).dump(obj)
File "C:\Python26\lib\pickle.py", line 224, in dump
self.save(obj)
File "C:\Python26\lib\pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
File "C:\Python26\lib\pickle.py", line 748, in save_global
(obj, module, name))
pickle.PicklingError: Can't pickle <type 'function'>: it's not found as __builtin__.function
So a function of that type is stored somewhere on the object you're trying to send. It's not in the code you pasted, so I guess it's on the superclass.
Maybe you can simply send all the arguments required to create an instance of scapy.layers.dhcp.BOOTP instead of the instance itself, to avoid the problem?
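For instance (a sketch; pkt stands for an existing BOOTP instance and q for the multiprocessing.Queue, neither of which is shown in the question):

# Queue only plain, picklable field values and rebuild in the consumer.
q.put({"op": pkt.op, "xid": pkt.xid, "chaddr": pkt.chaddr})
fields = q.get()
rebuilt = BOOTP(**fields)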
The other thing that may help to diagnose problems like these is to use the pickle module instead of cPickle (which must be getting used implicitly by queues.py).
I had a similar situation, getting a completely unhelpful message,
Can't pickle <type 'function'>: attribute lookup __builtin__.function failed
I wandered into the debugger, found the object being pickled, and tried passing it to
pickle.dump(myobj,open('outfile','w'),-1)
and got a much more helpful:
PicklingError: Can't pickle <function findAllRefs at 0x105809f50>:
it's not found as buildsys.repoclient.findAllRefs
Which pointed much more directly at the problematic code.
A solution I use is to str() the packet and then put the resulting raw bytes on the queue...
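A sketch of that workaround (pkt is assumed to be an existing BOOTP instance and q a multiprocessing.Queue; on Python 2, str(pkt) yields the packet's raw wire bytes):

q.put(str(pkt))         # raw bytes pickle without trouble
raw = q.get()
rebuilt = BOOTP(raw)    # re-parse the bytes back into a packet object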