win32com com_error library not found - python

I am trying to use a Python API which makes use of the win32com library. I am using sample code provided by the site, which works fine on one computer but fails on another. I am running it on Windows 7.
Specifically, it fails while checking for some binaries:
import win32event
import win32com.client
from win32com.client import constants
win32com.client.gencache.EnsureModule('{51F35562-AEB4-4AB3-99E8-AC9666344B64}', 0, 4, 0)
win32com.client.gencache.EnsureModule('{4D454F08-B770-11D4-A18E-00C04F6BB385}', 0, 17, 12801)
I have been looking around the internet, but although there are similar questions, I haven't found the answer to my problem.
At first I thought it was because of some 32/64-bit difference in the Windows or Python distributions. I checked that the Windows version was the same on both computers (64-bit Windows 7) and that I was using the same Python version. I also haven't found which modules the GUIDs inside EnsureModule() correspond to.
I am a newbie in these matters and I don't know COM objects well, so sorry if my description is not accurate enough.
Thanks in advance.
Edit:
The import of win32com works well. It doesn't return an error, for example, using the code from the question referenced as a possible duplicate:
from win32com.client import Dispatch
xlApp = win32com.client.Dispatch("Excel.Application")
This doesn't return any error.
The exact code and the error displayed are:
win32com.client.gencache.EnsureModule('{51F35562-AEB4-4AB3-99E8-AC9666344B64}', 0, 4, 0)
Traceback (most recent call last):
File "<ipython-input-2-18c1ec54bb40>", line 1, in <module>
win32com.client.gencache.EnsureModule('{51F35562-AEB4-4AB3-99E8-AC9666344B64}', 0, 4, 0)
File "C:\Users\jredondo\AppData\Local\Continuum\anaconda3\lib\site-packages\win32com\client\gencache.py", line 518, in EnsureModule
module = MakeModuleForTypelib(typelibCLSID, lcid, major, minor, progressInstance, bForDemand = bForDemand, bBuildHidden = bBuildHidden)
File "C:\Users\jredondo\AppData\Local\Continuum\anaconda3\lib\site-packages\win32com\client\gencache.py", line 287, in MakeModuleForTypelib
makepy.GenerateFromTypeLibSpec( (typelibCLSID, lcid, major, minor), progressInstance=progressInstance, bForDemand = bForDemand, bBuildHidden = bBuildHidden)
File "C:\Users\jredondo\AppData\Local\Continuum\anaconda3\lib\site-packages\win32com\client\makepy.py", line 223, in GenerateFromTypeLibSpec
tlb = pythoncom.LoadRegTypeLib(typelibCLSID, major, minor, lcid)
com_error: (-2147319779, 'Library not registered.', None, None)
I have tried on seven different computers with Windows 7 (it should be the same distribution, since it is the one the company is subscribed to) with different Python versions (Python 2.7 and 3.6, 32-bit and 64-bit), and on four of them it works well while on the other three it returns the com_error described above.
I have installed the same version (32-bit Python 3.5.2) on two different computers; on the first one both lines work well, while on the second computer only the first one works and the second throws an error. By "lines" I mean:
win32com.client.gencache.EnsureModule('{51F35562-AEB4-4AB3-99E8-AC9666344B64}', 0, 4, 0)
win32com.client.gencache.EnsureModule('{4D454F08-B770-11D4-A18E-00C04F6BB385}', 0, 17, 12801)
Maybe there is a problem related to Windows updates. I also haven't found exactly which type libraries the GUIDs inside EnsureModule() correspond to.
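For reference, one way to check what a given GUID refers to, and whether it is registered at all (which is what the com_error complains about), is to look it up under HKEY_CLASSES_ROOT\TypeLib, where registered type libraries are recorded. A minimal sketch using the standard winreg module; the GUID is the first one from the code above:
import winreg

guid = '{51F35562-AEB4-4AB3-99E8-AC9666344B64}'
try:
    with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, r'TypeLib\%s' % guid) as key:
        # Each subkey is a registered version; its default value is the library's name.
        version = winreg.EnumKey(key, 0)
        with winreg.OpenKey(key, version) as vkey:
            print(version, winreg.QueryValue(vkey, None))
except OSError:
    print('Type library %s is not registered on this machine' % guid)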

Related

Semantic versioning in dask repository

Why didn't the commit 7138f470f0e55f2ebdb7638ddc4dfe2e78671403 trigger a new major version of dask, since the function read_metadata is incompatible with older versions? The commit introduced the return of 4 values, but the old version only returned 3. According to semantic versioning, this would have been the correct behavior.
cudf got broken because of that commit.
Code from the issue:
>>> import cudf
>>> import dask_cudf
>>> dask_cudf.from_cudf(cudf.DataFrame({'a':[1,2,3]}),npartitions=1).to_parquet('test_parquet')
>>> dask_cudf.read_parquet('test_parquet')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/nvme/0/vjawa/conda/envs/cudf_15_june_25/lib/python3.7/site-packages/dask_cudf/io/parquet.py", line 213, in read_parquet
**kwargs,
File "/nvme/0/vjawa/conda/envs/cudf_15_june_25/lib/python3.7/site-packages/dask/dataframe/io/parquet/core.py", line 234, in read_parquet
**kwargs
File "/nvme/0/vjawa/conda/envs/cudf_15_june_25/lib/python3.7/site-packages/dask_cudf/io/parquet.py", line 17, in read_metadata
meta, stats, parts, index = ArrowEngine.read_metadata(*args, **kwargs)
ValueError: not enough values to unpack (expected 4, got 3)
dask_cudf==0.14 is only compatible with dask<=0.19. In dask_cudf==0.16 the issue is fixed.
Edit: Link to the issue
Whilst Dask does not have a concrete policy around the value of the version string, one could argue that in this particular case, the IO code is non-core, and largely being pushed by upstream (pyarrow) development rather than our own initiative.
We are sorry that your code broke, but of course picking the correct versions of packages and expecting downstream packages to catch up is part of the open source ecosystem.
You may want to raise this as a github issue, if you'd like to get input from more of the dask maintenance team. (There isn't really much to "answer" here, from a stackoverflow perspective)

write_byte and read_byte for the QMC5883 in Python (BeagleBone Black) raise Errno 110

I could not find suitable Python code for the BBB, as most example code targets the Raspberry Pi and Arduino. I am using VMware and Ubuntu 18 to run the Linux terminal from which I work with my BBB.
These are my first few lines of code to test the QMC5883 magnetometer, which I'm trying to translate from Arduino into Python:
import Adafruit_GPIO.I2C as I2C
import math
QMC5883 = I2C.Device(0x0D, 1)
QMC5883.write8(0x0b,0x01)
However, the following error keeps appearing on the BBB terminal, especially for the write-byte and read-byte calls:
root@beaglebone:~/user_python# python compass1.py
Traceback (most recent call last):
File "compass1.py", line 5, in <module>
QMC5883.write8(0x0b,0x01)
File "build/bdist.linux-armv7l/egg/Adafruit_GPIO/I2C.py", line 116, in write8
File "build/bdist.linux-armv7l/egg/Adafruit_PureIO/smbus.py", line 256, in write_byte_data
IOError: [Errno 110] Connection timed out
The reference I'm following to call the function is this Adafruit_GPIO/I2C.py.
Even using the smbus library, the error is still the same:
import smbus
Does anyone here know how to solve this Errno 110 connection timeout?
I am looking forward to any guidance on getting the QMC5883 magnetometer working with Python on the BBB.
Finally, after a month and more, I found the solution for the BeagleBone Black: change the I2C bus number from 1 to 2, either in your own code or in the library itself.
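A minimal sketch of that fix applied to the snippet from the question (the device address 0x0D and the register write are the ones from the original code; the only change is the bus number):
import Adafruit_GPIO.I2C as I2C

# Same device address as before, but on I2C bus 2 instead of bus 1
QMC5883 = I2C.Device(0x0D, 2)
QMC5883.write8(0x0b, 0x01)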
This GitHub link might help you get your bearing in degrees (yaw rotation) in no time: https://github.com/RigacciOrg/py-qmc5883l

python win32com Attribute Error in constants

I tried to use the win32com Python library to handle PowerPoint files. However, when I pass constants to the function in the following way,
new_pre.ExportAsFixedFormat(options.output,
win32com.client.constants.ppFixedFormatTypePDF,
win32com.client.constants.ppFixedFormatIntentPrint,
win32com.client.constants.msoFalse,
win32com.client.constants.ppPrintHandoutHorizontalFirst,
win32com.client.constants.ppPrintOutputSixSlideHandouts,
win32com.client.constants.msoFalse,
win32com.client.constants.ppPrintAll,
False,
False,
False,
False,
PrintRange=None
)
it raises an AttributeError:
Traceback (most recent call last):
File "D:/SharedDocuments/DokyPpf/main.py", line 40, in <module>
win32com.client.constants.msoFalse,
File "C:\Users\xxx\AppData\Local\Programs\Python\Python36\lib\site-packages\win32com\client\__init__.py", line 178, in __getattr__
raise AttributeError(a)
AttributeError: msoFalse
Note that there is a similar question and the solution there is to use
EnsureDispatch("PowerPoint.Application")
instead of
Dispatch("")
But, I already used EnsureDispatch("PowerPoint.Application") and it's still not working...
Here is the link to the API reference of the corresponding VBA.
When you are running EnsureDispatch, win32com will automatically generate Python code corresponding to the type library of interest.
Depending a bit on your environment, the Python modules will be located somewhere around C:\Users\username\AppData\Local\Temp\gen_py\3.6, with the modules being grouped by the CLSID of the type library you have generated code for.
For PowerPoint 2016, you would find a folder named 91493440-5A91-11CF-8700-00AA0060263B whose __init__.py contains all the generated constants, including e.g. ppFixedFormatTypePDF.
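If you want to confirm where these generated modules end up on a particular machine, win32com exposes the cache directory directly:
import win32com
print(win32com.__gen_path__)  # root of the gen_py cache, e.g. ...\AppData\Local\Temp\gen_py\3.6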
Now, notably, the MsoTriState enum containing msoFalse is not part of the PowerPoint type library, which is why you are seeing your AttributeErrors.
From a quick look at the documentation, we find that the enum is part of the core Office libraries, contained in mso.dll. Depending once again on your setup, which version of Office you're using, and which architecture it targets, you should be able to dig up its ID by searching HKEY_CLASSES_ROOT in your registry editor:
Here, we note that the type library has a CLSID of {2DF8D04C-5BFA-101B-BDE5-00AA0044DE52} and that the version number is 2.8. Knowing this, perhaps the easiest way to generate the code for the library, and thus the missing constant, is through win32com.client.gencache.EnsureModule('{2DF8D04C-5BFA-101B-BDE5-00AA0044DE52}', 0, 2, 8):
In [16]: win32com.client.constants.msoFalse
AttributeError: msoFalse
In [17]: win32com.client.gencache.EnsureModule('{2DF8D04C-5BFA-101B-BDE5-00AA0044DE52}', 0, 2, 8)
Out[17]: <module 'win32com.gen_py.2DF8D04C-5BFA-101B-BDE5-00AA0044DE52x0x2x8' from 'C:\\Users\\username\\AppData\\Local\\Temp\\gen_py\\3.6\\2DF8D04C-5BFA-101B-BDE5-00AA0044DE52x0x2x8.py'>
In [18]: win32com.client.constants.msoFalse
Out[18]: 0
Now, knowing that msoFalse translates to a 0, you could, of course, also just skip the entire process and replace all occurrences of the constant with a 0.
Note that without the EnsureModule call the generated file will not be loaded automatically; it can be imported explicitly with:
__import__('win32com.gen_py', globals(), locals(), ['2DF8D04C-5BFA-101B-BDE5-00AA0044DE52x0x2x8'], 0)
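Putting the pieces together, a minimal sketch of the export might look like the following (the file paths are placeholders; the type library GUID and version 0, 2, 8 are the ones dug out of the registry above):
import win32com.client
from win32com.client import constants

# Generate wrapper code for PowerPoint itself and for the core Office type library
app = win32com.client.gencache.EnsureDispatch("PowerPoint.Application")
win32com.client.gencache.EnsureModule('{2DF8D04C-5BFA-101B-BDE5-00AA0044DE52}', 0, 2, 8)

pres = app.Presentations.Open(r"C:\path\to\input.pptx")
pres.ExportAsFixedFormat(r"C:\path\to\output.pdf",
                         constants.ppFixedFormatTypePDF,
                         constants.ppFixedFormatIntentPrint,
                         constants.msoFalse)
pres.Close()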

Python Import Error + DLL load failed + sys.path

In Python 2.7.10, Anaconda 2.3.0 (64-bit), if I write
sys.path.append('C:\\Anaconda\\sms-tools-master\\software\\models\\utilFunctions_C\\')
print sys.path
I get
C:\Anaconda\sms-tools-master\workspace\A1>python A1Part1.py
['C:\Anaconda\sms-tools-master\workspace\A1', 'C:\Anaconda\python27.zip',
 'C:\Anaconda\DLLs', 'C:\Anaconda\lib', 'C:\Anaconda\lib\plat-win',
 'C:\Anaconda\lib\lib-tk', 'C:\Anaconda', 'C:\Anaconda\lib\site-packages',
 'C:\Anaconda\lib\site-packages\Sphinx-1.3.1-py2.7.egg',
 'C:\Anaconda\lib\site-packages\cryptography-0.9.1-py2.7-win-amd64.egg',
 'C:\Anaconda\lib\site-packages\win32', 'C:\Anaconda\lib\site-packages\win32\lib',
 'C:\Anaconda\lib\site-packages\Pythonwin',
 'C:\Anaconda\lib\site-packages\setuptools-17.1.1-py2.7.egg',
 'C:\Anaconda\sms-tools-master\software\models\utilFunctions_C\']
Is this absolute way of adding to sys.path correct? Is there a relative way?
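(As an aside, there is a relative alternative: build the path from the script's own location. This sketch assumes the layout shown in the paths above, where the script lives in sms-tools-master\workspace\A1.)
import os
import sys

# Build the path relative to this script's location instead of hard-coding it
here = os.path.dirname(os.path.abspath(__file__))
sys.path.append(os.path.join(here, '..', '..', 'software', 'models', 'utilFunctions_C'))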
In the next line of python code I wrote
from utilFunctions_C import wavread
I instantly get
ImportError: cannot import name wavread
when I run the code from cmd, but if I run the code inside IDLE I get:
['C:\Anaconda\sms-tools-master\workspace\A1',
'C:\Python27\Lib\idlelib', 'C:\Windows\system32\python27.zip',
'C:\Python27\DLLs', 'C:\Python27\lib',
'C:\Python27\lib\plat-win', 'C:\Python27\lib\lib-tk',
'C:\Python27', 'C:\Python27\lib\site-packages',
'C:\Anaconda\sms-tools-master\software\models\utilFunctions_C\']
Traceback (most recent call last):
  File "C:\Anaconda\sms-tools-master\workspace\A1\A1Part1.py", line 8, in <module>
    from utilFunctions_C import wavread
ImportError: DLL load failed: %1 is not a valid Win32 application.
So why is there a difference, and how do I tackle this issue? Thanks!
I commented
from utilFunctions_C import wavread
and used
from scipy.io.wavfile import read
Now my code is okay. I found that
utilFunctions.wavread() is a wrapper that uses scipy.io.wavfile.read()
and scales the data to floating point between -1 and 1. If you open up
utilFunctions.py you will see that.
You can use scipy.io.wavfile.read as well, as long as you scale the data
correctly by looking at the datatype in the wav file. Due to scaling, for
wav files that store samples as int16, you will see that
scipy.io.wavfile.read returns values that are 32767 times the values
returned by utilFunctions.wavread.
The lectures used the function to explain the process more explicitly.
Once you have got it, you can use the wrapper utilFunctions.wavread
for the rest of the course and in practical applications.
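A minimal sketch of that scaling (the filename is a placeholder; the 32767 factor is the one quoted above for wav files stored as int16):
import numpy as np
from scipy.io import wavfile

fs, data = wavfile.read('sound.wav')    # int16 samples for a 16-bit PCM file
x = data.astype(np.float32) / 32767.0   # scale to roughly [-1, 1], like utilFunctions.wavread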
Scroll in
https://class.coursera.org/audio-002/forum/search?q=Cannot+import+name+wavread#15-state-query=wavread&15-state-page_num=1
for more details.

How to recover a broken python "cPickle" dump?

I am using rss2email for converting a number of RSS feeds into mail for easier consumption. That is, I was using it because it broke in a horrible way today: On every run, it only gives me this backtrace:
Traceback (most recent call last):
File "/usr/share/rss2email/rss2email.py", line 740, in <module>
elif action == "list": list()
File "/usr/share/rss2email/rss2email.py", line 681, in list
feeds, feedfileObject = load(lock=0)
File "/usr/share/rss2email/rss2email.py", line 422, in load
feeds = pickle.load(feedfileObject)
TypeError: ("'str' object is not callable", 'sxOYAAuyzSx0WqN3BVPjE+6pgPU', ((2009, 3, 19, 1, 19, 31, 3, 78, 0), {}))
The only helpful fact that I have been able to construct from this backtrace is that the file ~/.rss2email/feeds.dat in which rss2email keeps all its configuration and runtime state is somehow broken. Apparently, rss2email reads its state and dumps it back using cPickle on every run.
I have even found the line containing the 'sxOYAAuyzSx0WqN3BVPjE+6pgPU' string mentioned above in the giant (>12MB) feeds.dat file. To my untrained eye, the dump does not appear to be truncated or otherwise damaged.
What approaches could I try in order to reconstruct the file?
The Python version is 2.5.4 on a Debian/unstable system.
EDIT
Peter Gibson and J.F. Sebastian have suggested directly loading from the
pickle file and I had tried that before. Apparently, a Feed class
that is defined in rss2email.py is needed, so here's my script:
#!/usr/bin/python
import sys
# import pickle
import cPickle as pickle
sys.path.insert(0,"/usr/share/rss2email")
from rss2email import Feed
feedfile = open("feeds.dat", 'rb')
feeds = pickle.load(feedfile)
The "plain" pickle variant produces the following traceback:
Traceback (most recent call last):
File "./r2e-rescue.py", line 8, in <module>
feeds = pickle.load(feedfile)
File "/usr/lib/python2.5/pickle.py", line 1370, in load
return Unpickler(file).load()
File "/usr/lib/python2.5/pickle.py", line 858, in load
dispatch[key](self)
File "/usr/lib/python2.5/pickle.py", line 1133, in load_reduce
value = func(*args)
TypeError: 'str' object is not callable
The cPickle variant produces essentially the same thing as calling
r2e itself:
Traceback (most recent call last):
File "./r2e-rescue.py", line 10, in <module>
feeds = pickle.load(feedfile)
TypeError: ("'str' object is not callable", 'sxOYAAuyzSx0WqN3BVPjE+6pgPU', ((2009, 3, 19, 1, 19, 31, 3, 78, 0), {}))
EDIT 2
Following J.F. Sebastian's suggestion of putting "printf
debugging" into Feed.__setstate__ in my test script, these are the
last few lines of output before Python bails out.
u'http:/com/news.ars/post/20080924-everyone-declares-victory-in-smutfree-wireless-broadband-test.html': u'http:/com/news.ars/post/20080924-everyone-declares-victory-in-smutfree-wireless-broadband-test.html'},
'to': None,
'url': 'http://arstechnica.com/'}
Traceback (most recent call last):
File "./r2e-rescue.py", line 23, in ?
feeds = pickle.load(feedfile)
TypeError: ("'str' object is not callable", 'sxOYAAuyzSx0WqN3BVPjE+6pgPU', ((2009, 3, 19, 1, 19, 31, 3, 78, 0), {}))
The same thing happens on a Debian/etch box using python 2.4.4-2.
How I solved my problem
A Perl port of pickle.py
Following J.F. Sebastian's comment about how simple the pickle
format is, I went out to port parts of pickle.py to Perl. A couple
of quick regular expressions would have been a faster way to access my
data, but I felt that the hack value and an opportunity to learn more
about Python would be worth it. Plus, I still feel much more
comfortable using (and debugging code in) Perl than Python.
Most of the porting effort (simple types, tuples, lists, dictionaries)
went very smoothly. Perl's and Python's different notions of
classes and objects have been the only issue so far where a bit more
than a simple translation of idioms was needed. The result is a module
called Pickle::Parse which after a bit of polishing will be
published on CPAN.
A module called Python::Serialise::Pickle existed on CPAN, but I
found its parsing capabilities lacking: It spews debugging output all
over the place and doesn't seem to support classes/objects.
Parsing, transforming data, detecting actual errors in the stream
Based upon Pickle::Parse, I tried to parse the feeds.dat file.
After a few iterations of fixing trivial bugs in my parsing code, I got
an error message that was strikingly similar to pickle.py's original
object not callable error message:
Can't use string ("sxOYAAuyzSx0WqN3BVPjE+6pgPU") as a subroutine
ref while "strict refs" in use at lib/Pickle/Parse.pm line 489,
<STDIN> line 187102.
Ha! Now we're at a point where it's quite likely that the actual data
stream is broken. Plus, we get an idea where it is broken.
It turned out that the first line of the following sequence was wrong:
g7724
((I2009
I3
I19
I1
I19
I31
I3
I78
I0
t(dtRp62457
Position 7724 in the "memo" pointed to that string
"sxOYAAuyzSx0WqN3BVPjE+6pgPU". From similar records earlier in the
stream, it was clear that a time.struct_time object was needed
instead. All later records shared this wrong pointer. With a simple
search/replace operation, it was trivial to fix this.
I find it ironic that I found the source of the error by accident
through Perl's feature that tells the user its position in the input
data stream when it dies.
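(An aside not in the original write-up: Python's own pickletools module can produce a position-annotated dump of the opcode stream, which makes it easier to find where a suspicious memo entry is defined and referenced.)
import pickletools

with open('feeds.dat', 'rb') as f:
    # Prints every pickle opcode together with its byte offset in the stream.
    pickletools.dis(f.read())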
Conclusion
I will move away from rss2email as soon as I find time to
automatically transform its pickled configuration/state mess to
another tool's format.
pickle.py needs more meaningful error messages that tell the user
about the position in the data stream (not the position in its own
code) where things go wrong.
Porting parts of pickle.py to Perl was fun and, in the end, rewarding.
Have you tried manually loading the feeds.dat file using both cPickle and pickle? If the output differs it might hint at the error.
Something like (from your home directory):
import cPickle, pickle
f = open('.rss2email/feeds.dat', 'r')
obj1 = cPickle.load(f)
f.seek(0)  # rewind before loading again with the pure-Python pickle
obj2 = pickle.load(f)
(you might need to open in binary mode 'rb' if rss2email doesn't pickle in ascii).
Pete
Edit: The fact that cPickle and pickle give the same error suggests that the feeds.dat file is the problem. Probably a change in the Feed class between versions of rss2email as suggested in the Ubuntu bug J.F. Sebastian links to.
Sounds like the internals of cPickle are getting tangled up. This thread (http://bytes.com/groups/python/565085-cpickle-problems) looks like it might have a clue.
'sxOYAAuyzSx0WqN3BVPjE+6pgPU' is most probably unrelated to the pickle's problem.
Post an error traceback for the following command (to determine what class defines the attribute that can't be called, i.e. the one that leads to the TypeError):
python -c "import pickle; pickle.load(open('feeds.dat'))"
EDIT:
Add the following to your code and run it (redirect stderr to a file, then use 'tail -2' on it to print the last 2 lines):
from pprint import pprint

def setstate(self, dict_):
    pprint(dict_, stream=sys.stderr, depth=None)
    self.__dict__.update(dict_)

Feed.__setstate__ = setstate
If the above doesn't yield an interesting output then use general troubleshooting tactics:
Confirm that 'feeds.dat' is the problem:
back up the ~/.rss2email directory
install rss2email into a virtualenv/pip sandbox (or use zc.buildout) to isolate the environment (make sure you are using feedparser.py from the trunk).
add a couple of feeds; keep adding feeds until 'feeds.dat' is larger than the current one. Run some tests.
try the old 'feeds.dat'
try the new 'feeds.dat' on the existing rss2email installation
See r2e bails out with TypeError bug on Ubuntu.
