Where does this pywinauto exception get its list from? - python

If I run:
from pywinauto.findwindows import find_windows
find_windows(best_match="affafa")
I get an exception that returns
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "c:\Python27\lib\site-packages\pywinauto\findwindows.py", line 204, in find_windows
best_match, wrapped_wins)
File "c:\Python27\lib\site-packages\pywinauto\findbestmatch.py", line 497, in find_best_control_matches
raise MatchError(items = name_control_map.keys(), tofind = search_text)
pywinauto.findbestmatch.MatchError: Could not find 'affafa' in '[u'CabinetWClass', u'Inbox (1,455) - ******#gmail.com - Gmail - Google Chrome', u'Chrome_WidgetWin_1', '', u'*new 2 - Notepad++Notepad++', u'python - Where does this pywinauto exception get its list from? - Stack Overflow - Google ChromeChrome_WidgetWin_1', u'C:\\Windows\\system32\\cmd.exe - pythonConsoleWindowClass1', u'C:\\Windows\\system32\\cmd.exe - pythonConsoleWindowClass0', u'C:\\Windows\\system32\\cmd.exe - pythonConsoleWindowClass2']' # this list has been shortened for security reasons
What I want to do is find out where that giant list of windows is coming from and call it directly.
So far I've been messing around with
find_windows(visible_only = False)
# and some of the other options listed in findwindows.py
but all the find_windows options only return a list of numbers which, from the documentation, I think are process IDs... which for some reason do NOT match up with what I have (for example, I create a "Calculator" and its process ID is 6566, then I run find_windows() and I cannot find that process ID in the output). So that's another issue I'm having... but I can solve this problem if I can get my giant list.
This is my first question asked on Stack Overflow. I hope I made you guys proud.

If you want to get a list of the names for all windows, you can use the following construction.
handles = pywinauto.findwindows.find_windows()
for w_handle in handles:
    wind = app.window_(handle=w_handle)
    print wind.Texts()
You may filter/extend the list with the following arguments of the find_windows function (a short example follows the list):
top_level_only Top level windows only (default=True)
visible_only Visible windows only (default=True)
enabled_only Enabled windows only (default=True)
active_only Active windows only (default=False)
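For example, a minimal sketch that dumps the title and class name of every top-level window, visible or not (this assumes the same Python 2 / pywinauto setup as the question; note that the numbers returned by find_windows() are window handles, not process IDs):
from pywinauto import findwindows, handleprops

handles = findwindows.find_windows(top_level_only=True, visible_only=False)
for w_handle in handles:
    # each handle is an HWND; handleprops reads the title and class name directly
    print handleprops.text(w_handle), handleprops.classname(w_handle)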

Related

Python OPC UA Client - Write variable using BrowseName

I can't find the correct syntax for assigning a value to a variable using its BrowseName.
I am testing with the 'flag1' boolean variable because it is easier to debug, but my goal is to be able to write to all variables, including the arrays.
If I try to use the index number it works fine.
import pyOPCClient as opc
client = opc.opcConnect('192.168.5.10')
opc.write_value_bool(client, 'ns=4;s="opcData"."flag1"', True)
client.disconnect()
Here is my function to write a boolean:
##### Function to WRITE a Boolean into Bool Object Variable - Requires Object Name #####
def write_value_bool(client, node_id, value):
    client_node = client.get_node(node_id)  # get node
    client_node_value = value
    client_node_dv = ua.DataValue(ua.Variant(client_node_value, ua.VariantType.Boolean))
    client_node.set_value(client_node_dv)
    print("Value of : " + str(client_node) + ' : ' + str(client_node_value))
I am getting this error:
PS C:\Users\ALEMAC\Documents\Python Scripts> & C:/ProgramData/Anaconda3/python.exe "c:/Users/ALEMAC/Documents/Python Scripts/opctest.py"
Requested session timeout to be 3600000ms, got 30000ms instead
Traceback (most recent call last):
File "c:\Users\ALEMAC\Documents\Python Scripts\opctest.py", line 5, in <module>
opc.write_value_bool(client, 'ns=4;s="opcData"."flag1"', True)
File "c:\Users\ALEMAC\Documents\Python Scripts\pyOPCClient.py", line 49, in write_value_bool
client_node.set_value(client_node_dv)
File "c:\Users\ALEMAC\Documents\Python Scripts\opcua\common\node.py", line 217, in set_value
self.set_attribute(ua.AttributeIds.Value, datavalue)
File "c:\Users\ALEMAC\Documents\Python Scripts\opcua\common\node.py", line 263, in set_attribute
result[0].check()
File "c:\Users\ALEMAC\Documents\Python Scripts\opcua\ua\uatypes.py", line 218, in check
raise UaStatusCodeError(self.value)
opcua.ua.uaerrors._auto.BadNodeIdUnknown: "The node id refers to a node that does not exist in the server address space."(BadNodeIdUnknown)
I see you use the pyOPCClient package. I'm not sure if this is maintained anymore (last update: 2014-01-09, see here).
You can switch to opcua-asyncio which can address nodes with the browse services like this:
myvar = await client.nodes.root.get_child(["0:Objects",..., "4:flag1"])
And here is the complete example
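As a rough end-to-end illustration of that approach (my own sketch, assuming the asyncua package; the endpoint URL and the browse path segments "4:opcData" / "4:flag1" are guesses based on your NodeId string, not read from your server):
import asyncio
from asyncua import Client, ua

async def main():
    async with Client(url="opc.tcp://192.168.5.10:4840") as client:
        # walk the address space by BrowseName instead of hard-coding a NodeId
        flag1 = await client.nodes.root.get_child(["0:Objects", "4:opcData", "4:flag1"])
        await flag1.write_value(True, ua.VariantType.Boolean)

asyncio.run(main())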
You are mixing NodeId and BrowseName syntax. Just keep the NodeId:
opc.write_value_bool(client, 'ns=4;i=20', True)
To write the other datapoints ("args") you will also have to find their NodeIds.
You can use the OPC UA Browse service for that.
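For instance, a small sketch of browsing with the synchronous python-opcua client that your wrapper already uses (the endpoint is an assumption; the loop just lists each child's NodeId next to its BrowseName so you can look up "args" and the other datapoints):
from opcua import Client

client = Client("opc.tcp://192.168.5.10:4840")
client.connect()
try:
    objects = client.get_objects_node()
    for child in objects.get_children():
        # prints e.g. "ns=4;i=20 QualifiedName(4:flag1)" so you can map BrowseNames to NodeIds
        print(child.nodeid.to_string(), child.get_browse_name())
finally:
    client.disconnect()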
Apparently, when you create a tag on the S7-1200, the NodeId identifier type is set to NUMERIC by default. It looks like you can change it, though, using another piece of software called SiOME:
https://support.industry.siemens.com/cs/document/109793221/how-do-you-change-the-node-id-identifier-type-of-the-nodes-in-the-s7-1200-opc-ua-server-from-numeric-to-string-?dti=0&dl=en&lc=nl-NL

Python Read Excel file with Error - "We Found a problem with some content..."

Here is my problem. We have an Excel-based report in which business users enter comments into two separate fields and select a code from a drop-down. We then have a manual process that collects those files and pushes the comments and codes to a Snowflake table so they can be used in various reports.
I am trying to improve the process with a Python script that will collect the files, copy them to a staging_folder location, then read in the data from the sheet, append it all together, do some cleanup and push to Snowflake. The plan is that this would be completely automated - but this is where we run into issues.
Initial step works perfectly. I have a loop that grabs the files based on the previous business day date, copies them to a staging folder. There are typically 32 files each day.
Next step reads those files to append to a dataframe. Here is the function that is loading the Excel files in my Python script.
def load_files():
    file_list = glob.glob(file_path + r'\*')
    df = pd.DataFrame()
    print("Importing data to Pandas DF...")
    for file in file_list:
        try:
            wb = load_workbook(file)
            ws = wb["Daily Outs"]
            data = ws.values
            cols = next(data)[1:]
            data = list(data)
            idx = [r[0] for r in data]
            data = (islice(r, 1, None) for r in data)
            data_1 = pd.DataFrame(data, index=idx, columns=cols)
            df = df.append(data_1, sort=False)
            print(file + " Imported to Df...")
        except Exception as e:
            print("Error: " + e + " When attempting to open file: " + file)
            # error_notify(e)
    print(df.head(10))
    return df
The problem comes when we have files with some sort of corruption. When opened manually, those files show an error like the one quoted in the title ("We found a problem with some content...").
I thought that, with my try/except code above, this would be caught and I would be alerted via the error_notify(e) function. Instead, the Python script crashes with an error like this: zipfile.BadZipFile: File is not a zip file
During handling of the above exception, another exception occurred.
There is more to the error, but I only copied & pasted this part in some communication with folks in the office. It is impossible to replicate the error on our own - I have no idea how the files get corrupted in this way - except that there are multiple people accessing the files throughout the day.
The way to make the file readable is completely manual - we must open the file, get that error, hit yes, and save the file over the existing one. Then re-launch the script. But since the try, except isn't catching it and alerting us to the failure, we have to run the script manually to see if it works or not.
Two questions - am I doing something incorrect in my try, except command? I am admittedly weak in error catching so my first thought is there is more I can do there to make that work. Secondly, is there a Python way to get past that error in the Excel workbook files?
Here is the error text:
Traceback (most recent call last):
File "G:/Replenishment/Reporting/00 - I&A Replenishment/02 - Service
Level/Daily Outs Comment Capture/Python/daily_outs_missed_files.py", line 48, in load_files
wb = load_workbook(file)
File "C:\ProgramData\Anaconda3\lib\site-packages\openpyxl\reader\excel.py", line 314, in load_workbook
data_only, keep_links)
File "C:\ProgramData\Anaconda3\lib\site-packages\openpyxl\reader\excel.py", line 124, in init
self.archive = _validate_archive(fn)
File "C:\ProgramData\Anaconda3\lib\site-packages\openpyxl\reader\excel.py", line 96, in _validate_archive
archive = ZipFile(filename, 'r')
File "C:\ProgramData\Anaconda3\lib\zipfile.py", line 1222, in init
self._RealGetContents()
File "C:\ProgramData\Anaconda3\lib\zipfile.py", line 1289, in _RealGetContents
raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "G:/Replenishment/Reporting/00 - I&A Replenishment/02 - Service Level/Daily Outs Comment Capture/Python/daily_outs_missed_files.py", line 123, in <module>
main()
File "G:/Replenishment/Reporting/00 - I&A Replenishment/02 - Service Level/Daily Outs Comment Capture/Python/daily_outs_missed_files.py", line 86, in main
df_output = df_clean()
File "G:/Replenishment/Reporting/00 - I&A Replenishment/02 - Service Level/Daily Outs Comment Capture/Python/daily_outs_missed_files.py", line 68, in df_clean
df = load_files()
File "G:/Replenishment/Reporting/00 - I&A Replenishment/02 - Service Level/Daily Outs Comment Capture/Python/daily_outs_missed_files.py", line 61, in load_files
print("Error: " + e + " When attempting to open file: " + file)
TypeError: can only concatenate str (not "BadZipFile") to str
Your try/except code looks correct. All user-defined exceptions in Python should be classes derived from Exception. See BaseException and Exception in the Python documentation:
"Exception (...) All user-defined exceptions should also be derived from this class" - see also the exception class hierarchy tree at the end of that documentation section.
If your Python script "crashes", it means one of the library procedures throws an exception that is not based on the Exception class, something that "should not" happen. You could look at the traceback and try catching the offending exception type separately, or find which part of the source code in which library is the cause, fix it and submit a PR. Here are examples of a bad and a good way of deriving your own exceptions:
class MyBadError(BaseException):
    """
    my bad exception, do not make yours that way
    """
    pass
instead of the recommended
class MyGoodError(Exception):
    """
    exception based on Exception
    """
    pass
Where and what exactly fails is still a bit of a mystery, but the problem with the exception in your traceback is not new; see the zipfile.BadZipfile issue in the pandas discussion. Note that xlrd, used by pandas to read Excel workbook data, is currently "no-maintainer-ware" (see the declaration about xlrd from its authors); in case of any issues the recommendation is to use openpyxl instead or fix the issues yourself (the pandas maintainers are doing a Pontius Pilate on that, but happily keep xlrd as a dependency). I suggest you catch BadZipfile as a special, known corruption error separately from all other exceptions; see the Python error handling tutorial for example code (you have probably already seen it, this is for other readers). If that does not work, I can trace it in the source code of your libraries / Python modules to the exact offending section and find the culprit, if you reach out directly.
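For instance, a minimal sketch of the loop body with the corruption case handled separately (this keeps your original structure and only splits the except clauses; error_notify is your own helper, and note str(e) - concatenating the exception object itself is what raised the TypeError in your second traceback):
import zipfile

for file in file_list:
    try:
        wb = load_workbook(file)
        # ... build data_1 from the "Daily Outs" sheet and append to df as before ...
    except zipfile.BadZipFile as e:
        # known corruption case: the .xlsx container itself is damaged
        print("Corrupt workbook " + file + ": " + str(e))
        # error_notify(e)
    except Exception as e:
        print("Error: " + str(e) + " when attempting to open file: " + file)
        # error_notify(e)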

how to use trigger.adddependencies in pyzabbix

I'm a newbie in Python and coding. I'm trying to use pyzabbix to add trigger dependencies, but an error occurs.
When I run
zapi.trigger.addDependencies(triggerid, dependsOnTriggerid)
an error occurs
pyzabbix.ZabbixAPIException: ('Error -32500: Application error., No permissions to referred object or it does not exist!', -32500)
i get the "triggerid" and "dependsOnTriggerid" by trigger.get:
triggerid_info = zapi.trigger.get(filter={'host': 'xx','description': 'xx'},output=['triggerid'], selectDependencies=['description'])
triggerid = triggerid_info[0]['triggerid']
dependsOnTriggerid = trigger_info[0]['dependencies'][0]['triggerid']
The results are as follows:
Traceback (most recent call last):
File "E:/10.python/2019-03-07/1.py", line 14, in <module>
zapi.trigger.addDependencies(triggerid, dependsOnTriggerid)
File "D:\Program Files\Python37\lib\site-packages\pyzabbix\__init__.py", line 166, in fn
args or kwargs
File "D:\Program Files\Python37\lib\site-packages\pyzabbix\__init__.py", line 143, in do_request
raise ZabbixAPIException(msg, response_json['error']['code'])
pyzabbix.ZabbixAPIException: ('Error -32500: Application error., No permissions to referred object or it does not exist!', -32500)
Did I get the wrong triggerid, or am I using the method in a wrong way? Thanks so much.
To add a dependency means that you need to link two different triggers (from the same host or from another one) with a master-dependent logic.
You are trying to add the dependency triggerid -> dependsOnTriggerid, where the second id is obtained from a supposedly existing dependency (trigger_info[0]['dependencies'][0]['triggerid']); this makes little sense and I suppose it's the cause of the error.
You need to get both triggers' triggerids and then add the dependency:
masterTriggerObj = zapi.trigger.get(...)      # filter to get your master trigger here
dependentTriggerObj = zapi.trigger.get(...)   # filter to get your dependent trigger here
result = zapi.trigger.adddependencies(triggerid=dependentTriggerObj[0]['triggerid'], dependsOnTriggerid=masterTriggerObj[0]['triggerid'])
The method "trigger.addDependencies" need only one parameter,and it should be a dict or some other object/array.The following code solves the problem.
trigger_info = zapi.trigger.get(filter={xx},output=['triggerid'])
trigger_depends_info_193 = zapi.trigger.get(filter={xx},output=['triggerid'])
trigger_dependson_193 = {"triggerid": trigger_info[0]['triggerid'], "dependsOnTriggerid": trigger_depends_info_193[0]['triggerid']}
zapi.trigger.adddependencies(trigger_dependson_193)

P4Python run method does not work on empty folder

I want to search a Perforce depot for files.
I do this from a python script and use the p4python library command:
list = p4.run("files", "//mypath/myfolder/*")
This works fine as long as myfolder contains some files. I get a python list as a return value. But when there is no file in myfolder the program stops running and no error message is displayed. My goal is to get an empty python list, so that I can see that this folder doesn't contain any files.
Does anybody have any ideas? I could not find information in the p4 files documentation or on Stack Overflow.
I'm going to guess you've got an exception handler around that command execution that's eating the exception and exiting. I wrote a very simple test script and got this:
C:\Perforce\test>C:\users\samwise\AppData\local\programs\python\Python36-32\python files.py
Traceback (most recent call last):
File "files.py", line 6, in <module>
print(p4.run("files", "//depot/no such path/*"))
File "C:\users\samwise\AppData\local\programs\python\Python36-32\lib\site-packages\P4.py", line 611, in run
raise e
File "C:\users\samwise\AppData\local\programs\python\Python36-32\lib\site-packages\P4.py", line 605, in run
result = P4API.P4Adapter.run(self, *flatArgs)
P4.P4Exception: [P4#run] Errors during command execution( "p4 files //depot/no such path/*" )
[Error]: "//depot/no such path/* - must refer to client 'Samwise-dvcs-1509687817'."
Try something like this?
import os

if len(os.listdir('//mypath/myfolder/')) == 0:  # do not execute p4.run if the directory is empty
    list = []
else:
    list = p4.run("files", "//mypath/myfolder/*")

Shelve keeps forgetting the variables it holds

I'm teaching myself Python 3. I wanted to put my newly acquired skills to use and write a command-line backup program. I'm trying to store the default backup and save locations with the shelve module, but it seems to forget the variables I save whenever I close or restart the program.
Here is the main function that works with the shelves:
def WorkShelf(key, mode='get', variable=None):
    """Either stores a variable to the shelf or gets one from it.
    Possible modes are 'store' and 'get'"""
    config = shelve.open('Config')
    if mode.strip() == 'get':
        print(config[key])
        return config[key]
    elif mode.strip() == 'store':
        config[key] = variable
        print(key,'holds',variable)
    else:
        print("mode has not been reconginzed. Possible modes:\n\t- 'get'\n\t-'store'")
    config.close()
So, whenever I call this function to store variables and call the function just after that, it works perfectly. I tried to access the shelf manually and everything is there.
This is the code used to store the variables:
WorkShelf('BackUpPath','store', bupath)
WorkShelf('Path2BU', 'store', path)
The problem comes when I try to get my variables from the shelf after restarting the script. This code:
config = shelve.open('Config')
path = config['Path2BU']
bupath = config['BackUpPath']
Gives me this error:
Traceback (most recent call last):
File "C:\Python35-32\lib\shelve.py", line 111, in __getitem__
value = self.cache[key]
KeyError: 'Path2BU'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
config['Path2BU']
File "C:\Python35-32\lib\shelve.py", line 113, in __getitem__
f = BytesIO(self.dict[key.encode(self.keyencoding)])
File "C:\Python35-32\lib\dbm\dumb.py", line 141, in __getitem__
pos, siz = self._index[key] # may raise KeyError
KeyError: b'Path2BU'
Basically, this is an error I could reproduce by calling ShelveObject['ThisKeyDoesNotExist'].
I am really lost right now. When I try to manually create a shelf, close it and access it again, it seems to work now (even though I got an error doing that before). I've read every post concerning this, I've thought about shelf corruption (but it's not likely that it happens every time), and I've read my script from A to Z around 20 times now.
Thanks for any help (and I hope I asked my question the right way this time)!
EDIT
Okay so this is making me crazy. Isolating WorkShelf() does work perfectly, like so:
import shelve

def WorkShelf(key, mode='get', variable=None):
    """Either stores a variable to the shelf or gets one from it.
    Possible modes are 'store' and 'get'"""
    config = shelve.open('Config')
    if mode.strip() == 'get':
        print(config[key])
        return config[key]
    elif mode.strip() == 'store':
        config[key] = variable
        print(key,'holds',variable)
    else:
        print("mode has not been reconginzed. Possible modes:\n\t- 'get'\n\t-'store'")
    config.close()

if False:
    print('Enter path n1: ')
    path1 = input("> ")
    WorkShelf('Path1', 'store', path1)
    print ('Enter path n2: ')
    path2 = input("> ")
    WorkShelf('Path2', 'store', path2)
else:
    path1, path2 = WorkShelf('Path1'), WorkShelf('Path2')
    print (path1, path2)
No problem, perfect.
But when I use the same function in my script, I get this output. It basically tells me it does write the variables to the shelve files ('This key holds this variable' message). I can even call them with the same code I use when restarting. But when calling them after a program reset it's all like 'What are you talking about m8? I never saved these'.
Welcome to this backup manager.
We will walk you around creating your backup and backup preferences
What directory would you like to backup?
Enter path here: W:\Users\Damien\Documents\Code\Code Pyth
Would you like to se all folders and files at that path? (enter YES|NO)
> n
Okay, let's proceed.
Where would you like to create your backup?
Enter path here: N:\
Something already exists there:
19 folders and 254 documents
Would you like to change your location?
> n
Would you like to save this destination (N:\) as your default backup location ? That way you don't have to type it again.
> y
BackUpPath holds N:\
If you're going to be backing the same data up we can save the files location W:\Users\Damien\Documents\Code\Code Pyth so you don't have to type all the paths again.
Would you like me to remember the backup file's location?
> y
Path2BU holds W:\Users\Damien\Documents\Code\Code Pyth
>>>
======== RESTART: W:\Users\Damien\Documents\Code\Code Pyth\Backup.py ========
Welcome to this backup manager.
Traceback (most recent call last):
File "C:\Python35-32\lib\shelve.py", line 111, in __getitem__
value = self.cache[key]
KeyError: 'Path2BU'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "W:\Users\Damien\Documents\Code\Code Pyth\Backup.py", line 198, in <module>
path, bupath = WorkShelf('Path2BU'), WorkShelf('BackUpPath')
File "W:\Users\Damien\Documents\Code\Code Pyth\Backup.py", line 165, in WorkShelf
print(config[key])
File "C:\Python35-32\lib\shelve.py", line 113, in __getitem__
f = BytesIO(self.dict[key.encode(self.keyencoding)])
File "C:\Python35-32\lib\dbm\dumb.py", line 141, in __getitem__
pos, siz = self._index[key] # may raise KeyError
KeyError: b'Path2BU'
Please help, I'm going crazy and might develop a class to store and get variables from text files.
Okay, so I just discovered what went wrong.
I had an os.chdir() call in my setup code. Whenever setup was done and I wanted to open my config files, the script would look in the current directory, but the shelves were in the directory that os.chdir() had pointed to during setup.
For some reason I ended up with empty shelves in the directory where the actual ones were supposed to be. That cost me a day of debugging.
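A small sketch of the kind of fix that avoids this (my own illustration: build an absolute path next to the script so that later os.chdir() calls don't matter; CONFIG_PATH is an illustrative name):
import os
import shelve

# anchor the shelf next to this script instead of the current working directory
CONFIG_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'Config')

config = shelve.open(CONFIG_PATH)
config['Path2BU'] = r'W:\Users\Damien\Documents\Code\Code Pyth'
config.close()

os.chdir(os.path.expanduser('~'))   # even after changing the working directory...
config = shelve.open(CONFIG_PATH)
print(config['Path2BU'])            # ...the same shelf is found
config.close()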
That's all for today folks!
