How to reload keynotes using pyRevit - python

I am attempting to modify the extremely helpful open keynotes button script to create a 'reload keynotes' button.
Currently I am trying to use the Reload method of the KeyBasedTreeEntryTable class.
kt = DB.KeynoteTable.GetKeynoteTable(revit.doc)
kt_ref = kt.GetExternalFileReference()
path = DB.ModelPathUtils.ConvertModelPathToUserVisiblePath(
    kt_ref.GetAbsolutePath()
)
reloader = DB.KeyBasedTreeEntryTable.Reload()
if not path:
    forms.alert('No keynote file is assigned.')
else:
    reloader
This is the error message that I am receiving:
TypeError: Reload() takes exactly 2 arguments (0 given)
I am stuck here and would appreciate any help.

You can use the Revit API to reload the keynotes. The KeyBasedTreeEntryTable.Reload method just needs a parameter in which to store any warnings thrown during the operation; to keep things simple, this parameter can be None.
Also, Reload is not a static method, so it must be called on a KeyBasedTreeEntryTable instance.
The nice part is that you don't need to find a separate KeyBasedTreeEntryTable instance: the KeynoteTable class inherits from KeyBasedTreeEntryTable, so the Reload method is already available on the kt instance in your script.
(The operation also needs a transaction context, as in the following examples.)
Simple way
kt = DB.KeynoteTable.GetKeynoteTable(revit.doc)
t = DB.Transaction(revit.doc)
t.Start('Keynote Reload')
result = None  # stays None if Reload throws before assigning
try:
    result = kt.Reload(None)
    t.Commit()
except:
    t.RollBack()
forms.alert('Keynote Reloading : {}'.format(result))
# result can be 'Success', 'ResourceAlreadyCurrent' or 'Failure'
Complete way
kt = DB.KeynoteTable.GetKeynoteTable(revit.doc)
# create results object
res = DB.KeyBasedTreeEntriesLoadResults()
t = DB.Transaction(revit.doc)
t.Start('Keynote Reload')
result = None  # stays None if Reload throws before assigning
try:
    result = kt.Reload(res)  # pass results object
    t.Commit()
except:
    t.RollBack()
# read results
failures = res.GetFailureMessages()
syntax_err = res.GetFileSyntaxErrors()
entries_err = res.GetKeyBasedTreeEntryErrors()
# res.GetFileReadErrors() returns file errors, which should already be in the failure messages
warnings = ''
warnings += '\n'.join([message.GetDescriptionText() for message in failures])
if syntax_err:
    warnings += '\n\nSyntax errors in the files :\n'
    warnings += '\n'.join(syntax_err)
if entries_err:
    warnings += '\nEntries with error :\n'
    warnings += '\n'.join([key.GetEntry().Key for key in entries_err])
forms.alert('Keynote Reloading : {}\n{}'.format(result, warnings))

Related

Python crashes when assert fails

I'm trying to use Playwright and pytest to test my website. My code keeps crashing when the check fails. Below I've explained the things I've tried and why each one crashes. I could hack together a solution that wouldn't crash (initialise test first), but it's sloppy and I know there's a much better way to do it.
My question: when an org is successfully created, a popup appears saying 'Organisation has been updated'. How can I assert that this element has appeared, without crashing if it doesn't? Thank you.
My first attempt. This crashes because page.locator couldn't find the element it was expecting:
test = page.locator("text=Organisation has been updated").inner_html()
assert test != ''
My second attempt. This crashes as test is undefined
try:
    test = page.locator("text=Organisation has been updated").inner_html()
except:
    pass
assert test != ''
Edit: added my pytest code.
import pytest
import pyotp

# Super admin can add, edit and delete an organisation
def test_super_admin_add_organisation(page):
    page.goto("https://website/login/")
    page.locator("[aria-label=\"Email\"]").click()
    [...]
    page.wait_for_url("https://website.com.au/two-factor-verification/")
    # Two-Factor Authentication
    totp = pyotp.TOTP("otp")
    page.locator("[aria-label=\"Two Factor Pin\"]").fill(totp.now())
    with page.expect_navigation():
        page.click('button:has-text("Verify")')
    page.wait_for_url("https://website.com.au/organisations/")
    page.locator("button:has-text(\"add\")").click()
    page.locator("[aria-label=\"Organisation\"]").click()
    page.locator("[aria-label=\"Organisation\"]").fill("test_org")
    [...]
    page.locator("button:has-text(\"save\")").click()
    try:
        org_added = ''
        org_added = page.locator("text=Organisation has been updated. close").inner_html()
    except:
        pass
    assert org_added != ''
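For reference, a sketch of one possible direction, assuming a Playwright version recent enough to ship the sync expect assertions (the test name and timeout below are only illustrative):
from playwright.sync_api import expect

def test_org_created_popup(page):
    # ... same login and form-filling steps as above ...
    page.locator("button:has-text(\"save\")").click()
    # expect() keeps retrying until the locator is visible or the timeout
    # expires, then raises AssertionError -- a normal pytest failure rather
    # than an unhandled exception from inner_html().
    expect(page.locator("text=Organisation has been updated")).to_be_visible(timeout=5000)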

How to dynamically reload function in Python?

I'm trying to create a process that dynamically watches Jupyter notebooks, compiles them on modification and imports them into my current file. However, I can't seem to execute the updated code; it only executes the first version that was loaded.
There's a file called producer.py that calls this function repeatedly:
import fs.fs_util as fs_util

while True:
    fs_util.update_feature_list()
In fs_util.py I do the following:
from fs.feature import Feature
import inspect
from importlib import reload
import os

def is_subclass_of_feature(o):
    return inspect.isclass(o) and issubclass(o, Feature) and o is not Feature

def get_instances_of_features(name):
    module = __import__(COMPILED_MODULE, fromlist=[name])
    module = reload(module)
    feature_members = getattr(module, name)
    all_features = inspect.getmembers(feature_members, predicate=is_subclass_of_feature)
    return [f[1]() for f in all_features]
This function is called by:
def update_feature_list(name):
    os.system("jupyter nbconvert --to script {}{} --output {}{}"
              .format(PATH + "/" + s3.OUTPUT_PATH, name + JUPYTER_EXTENSION, PATH + "/" + COMPILED_PATH, name))
    features = get_instances_of_features(name)
    for f in features:
        try:
            feature = f.create_feature()
        except Exception as e:
            print(e)
There is other, irrelevant code that checks whether a file has been modified, etc.
I can tell the file is being reloaded correctly, because when I use inspect.getsource(f.create_feature) on the class it displays the updated source code; however, during execution it returns older values. I've verified this by changing print statements as well as comparing the return values.
Also, for some more context, this is the file I'm trying to import:
from fs.feature import Feature

class SubFeature(Feature):
    def __init__(self):
        Feature.__init__(self)

    def create_feature(self):
        return "hello"
I was wondering what I was doing incorrectly?
So I found out what I was doing wrong.
When calling reload I was reloading the module I had newly imported, which was fairly idiotic I suppose. The correct solution (in my case) was to reload the module from sys.modules, so it would be something like reload(sys.modules[COMPILED_MODULE + "." + name]).
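A minimal sketch of that fix applied to get_instances_of_features (reusing COMPILED_MODULE and is_subclass_of_feature from the question, and assuming name is a submodule of that package):
import sys
import inspect
from importlib import import_module, reload

def get_instances_of_features(name):
    full_name = COMPILED_MODULE + "." + name
    import_module(full_name)                 # ensure the submodule is registered in sys.modules
    module = reload(sys.modules[full_name])  # reload the submodule itself, not the parent package
    all_features = inspect.getmembers(module, predicate=is_subclass_of_feature)
    return [f[1]() for f in all_features]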

Control Flow issue: Python function called but not executed

I have the strangest problem I have ever encountered.
I have a part of my code that looks like this:
class AzureDevOpsServiceError(Exception):
    pass

skip = ["auto"]

def retrieve_results():
    print(variable_not_defined)
    ...  # some useful implementation

if not "results" in skip:
    try:
        print("before")
        retrieve_results()
        print("after")
    except AzureDevOpsServiceError as e:
        print(f"Error raised: {e}")
Obviously, this should raise an error because variable_not_defined is, well, not defined.
However, for some strange reason, the code runs without complaint and prints
before
after
I have tried calling the function with an argument (retrieve_results(1234)) and adding an argument to the function definition (def retrieve_results(arg1) with retrieve_results()): both modifications trigger an exception, so the function is clearly being called.
Has anyone run into a similar issue and knows what is happening?
FYI: this is actually what my implementation looks like:
from azure.devops.exceptions import AzureDevOpsServiceError
import logging

def _retrieve_manual_results(connect: Connectivity, data: DataForPickle) -> None:
    """Retrieve the list of Test Results"""
    print("G" + ggggggggggggggggggggggggggggggggggggg)
    logger = connect.logger
    data.run_in_progress = [165644]

if __name__ == "__main__":
    p = ...
    connect = ...
    data = ...
    if not "results" in p.options.skip:
        try:
            print("........B.........")
            _retrieve_manual_results(connect, data)
            print("........A.........")
        except AzureDevOpsServiceError as e:
            logging.error(f"E004: Error while retrieving Test Results: {e}")
            logging.debug("More details below...", exc_info=True)
As highlighted by @gmds, it was a caching problem.
Deleting the .pyc file didn't do much.
However, I have found a workaround:
Renaming the function (e.g. adding _)
Running the program
Renaming it back (i.e. removing the _ from the previous step)
Now the issue is solved.
If anyone knows what is going on behind the scenes, I am very interested.
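For anyone hitting something similar, a minimal sketch of how to check which source file and compiled bytecode Python actually used (my_module is a hypothetical stand-in for wherever the function really lives):
import importlib
import inspect

import my_module  # hypothetical module containing the suspect function

print(my_module.__file__)                      # source file Python associated with the module
print(getattr(my_module, '__cached__', None))  # compiled .pyc actually used, if any
print(inspect.getsource(my_module))            # the source as it currently is on disk

importlib.invalidate_caches()                  # drop importlib's cached finder data
importlib.reload(my_module)                    # re-execute the module body from disk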

Django server crashes with exit codes 139, 77

Foreword
Okay, I have a really complex performance issue. I'm building a content management system and one of the features should be generating tons of .docx files with different templates. I started with Webodt + Abiword, but then the templates got too complex, so I had to switch my backend to Templated-docs + LibreOffice. This is where my problems started.
I use:
Python 2.7.12
Django==1.8.2
templated-docs==0.2.9
LibreOffice 5.1.5.2
Ubuntu 16.04
The actual problem
I have an API which handles .docx rendering. I will show one of the views as an example; they are all pretty similar:
#permission_classes((permissions.IsAdminUser,))
class BookDocxViewSet(mixins.RetrieveModelMixin, viewsets.GenericViewSet):
    def retrieve(self, request, *args, **kwargs):
        queryset = Pupils.objects.get(id=kwargs['pk'])
        serializer = StudentSerializer(queryset)
        context = dict(serializer.data)
        doc = fill_template('crm/docs/book.ott', context, output_format='docx')
        p = u'docs/books/%s/%s_%s_%s.doc' % (datetime.now().date(), context[u'surname'], context[u'name'], datetime.now().date())
        with open(doc, 'rb') as f:
            content = f.read()
            path = default_storage.save(p, ContentFile(content))
            f.close()
        return response.Response(u'/media/' + path)
When I call it the first time, it creates a .docx file, saves it to my default_storage and then returns a download link. But when I try to do it again, or do it with another method (which works with another template and context), my server just crashes without any logs. The last thing I see is either:
Process finished with exit code 77 if I call it with a little delay (more than one second)
Process finished with exit code 139 (interrupted by signal 11: SIGSEGV) if I call my method for the second time right away (in less than one second)
I tried to use a debugger; it said that my server crashes on this line:
doc = fill_template('crm/docs/book.ott', context, output_format='docx')
I bet what happens is:
When I call my method the first time, templated_docs starts the LibreOffice backend and then does not stop it.
When I call my method the second time, templated_docs tries to start the LibreOffice backend again, but it is already busy.
Questions
How do I debug LibreOffice to prove / refute my theory? (I guess I need to debug templated_docs instead; one quick check is sketched after this list)
Why do I get different exit codes depending on the delay?
Is this enough grounds to open an issue on GitHub?
How do I fix that?
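Regarding the first question, here is a rough sketch of one way to check whether a LibreOffice process is left running after the first request (it assumes a Linux host, as in the setup above):
import subprocess

# List any soffice processes that survived the first request.
try:
    print(subprocess.check_output(['pgrep', '-a', 'soffice']))
except subprocess.CalledProcessError:
    print('no soffice processes found')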
UPD
It is not an issue with REST Framework or with not using FileResponse().
I already tried to test it with a regular view:
def get_document(request, *args, **kwargs):
    context = Pupils.objects.get(id=kwargs['pk']).__dict__
    doc = fill_template('crm/docs/book.ott', context, output_format='docx')
    p = u'%s_%s_%s' % (context[u'surname'], context[u'name'], datetime.now().date())
    return FileResponse(doc, p)
And the problem is the same.
UPD 2
Okay. This line is crashing my server:
# pylokit/lokit.py
self.lokit = lo.libreofficekit_hook(six.b(lo_path))
Okay, that was a bug in templated_docs. I was right: it happens because templated_docs tries to start LibreOffice twice. As the pylokit documentation says:
The use of _exit() instead of default exit() is required because in
some circumstances LibreOffice segfaults on process exit.
It means that the process that used pylokit should be killed afterwards. But we cannot kill the Django server, so I decided to use multiprocessing:
# templated_docs/__init__.py
# (needs: from multiprocessing import Process, Pipe)
if source_extension[1:] != output_format:
    lo_path = getattr(
        settings,
        'TEMPLATED_DOCS_LIBREOFFICE_PATH',
        '/usr/lib/libreoffice/program/')

    def f(conn):
        with Office(lo_path) as lo:
            conv_file = NamedTemporaryFile(delete=False,
                                           suffix='.%s' % output_format)
            with lo.documentLoad(str(dest_file.name)) as doc:
                doc.saveAs(conv_file.name)
            os.unlink(dest_file.name)
            conn.send(conv_file.name)
            conn.close()

    parent_conn, child_conn = Pipe()
    p = Process(target=f, args=(child_conn,))
    p.start()
    conv_file_name = parent_conn.recv()
    p.join()
    return conv_file_name
else:
    return dest_file.name
I opened an issue and made a pull request.

Error while working with Excel using Python

While my script is updating one Excel file, if I do any other manual work in another Excel file at the same time, an error occurs. I am using Dispatch:
from win32com.client import Dispatch
excel = Dispatch('Excel.Application')
excel.Visible = True
file_name="file_name.xls"
workbook = excel.Workbooks.Open(file_name)
workBook = excel.ActiveWorkbook
sheet=workbook.Sheets(sheetno)
I am getting an error like this:
com_error(-2147418111, 'Call was rejected by callee.', None, None)
Is there any way to overcome it? Can I update another Excel file without getting this error?
I encountered this same issue recently. While it sounds like there can be multiple root causes, my situation was occurring because Python was making subsequent calls too quickly for Excel to keep up, particularly with external query refreshes. I resolved this intermittent "Call was rejected by callee" error by inserting time.sleep() between most of my calls and increasing the sleep argument for any calls that are particularly lengthy (usually between 7 and 15 seconds). This allows Excel the time to complete each command before Python issues additional commands.
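A rough sketch of that pacing approach, with a placeholder path and a workbook refresh standing in for whatever lengthy operations the script performs:
import time
from win32com.client import Dispatch

excel = Dispatch('Excel.Application')
excel.Visible = True

workbook = excel.Workbooks.Open(r'C:\path\to\file_name.xls')  # placeholder path
time.sleep(2)            # give Excel a moment to finish opening

workbook.RefreshAll()    # external query refreshes are the slow part here
time.sleep(15)           # longer pause for particularly lengthy operations

workbook.Save()
time.sleep(2)
excel.Quit()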
This error occurs because the COM object you're calling will reject an external call if it's already handling another operation. There is no asynchronous handling of calls and the behavior can seem random.
Depending on the operation you'll see either pythoncom.com_error or pywintypes.com_error. A simple (if inelegant) way to work around this is to wrap your calls into the COM object with try-except and, if you get one of these access errors, retry your call.
For some background see the "Error Handling" section of the chapter 12 excerpt from Python Programming on Win32 by Mark Hammond & Andy Robinson (O'Reilly 2000).
There's also some useful info specifically about Excel in Siew Kam Onn's blog post "Python programming with Excel, how to overcome COM_error from the makepy generated python file".
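A bare-bones sketch of that retry idea (the ComWrapper answer below is a more general version of the same pattern); the retry budget and delay are arbitrary:
import time
from pywintypes import com_error

def call_with_retry(com_call, *args, **kwargs):
    """Retry a COM call while Excel answers 'Call was rejected by callee.'"""
    for _ in range(20):                   # arbitrary retry budget
        try:
            return com_call(*args, **kwargs)
        except com_error:
            time.sleep(0.5)               # give Excel time to become idle again
    return com_call(*args, **kwargs)      # final attempt; let the error propagate

# usage, e.g.:
# workbook = call_with_retry(excel.Workbooks.Open, r'C:\path\to\file_name.xls')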
I have been struggling with the same problem, but now I have made a solution that works for me so far.
I created a class, ComWrapper, that I wrap the Excel COM object in. It automatically wraps every nested object and call in ComWrapper, and unwraps them when they are used as arguments to function calls or assignments to wrapped objects. The wrapper works by catching the "Call was rejected by callee" exceptions and retrying the call until the timeout defined at the top is reached. If the timeout is reached, the exception is finally raised outside the wrapper object.
Function calls to wrapped objects are automatically wrapped by a function _com_call_wrapper, which is where the magic happens.
To make it work, just wrap the com object from Dispatch using ComWrapper and then use it as usual, like at the bottom of the code. Comment if there are problems.
import win32com.client
from pywintypes import com_error
import time
import logging

_DELAY = 0.05  # seconds
_TIMEOUT = 60.0  # seconds


def _com_call_wrapper(f, *args, **kwargs):
    """
    COMWrapper support function.
    Repeats calls when 'Call was rejected by callee.' exception occurs.
    """
    # Unwrap inputs
    args = [arg._wrapped_object if isinstance(arg, ComWrapper) else arg for arg in args]
    kwargs = dict([(key, value._wrapped_object)
                   if isinstance(value, ComWrapper)
                   else (key, value)
                   for key, value in dict(kwargs).items()])

    start_time = None
    while True:
        try:
            result = f(*args, **kwargs)
        except com_error as e:
            if e.strerror == 'Call was rejected by callee.':
                if start_time is None:
                    start_time = time.time()
                    logging.warning('Call was rejected by callee.')
                elif time.time() - start_time >= _TIMEOUT:
                    raise
                time.sleep(_DELAY)
                continue
            raise
        break

    if isinstance(result, win32com.client.CDispatch) or callable(result):
        return ComWrapper(result)
    return result


class ComWrapper(object):
    """
    Class to wrap COM objects to repeat calls when 'Call was rejected by callee.' exception occurs.
    """

    def __init__(self, wrapped_object):
        assert isinstance(wrapped_object, win32com.client.CDispatch) or callable(wrapped_object)
        self.__dict__['_wrapped_object'] = wrapped_object

    def __getattr__(self, item):
        return _com_call_wrapper(self._wrapped_object.__getattr__, item)

    def __getitem__(self, item):
        return _com_call_wrapper(self._wrapped_object.__getitem__, item)

    def __setattr__(self, key, value):
        _com_call_wrapper(self._wrapped_object.__setattr__, key, value)

    def __setitem__(self, key, value):
        _com_call_wrapper(self._wrapped_object.__setitem__, key, value)

    def __call__(self, *args, **kwargs):
        return _com_call_wrapper(self._wrapped_object.__call__, *args, **kwargs)

    def __repr__(self):
        return 'ComWrapper<{}>'.format(repr(self._wrapped_object))


_xl = win32com.client.dynamic.Dispatch('Excel.Application')
xl = ComWrapper(_xl)

# Do stuff with xl instead of _xl, and calls will be attempted until the timeout is
# reached if "Call was rejected by callee."-exceptions are thrown.
I gave the same answer to a newer question here:
https://stackoverflow.com/a/55892457/2828033
I run intensive Excel sheets which consistently show this (blocking) error whilst the calculation cycle runs.
The solution is to use a retry loop.
Here is the section of my code that works for me:
import xlwings as xw

# it failed, keep trying
attempt_number = 0
reading_complete = False
while reading_complete == False:
    try:
        workbook = xw.Book(file_to_read)
        reading_complete = True
        print('file read...')
    except:
        reading_complete = False
        attempt_number += 1
        print('attempt:', attempt_number)
        if attempt_number > 5:
            print('no good: exiting the process')
            exit()
Where:
file_to_read is the full path and name of the Excel workbook.
The attempt_number variable is set to limit the number of attempts.
