I've got a LibreOffice python script that uses serial IO. On my systems, opening a serial port is a very slow process (around 1 second), so I'd like to keep the serial port open, and just send stuff as required.
But LibreOffice Python apparently reloads the Python framework every time a call is made, unlike most Python implementations, where the process is persistent and un-enclosed code in a module runs only once, when the module is imported.
Is there a way in LibreOffice python to persist objects between calls?
SerialObject = None

def return_global():
    return str(SerialObject)  # always returns "None"

def init_serial_object():
    SerialObject = True
It looks like there is a simple bug: add global SerialObject and then it works.
However, it may be that the real problem is your setup. Here is a working example. Put the code in a file called inctest.py under $LIBREOFFICE_USER_DIR/Scripts/python/pythonpath/incmod/.
SerialObject = None

def init_serial_object():
    global SerialObject
    SerialObject = True

def return_global():
    return str(SerialObject)

def moduleVersion():
    return "2.0"  # change to verify that this is the most recently updated code
The code to call it should be located in the user profile directory (that is, location=user).
from incmod import inctest

# msgbox is assumed to be a message box helper available in your environment
inctest.init_serial_object()
msgbox("{}: {}".format(
    inctest.moduleVersion(), inctest.return_global()))
Run the script by going to Tools > Macros > Run Macro and find it under My Macros.
Be aware that inctest.py will not always get reloaded. There are two ways to reload it: restart LibreOffice, or force python to reload it by doing del sys.modules[mod].
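For example, a minimal sketch of the second approach, assuming the module name from the example above:

import sys

if "incmod.inctest" in sys.modules:
    del sys.modules["incmod.inctest"]  # the next import re-executes the module
from incmod import inctest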
The moduleVersion() function is not necessary, but it helps you see when the module is getting reloaded: make changes to that line and then check whether the output changes.
I have a program that, for data security reasons, should never persist anything to local storage if deployed in the cloud. Instead, any input / output needs to be written to the connected (encrypted) storage instead.
To allow deployment locally as well as to multiple clouds, I am using the very useful fsspec. However, other developers are working on the project as well, and I need a way to make sure that they aren't accidentally using local File I/O methods - which may pass unit tests, but fail when deployed to the cloud.
For this, my idea is basically to mock/replace any I/O methods in pytest with ones that don't work, making the test fail. However, this is probably not straightforward to implement. I am wondering whether anyone else has had this problem, and whether best practices or a library for this already exist.
During my research, I found pyfakefs, which looks like it is very close to what I am trying to do, except I don't want to simulate another file system; I want there to be no local file system at all.
Any input appreciated.
You cannot use any pytest add-ons to make this secure. There will always be ways around it: even if you patch everything in the standard Python library, the code can always use third-party C libraries, which can't be patched from the Python side.
Even if you somehow restricted every way the Python process can write a file, it could still ask the OS or another process to write something on its behalf.
The only real options are to run only trusted code or to run the process in some kind of sandbox.
On Unix-like operating systems, a workable solution may be to create a chroot and run the program inside it.
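For instance, a bare-bones sketch of the chroot approach (the path is hypothetical, and os.chroot requires root privileges):

import os

os.chroot("/srv/sandbox")  # requires root; new filesystem root for this process
os.chdir("/")              # make sure the cwd is inside the new root
# run the untrusted code here; it only sees files under /srv/sandbox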
If you are OK with just preventing files from being opened via the open function, you can patch that function in the builtins module.
import builtins

import pytest

_original_open = builtins.open

class FileSystemUsageError(Exception):
    pass

def patched_open(*args, **kwargs):
    raise FileSystemUsageError()

@pytest.fixture
def disable_fs():
    # Swap out the built-in open for the duration of the test.
    builtins.open = patched_open
    yield
    builtins.open = _original_open
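A test using the fixture might then look like this (the test body is a hypothetical illustration, assuming the fixture and exception are in scope, e.g. via conftest.py):

def test_no_local_io(disable_fs):
    with pytest.raises(FileSystemUsageError):
        open("somefile.txt", "w")  # any direct use of open now raises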
I based this example on a pytest plugin written by the company I currently work for, which prevents network use in pytest. You can see the full example here: https://github.com/best-doctor/pytest_network/blob/4e98d816fb93bcbdac4593710ff9b2d38d16134d/pytest_network.py
I recently found out about the new sys.addaudithook in Python. I run a service that involves running some untrusted scripts, and I want to use audit hooks to further secure the system. Scripts run in an isolated container anyway, but extra protection doesn't hurt.
Right now my code is this:
import sys

def audithook(event, args):
    # str.startswith accepts a tuple of prefixes
    blocked = ('os.', 'sys.', 'socket.', 'winreg.', 'webbrowser.', 'shutil.')
    if event.startswith(blocked) or event == "import":
        raise RuntimeError("Attempt to break out of sandbox")

# Some code here that explicitly requires filesystem access

sys.addaudithook(audithook)

import untrusted_module

# More code that uses other modules.
# Results need to eventually be saved to a file; that seems to work fine if I
# open the file before creating the hook and just write to it later. This is
# why I can't protect the entire program.
However, importing the file triggers quite a few hooks in the process of compiling the other file to bytecode and running it. I want to only prevent file system access inside the script itself and not the process of importing it.
I could try to set it to ignore some number N of events before blocking new ones, but the number of audit events seems to vary significantly depending on whether the script changed and how recent the stored bytecode is.
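A variant of that idea, sketched here as an untested assumption: gate the hook on a module-level flag that only turns on after the import finishes. Top-level code in the untrusted module would still run unprotected during the import itself:

import sys

_enforcing = False

def audithook(event, args):
    if not _enforcing:
        return  # ignore everything raised while importing
    blocked = ('os.', 'sys.', 'socket.', 'winreg.', 'webbrowser.', 'shutil.')
    if event.startswith(blocked) or event == "import":
        raise RuntimeError("Attempt to break out of sandbox")

sys.addaudithook(audithook)

import untrusted_module  # import-time events pass through

_enforcing = True  # from here on, the hook enforces the policy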
The entire audit-hook feature seems to be poorly documented, and I can find very little about it online. Does anyone know how to use it to secure a single module?
I'm building an Electron Python application and I'm using the python-shell module. The Electron application is supposed to log any messages my Python script prints to the console, but rather than printing each message when it's supposed to be printed, it waits until the script has finished executing and then prints everything. I've tried using sys.stdout.write("message") followed by sys.stdout.flush(), but it still doesn't work.
The question I'm linking has a similar problem to mine, but the answer that worked for them didn't work for me in the Electron application. It's flushing properly on the Python backend; the frontend is what's causing the problem.
Similar question: Python sys.stdout.flush() doesn't work
file.flush() does not necessarily write the file's data to disk. You need to use flush() followed by os.fsync(fd) to ensure this behavior.
See below:
import os
import sys

sys.stdout.flush()
os.fsync(sys.stdout.fileno())
From the os.fsync(fd) documentation in the Python docs:
Force write of file with file descriptor fd to disk. On Unix, this calls the native fsync() function; on Windows, the MS _commit() function.
If you're starting with a Python file object f, first do f.flush(), and then do os.fsync(f.fileno()), to ensure that all internal buffers associated with f are written to disk.
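Applied to an ordinary file object, the same pattern looks like this (the file name is just an illustration):

import os

with open("log.txt", "w") as f:
    f.write("message\n")
    f.flush()              # push Python's internal buffer to the OS
    os.fsync(f.fileno())   # ask the OS to flush its buffers to disk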
I was thinking about how to execute code only once in Python. What I mean is setup code, like when you set up software: it happens once, and when you start the program again it remembers that setup is already done.
So, in a sense, I want Python to execute a function once and never execute it again, even if the program is restarted.
You could create a file once setup is complete, for example an empty .txt file, and then check whether it exists when the program runs; if it doesn't, run setup.
To check whether a file exists you can use os.path, like so:
import os.path

file_path = "setup_done.txt"  # marker file; any path works

if not os.path.exists(file_path):
    # run start-up script
    file = open(file_path, "w")  # creates the marker file; you could also write to it
    file.close()
In addition to the method already proposed, you may use pickle to save boolean variables representing whether some functions were executed (useful if you have multiple checks to carry out):
import pickle

f1_executed = True
f2_executed = False
pickle.dump([f1_executed, f2_executed], open("executed.pkl", mode='wb'))

##### Program Restarted #####

f1_executed, f2_executed = pickle.load(open("executed.pkl", mode='rb'))
If you need a kind of setup script to install a program or to set up your operating system's environment, then I would go even further. Imagine that your setup becomes inconsistent in the meantime and the program no longer works properly. Then it would be good to provide the user with a script to repair that.
If the script is executed a second time, it can:
either check whether the setup is correct and print an error message to the user if it has become inconsistent in the meantime,
or check the setup and repair it automatically if it is inconsistent.
Just reading a text file or something similar (e.g. storing a key in the Windows registry) may put you in a situation where the setup has become inconsistent, but your setup script says that everything is fine, because the text file (or the registry key) has been found.
Furthermore, if you do it this way, it is also easy to "uninstall" your program: since you know exactly what has been changed during installation, you can revert it with an uninstall script.
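As a minimal sketch of that check-and-repair idea (the file name and its contents are purely illustrative):

import os.path

CONFIG_FILE = "app.conf"  # hypothetical artifact that setup creates

def setup_is_consistent():
    # Verify the actual artifacts instead of trusting a "setup done" marker.
    return os.path.exists(CONFIG_FILE)

def repair_setup():
    with open(CONFIG_FILE, "w") as f:
        f.write("defaults\n")

if not setup_is_consistent():
    repair_setup()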
I'm running a script which loads a huge amount of data using pickle.
Because of this big amount of data, running the script takes a lot of time, which in turn makes it very hard to work with (especially to debug).
To solve the problem above, I thought about passing some of the variables defined in the console to the script. This would allow me to load the pickles only once and just pass them to the script every time I want to use their data.
I tried to find a way to do this but couldn't find any.
Is there any way of passing console variables to a script?
Never mind; I can just create a function in the script and call it from the console instead of having a __main__ block.
For example, for a script A.py, add a function b(params) and then in the console just run:
from <pathToA>.A import *
b(params)
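For the pickle use case above, that might look like this (names and paths are illustrative):

# A.py
import pickle

def load_data(path):
    # Run this once in the console and keep the result around.
    with open(path, "rb") as f:
        return pickle.load(f)

def b(data):
    ...  # work with the already-loaded data

# In the console:
#   from A import *
#   data = load_data("big.pkl")
#   b(data)  # call b repeatedly without reloading the pickle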