Communication between Python and VBA

I am trying to design a program in VBA (used within a simulation software) that can call Python for some calculations and receive the result before proceeding. The VBA-Python call will happen many times.
My first idea is based on a text file communication, e.g. something like this:
in VBA:
    do something
    write text file 'calc_todo.txt' in a specific directory
    while not exists 'calc_finished.txt':
        wait 1 second
    read 'calc_finished.txt'
    delete 'calc_finished.txt'
    delete 'calc_todo.txt'
    do something
    write text file 'calc_todo.txt' in a specific directory
    ... repeat
in Python:
    do something
    while not exists 'calc_todo.txt':
        wait 1 second
    read 'calc_todo.txt'
    do calculations based on 'calc_todo.txt'
    write 'calc_finished.txt'
    delete 'calc_todo.txt'
    do something
    while not exists 'calc_todo.txt':
        wait 1 second
    ... repeat
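The Python side of the loop above can be sketched as a runnable script (a minimal sketch; `do_calculation` is a stand-in for the real work, and the file names follow the pseudocode):

```python
import os
import time

TODO = "calc_todo.txt"
DONE = "calc_finished.txt"

def do_calculation(task):
    # Stand-in for the real calculation.
    return task.upper()

def serve_one_request():
    # Block until VBA has written the request file.
    while not os.path.exists(TODO):
        time.sleep(1)
    with open(TODO) as f:
        task = f.read()
    # Write the result, then remove the request so the next poll waits.
    with open(DONE, "w") as f:
        f.write(do_calculation(task))
    os.remove(TODO)
```

One caveat with this scheme: os.path.exists can see the file before the other side has finished writing it, so writing to a temporary name and renaming it into place is the usual way to avoid that race.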
I have done something similar in the past, and unfortunately there are several things I do not like about it:
a fixed waiting time of e.g. 1 second may slow down performance
if something breaks, VBA and/or Python will get stuck in a while loop or run into an error
to fix the second issue, error handling with initialisation can be implemented, but last time it was a mess
What would be a more professional way on how to handle such communication?

Related

How can I initialise an object to be used in multiple calls to Python from the command line

I have a script I've written that uses a very large object. I load the object with pickle, but it takes quite a few seconds to do so. That's not a big deal if it has to happen once or twice, but I'm hoping to use the code many hundreds or thousands of times!
I think my issue is that I'd like to almost 'leave' the object alive and then be able to call it from the command line whenever I need it. I'm reasonably new to Python so I'm not sure how possible that is; sorry if I haven't used the right terminology in my question. I'm writing and running my Python in Spyder at the moment, but eventually I'd like to run it on a server, calling the code as and when required.
If your script is looping over the python program, move the loop inside the program.
If on the other hand, you want to be able to use the large object on demand, you probably need a client/server configuration. Thriftpy is a very simple way to achieve this. The thriftpy server will hold the object and the processing logic, and the client will be a command line script that will call the server and pass whatever parameters you need to process the object.
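The same client/server pattern can be sketched with only the standard library (thriftpy adds a proper RPC layer on top of this idea). Here BIG_OBJECT is a stand-in for the large pickled object; it is loaded once when the server starts, and each client call only pays the cost of one request:

```python
import pickle
import socket
import socketserver
import threading

# Stand-in for the large object, e.g. pickle.load(open("big.pkl", "rb")).
BIG_OBJECT = {"answer": 42}

class ObjectServer(socketserver.ThreadingTCPServer):
    allow_reuse_address = True

class Handler(socketserver.StreamRequestHandler):
    def handle(self):
        # One newline-terminated key per connection; reply with a pickle.
        key = self.rfile.readline().decode().strip()
        result = BIG_OBJECT.get(key)  # the processing logic lives server-side
        self.wfile.write(pickle.dumps(result))

def query(key, port):
    # Client side: connect, send the key, read the reply until EOF.
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(key.encode() + b"\n")
        data = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            data += chunk
    return pickle.loads(data)
```

The command-line script then reduces to a call to query(), while the expensive load happens only once in the server process.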

Python: call and run a process with input files

I am looking for a solution to run a process with input files in Python:
in my script I call a process using subprocess:
import subprocess as sp
sp.call([r'C:\EnergyPlusV8-8-0\EP-Launch.exe'])  # raw string so the backslashes are kept literally
So the program I would like to launch opens, but then I need to choose 2 input files and press the button "Simulate..." to execute the program (EnergyPlus).
***comment:
I mean, after those code lines the interface of the program (EnergyPlus) is open; then I choose in that window which input files the program has to use. After that, in the same interface, I start the simulation. I want to do these steps purely in the Python code, without interacting with the EnergyPlus interface. I hope this clarifies the ambiguities.
I would like to do these last steps automatically (knowing the input files' location) in the Python code.
How can I do this?
You won't be able to do this unless EnergyPlus is providing some kind of API, or you are prepared to write UI manipulation code, which would really depend on the type of application it is. Without more information I'm going to have to say what you want to do is not possible.
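That said, EnergyPlus does ship a command-line executable (energyplus.exe, installed alongside EP-Launch.exe in recent releases) that takes the input and weather files as arguments, so the GUI can be skipped entirely. The paths below are examples and the flags are assumptions to verify against `energyplus --help` for your version:

```python
import subprocess as sp

def build_energyplus_cmd(exe, idf, weather, outdir):
    # -w selects the weather file, -d the output directory,
    # and the IDF input file comes last.
    return [exe, "-w", weather, "-d", outdir, idf]

cmd = build_energyplus_cmd(
    r"C:\EnergyPlusV8-8-0\energyplus.exe",  # hypothetical install path
    r"C:\sims\building.idf",                # your IDF input file
    r"C:\sims\city.epw",                    # your weather file
    r"C:\sims\out",                         # where results should go
)
# sp.run(cmd, check=True)  # uncomment to actually launch the simulation
```

check=True makes the call raise CalledProcessError on a non-zero exit code, so a failed simulation does not pass silently.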

Using (conf) file to exchange variables between python scripts?

I'd like to have one python script running (sometimes) that will read button presses (from a wiimote) and update a conf file such that another python script can use that file to adjust lighting brightness.
I'm guessing the best way to have two python scripts exchange/share variables would be to use sockets between the two scripts, or perhaps an intermediate SQL db, yes?
(Edit: I guess I'm looking to share these variables between more than two scripts.)
If so, I expect I'll eventually get there. Ultimately my setup will span multiple Pi nodes and probably involve hundreds of such variables updating at various frequencies, so I'd like some input on long-term solutions that will handle this at scale.
But I'm relatively new, and was hoping I could just get this relatively simple solution working well enough with ConfigParser to troubleshoot other aspects of the setup for now.
I can get the lighting script to read my conf file variable and take that 10-bit int and adjust the lighting just fine.
But I'm having trouble with the wiimote script.
The basic code in question gets the existing int from the conf file just fine, increments it by one (when I press the "+" button on the wiimote for example) just fine, and even writes the new int back to the conf file just fine... once.
Upon a second button press I get an "argument of type 'int' is not iterable" error.
if (buttonPress):
    b = config.getint('levels', 'ch00')
    b += 1
    # configparser only stores strings; passing the raw int causes the
    # "argument of type 'int' is not iterable" error on the next pass
    config.set('levels', 'ch00', str(b))
    with open('wiimote.conf', 'w') as config_file:
        config.write(config_file)
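The error comes from passing an int to config.set, which only accepts strings; converting with str() makes the round-trip survive repeated presses. A minimal self-contained sketch (the section and channel names follow the snippet above):

```python
import configparser

config = configparser.ConfigParser()
config["levels"] = {"ch00": "5"}  # stand-in for the existing conf file

for _ in range(2):  # simulate two button presses
    b = config.getint("levels", "ch00")
    config.set("levels", "ch00", str(b + 1))  # values must be strings
    with open("wiimote.conf", "w") as config_file:
        config.write(config_file)

print(config.getint("levels", "ch00"))  # -> 7
```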
A socket could be a good way to exchange data between two Python programs: it's lightweight and doesn't require anything other than Python to run. It can also handle large amounts of data without hassle.
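A minimal sketch of that idea using UDP datagrams (the channel name and JSON payload are illustrative): the wiimote script pushes updates, and the lighting script applies whatever arrives:

```python
import json
import socket

def send_update(sock, addr, channel, value):
    # Sender side (wiimote script): one JSON datagram per update.
    sock.sendto(json.dumps({channel: value}).encode(), addr)

def recv_update(sock, state):
    # Receiver side (lighting script): merge each update into local state.
    data, _ = sock.recvfrom(4096)
    state.update(json.loads(data))
```

UDP keeps each update independent, which suits many small values changing at varying frequencies across nodes; if dropping an update is not acceptable, TCP or a message broker is the safer choice.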

Run python code from a certain point

I am doing some data analysis with Python, and it involves reading data at the beginning of the script. I am currently debugging, and it is cumbersome to wait for the data file to be read each time. Is there any way to do something similar to a breakpoint, so that Python will not need to read the data each time? It would just begin with the code below the reading of the data.
It sounds from your question like you have some lines at the beginning of a script which you do not want to process each time you run the script. That particular scenario is not really something that makes a lot of sense from a scripting point of view. Scripts are read from the top down unless you call a function or something. With that said, here is what I'm gathering you want your workflow to be like:
Do some time consuming data loading (once)
Try out code variations until one works
Be able to run the entire thing when you're done
If that's accurate, I suggest 3 options:
If you don't need the data that's loaded from step 1 in the specific code you're testing, just comment out the time consuming portion until you're done with the new code
If you do need the data, but not ALL of the data to test your new code, create a variable that looks like a small subset of the actual data returned, comment out the time consuming portion, then switch it back when complete. Something like this:
# data_result = time_consuming_file_parser()
data_result = [row1, row2, row3]
# new code using data_result
Finally, if you absolutely need the full data set but don't want to wait for it to load every time before you make changes, try looking into pdb, the Python debugger. It will let you put a breakpoint after the data load and then play around in the Python shell until you are satisfied with your result.
import pdb

# Place this right after the time-consuming load; execution pauses here
# and drops you into an interactive debugger shell.
pdb.set_trace()

python and another program writing to the same file

I have noticed that Python always remembers where it finished writing in a file and continues from that point.
Is there a way to reset that, so that if the file is edited by another program that removes certain text and adds other text, Python will not fill the gap with NULs on its next write?
I have the file open in the parent and the threading children are writing to it. I used flush to ensure the data is physically written to the file after each write, but that is all it does.
Is there another function I seem to have missed that will make Python append properly?
One safe, OS-independent and reliable approach is certainly to close the file and open it again for each write.
If the performance hindrance due to that is unacceptable, you could try using seek to move to the end of the file before writing. I did some naive testing in the interactive console, and indeed, using file.seek(0, os.SEEK_END) before writing worked.
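A minimal sketch of both the failure and the fix: the writer holds the file open while "another program" truncates it; seeking to the real end before writing avoids the NUL-filled gap:

```python
import os

f = open("shared.txt", "w")
f.write("A" * 100)
f.flush()  # data is on disk; the writer's position is now offset 100

# Simulate another program shortening the file to 10 bytes.
with open("shared.txt", "r+") as other:
    other.truncate(10)

# Writing now without repositioning would recreate offsets 10-99 as NULs.
f.seek(0, os.SEEK_END)  # move to the file's *current* end (offset 10)
f.write("B")
f.close()

print(os.path.getsize("shared.txt"))  # -> 11, not 101
```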
Note that I don't think having two processes write to the same file can be safe under most circumstances: you will end up with race conditions of some sort. One way around that is to implement file locks, so that a process only writes to the file after acquiring the lock. Getting this right may be tough, so check whether your application would not be in a better place using something carefully built and hardened over the years to allow simultaneous updates by various processes, such as an SQL engine (MySQL or PostgreSQL).
