Run a C# application from a Python script

I've just about finished coding a decently sized disease transmission model in C#. However, I'm fairly new to .NET and am unsure how to proceed. Currently I just double-click on the .exe file and the model imports config settings from text files, does its thing, and outputs the results into a text file.
What I would like to do next is write a Python script to do the following:
Run the simulation N times (N > 1000)
After each run rename the output file and store it (e.g. ./output.txt -> ./acc/outputN.txt)
Aggregate, parse, and analyze the outputs
Output the result in some clean format (possibly excel)
The majority of my programming experience to date has been in C/C++ on linux. I'm fairly confident about the last two items; however, I have no idea how to proceed for the first two. Here are some specific questions I'd like advice on:
What is the easiest/best way to run my C# .exe from a python script?
Does anyone have advice on the best way to do filesystem operations in Python on a Windows system?
Thanks!

As of Python 2.6+ you should be using the subprocess module (Docs):
import subprocess
import shutil

cmdLine = r"c:\path\to\my\app.exe"
for v in range(1000):
    subprocess.call(cmdLine)  # run the simulation and wait for it to finish
    shutil.move("output.txt", "./acc/output-%d.txt" % v)

The answer to your problems can be found in 'os' in the python standard library. Documentation for doing various operations, such as handling files and starting processes, can be found here.
Process management (Running your C# program) can be found here and file operations are here.
EDIT: Actually, instead of the above process link, you should use the subprocess module.
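For the file operations in particular, os covers the renaming step, and os.path.join handles Windows paths for you; a minimal sketch, assuming Python 3:
```
import os

n = 1  # run number
os.makedirs("acc", exist_ok=True)  # create ./acc if it does not already exist
os.rename("output.txt", os.path.join("acc", "output%d.txt" % n))
```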

How to pass an R variable to a Python variable in Pycharm?

I am new to Pycharm; however, I want to take advantage of my R and Python knowledge. I am a big fan of both languages, and I am constantly learning more about them.
I am hoping to pass an R variable to a Python variable, similar to Jupyter Notebook.
I could not find any example code anywhere of doing this.
R code:
```
x <- 5
```
Python code:
```
# Some conversion method needs to be added
print(x)
```
Python console:
```
>>> 5
```
This is possible because Jupyter provides its own Kernel that code runs in. This kernel is able to translate variables between languages.
Pycharm does not provide a kernel, and instead executes Python code directly through an interpreter on your system. It's similar to doing python my_script.py. AFAIK vanilla Pycharm does not execute R at all.
There are plugins for Pycharm that support R and Jupyter notebooks. You might be able to find one that does what you want.
I usually solve this problem by simply adhering to the Unix philosophy:
Rscript foo.R | python bar.py
where foo.R prints the data to STDOUT and bar.py expects to read it from STDIN (you could also read/write from files). This approach of course requires you to define a clear interface of data between the scripts, rather than passing all the variables indiscriminately, which I don't personally mind because I do that anyway as a matter of design best practices.
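For instance, the receiving end of that pipe can be as simple as this sketch (the one-value-per-line format is an assumption):
```
# bar.py: read whatever foo.R printed to its standard output
import sys

for line in sys.stdin:
    x = line.strip()
    print("got:", x)
```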
There are also packages like reticulate which will allow you to call one language from the other. This might be more useful if you like to switch languages a lot.
Thanks for the above answer! That is great. I did find a solution that could help other people who use Pycharm.
If you have the R plugin installed, you can use the following code.
Python code to save a DataFrame to a feather file:
```
import pandas as pd
import pyarrow.feather as feather

data = pd.read_csv('C:/Users/data.csv')
feather.write_feather(data, 'C:/Users/data.feather')
```
R code to retrieve the feather file:
```
library(arrow)

# read_feather() is the current call; arrow::read_arrow() is deprecated
data <- arrow::read_feather('C:/Users/data.feather')
print(data)
```
However, this process is very similar to writing a file to CSV in Python and then loading the CSV into R. The difference is that feather is a binary, columnar format, so it is faster to read and write than CSV and it preserves column types across the two languages. Below is the official documentation:
GitHub: https://github.com/wesm/feather
Apache Arrow: https://arrow.apache.org/docs/python/install.html
RStudio: https://www.rstudio.com/blog/feather/

Is there a way to continuously collect output from a Python script into a C++ program?

So, I'm currently trying to build a C++ application that calls a Python script. The main idea is that the Python script runs a loop and prints decisions based on user input. I want the C++ program to be able to wait and read whenever there is output from the Python side. I tried to make them "talk" via a file, but it turned out badly. Any ideas?
PS: I'm calling the script using system("start powershell.exe C:\\python.exe C:\\help.py");
If there is any better way, please let me know! Thanks
You could write to a file from Python and have the C++ program poll it periodically for changes.
No, there is no standard way to capture the output if you start the program using std::system.
The operating system facility you're looking for is a "stream". Standard C++ provides access only to the standard streams, which you can use if you pipe the output of the Python program into the C++ program when you start it. Example shell command:
python help.py | cpp_program
If you do this, then you can continuously read the output from the standard input stream in the C++ program. There is no standard way to create extra streams in C++, although that possibility is typically provided by the operating system.
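One practical note on the Python side of such a pipe: stdout is block-buffered when it is not attached to a terminal, so help.py should flush after each line or the C++ reader may not see anything for a long time. A minimal sketch of that loop (the decision logic is a placeholder):
```
import time

while True:
    decision = "some decision"  # placeholder for the real logic
    print(decision, flush=True)  # flush so the reading process sees it immediately
    time.sleep(1)
```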

Utility to manage multiple python scripts

I saw this post on Medium, and wondered how one might go about managing multiple python scripts.
How I Hacked Amazon's Wifi Button
This describes a system where you need to run one or more scripts continuously to catch and react to events in your network.
My question: Let's say I had multiple python scripts that I wanted to run while I work on other things. What approaches are available to manage these scripts? I have to imagine there is a better way than having a large number of terminal windows running each script individually.
I am coming back to python, and have no formal training in computer programming, so any guidance you can provide will be greatly appreciated.
Let's say I had multiple python scripts that I wanted to run. What approaches are available to manage these scripts? I have to imagine there is a better way than having a large number of terminal windows running each script individually.
If you have several .py files in a directory that you want to run, in no particular order, you can do:
import glob

# execfile() is Python 2 only; in Python 3, exec() on the file contents is the equivalent
for pyFile in glob.glob('path/*.py'):
    with open(pyFile) as f:
        exec(f.read())
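If the goal is instead to keep several scripts running at the same time, a small launcher that starts each one as its own process is another option (a sketch):
```
import glob
import subprocess
import sys

# Start each script in its own interpreter process, then wait for all of them
procs = [subprocess.Popen([sys.executable, py]) for py in glob.glob('path/*.py')]
for p in procs:
    p.wait()
```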
Your system already runs a large number of background processes, with output to the system log or occasionally to a service-specific log file.
A common arrangement for quick and dirty deployments -- where you don't necessarily want to invest in making the scripts robust and well-behaved enough to run as proper services -- is to start the script inside screen or tmux. You can detach when you don't need to be looking at it, and can reattach at any time -- even from a remote login -- to view the output, or to troubleshoot.
Take a look at luigi (I've not used it).
https://github.com/spotify/luigi
These days (five years after the question was asked) a lot of people use docker compose. But that's a little heavyweight, depending on what you want to do.
I just came across the script server of bugy today. Maybe it's a solution for you or somebody else.
(I am just trying to find a tampermonkey script structure for python...)

How do I speed up repeated calls to a ruby program (github's linguist) from python?

I'm using github's linguist to identify unknown source code files. Running this from the command line after a gem install github-linguist is insanely slow. I'm using python's subprocess module to make a command-line call on a stock Ubuntu 14 installation.
Running against an empty file: linguist __init__.py takes about 2 seconds (similar results for other files). I assume this comes entirely from Ruby's startup time. As @MartinKonecny points out, it seems that it is the linguist program itself.
Is there some way to speed this process up -- or a way to bundle the calls together?
One possibility is to just adapt the linguist program (https://github.com/github/linguist/blob/master/bin/linguist) to take multiple paths on the command-line. It requires mucking with a bit of Ruby, sure, but it would make it possible to pass multiple files without the startup overhead of Linguist each time.
A script this simple could suffice:
require 'linguist/file_blob'

ARGV.each do |path|
  blob = Linguist::FileBlob.new(path, Dir.pwd)
  puts "#{blob.name}: #{blob.language}"  # blob.sloc, blob.size, etc. are also available
end
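From Python you could then pay the Ruby startup cost once per batch instead of once per file. A sketch, assuming the script above is saved as batch_linguist.rb (a hypothetical name):
```
import subprocess

# One Ruby startup for many files instead of one per file
files = ["__init__.py", "main.c", "app.js"]
output = subprocess.check_output(["ruby", "batch_linguist.rb"] + files, text=True)
print(output)
```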

python to capture output of another Windows GUI program

The situation is like this:
I want to capture the pop-ups of IPmsg.exe in my python program.
There is an easy way of doing it, which is reading from the log file. But I would like to know if this can be done without bringing log files into the discussion.
For more on IPmsg.exe: http://ipmsg.org/index.html.en
That was being specific.
Now, what would be a generic approach to capturing the output of a Windows-based GUI program?
There are generally two ways to talk to GUI programs on Windows, if you hate the log files:
Use its command line interface, if it has one that outputs to stdout as messages come in (I doubt this one does)
Use the win32 api or a wrapper for it to search for specific windows (polling as necessary or installing hooks to find out when they appear) and then grabbing text from them using more api calls. See this question: Get text from popup window
+1 for using the log files by the way, far easier.
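If you do go the win32 route, a minimal polling sketch with the pywin32 package could look like this (the window-title fragment "IPMsg" is an assumption):
```
import time
import win32gui  # from the pywin32 package

def find_windows_by_title(fragment):
    """Return handles of visible top-level windows whose title contains fragment."""
    matches = []
    def callback(hwnd, _):
        if win32gui.IsWindowVisible(hwnd) and fragment in win32gui.GetWindowText(hwnd):
            matches.append(hwnd)
    win32gui.EnumWindows(callback, None)
    return matches

# Poll once per second for pop-ups whose title mentions IPMsg
while True:
    for hwnd in find_windows_by_title("IPMsg"):
        print(win32gui.GetWindowText(hwnd))
    time.sleep(1)
```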
From Python you can only capture the output of applications that you start directly from Python, e.g. using the subprocess module:
http://docs.python.org/library/subprocess.html
Otherwise you have basically no way to read another application's output directly.
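For that case, a short sketch (my_app.exe is a stand-in for a real console program):
```
import subprocess

# Capture stdout line by line from a process this script starts itself
proc = subprocess.Popen(["my_app.exe"], stdout=subprocess.PIPE, text=True)
for line in proc.stdout:
    print("captured:", line.rstrip())
proc.wait()
```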
