I want to automate a specific task. I have a bash script that should read data from user arguments when it is run:
bash my_script.sh /etc/??? path/dst
So far, I've been getting the data in my script by accessing the positional parameters ($1, $2) and variables, and that's fine.
But inside the script, I want to run a Python program. So...
python test.py arg1 arg2 arg3
The thing is, I have to read the data via Python and then access the output back in my_script.sh.
There is a constraint that I shouldn't create a file on the system. I wondered about using export, but export is volatile: it only stores the variable while that process is alive, and when I get back to my_script.sh there is no trace of the data. I also don't have the privilege to write my variable and its data to ~/.bashrc.
I have also read this and this, but they don't work the way I want.
If you think the way I'm doing it is wrong, please let me know.
Thanks to @chepner's comment, the solution was using $() instead of export.
As mentioned in the comments:
export only passes information from a parent to a child process, not the other direction.
So one of the correct ways to get the output of another application within your script is as follows:
output=$(python test.py arg1 arg2 arg3)
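On the Python side, anything the script writes to standard output is what $() captures. A minimal sketch of test.py (just echoing its arguments back, as an illustration):

import sys

# Whatever this script prints to stdout is exactly what
# output=$(python test.py arg1 arg2 arg3) captures in my_script.sh.
print(" ".join(sys.argv[1:]))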
I wrote a Python script that works. The first line of my script reads an HDF5 file:
readFile = h5py.File('FileName_00','r')
After reading the file, my script performs several mathematical operations, which work successfully. As output I get a function F.
Now I want to repeat the same script for different files. Basically, I only need to change FileName_00 to FileName_01, ..., FileName_10. I was thinking of creating a script that calls this script!
I have never written a script that calls another script, so any advice would be appreciated.
One option: turn your existing code into a function which takes a filename as an argument:
def myfunc(filename):
    readFile = h5py.File(filename, 'r')
    ...
Now, after your existing code, call your function with the filenames you want to input:
myfunc('FileName_00')
myfunc('FileName_01')
myfunc('FileName_02')
...
Even more usefully, I definitely recommend looking into
if __name__ == '__main__':
and argparse (https://docs.python.org/3/library/argparse.html), as jkr noted.
Also, if you put your algorithm in a function like this, you can import it and use it in another Python script. Very useful!
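A minimal sketch of how those pieces could fit together (the argument name and help text here are illustrative):

import argparse
import h5py

def myfunc(filename):
    readFile = h5py.File(filename, 'r')
    # ... your existing mathematical operations here ...

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('filename', help='HDF5 file to process')
    args = parser.parse_args()
    myfunc(args.filename)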
Although there are certainly many ways to achieve what you want without multiple Python scripts, as other answerers have shown, here's how you could do it.
In Python we have the function os.system (learn more about it here: https://docs.python.org/3/library/os.html#os.system). Simply put, you can use it like this:
os.system("INSERT COMMAND HERE")
Replacing INSERT COMMAND HERE with the command you use to run your Python script. For example, with a script named script.py you could conceivably (depending on your environment) include the following line of code in a secondary Python script:
os.system("python script.py")
Running the secondary Python script would run script.py as well. FWIW, I don't necessarily think this is the best way to accomplish your goal; I tend to agree with DraftyHat's solution in most circumstances. But in case you were curious, this is certainly an option in Python. I've used this functionality in the past, albeit not to run other Python scripts, but to execute commands in the shell. Hope this helps!
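Tying it back to the question above, a minimal sketch (assuming script.py takes the filename as a command-line argument, as in the previous answer):

import os

# Run script.py once per input file, FileName_00 through FileName_10.
for i in range(11):
    os.system("python script.py FileName_%02d" % i)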
I am trying to create a Python program that deobfuscates PowerShell malware which uses IEX. My Python program hooks the IEX function so that, instead of running the desired string, it prints the string.
Now my problem is that I have several .ps1 scripts (for example 1.ps1, 2.ps1, etc.) and I want to run all of them in the same session, so that all the local variables created by the 1.ps1 script can be used by the 2.ps1 script...
I have tried many approaches. First I tried subprocess, but it creates a new session every time I enter a command (the path of the .ps1 file). Then I found this project on GitHub:
https://gist.github.com/MarkBaggett/a7c10195b2626c78009bf73bcdb6db20
It is really awesome and did work, but it still seems that when I run the command ./1.ps1 it does not store the local variables in the session (maybe it opens a new one when running a script).
I also tried "Get-Content 1.ps1 | iex", but that crashes because I have functions in the scripts, for example:
function Invoke-Expression()
{
    param(
        [Parameter( `
            Mandatory=$True, `
            Valuefrompipeline = $True)]
        [String]$Command
    )
    Write-Host $Command
}
taken from the PSDecode project:
https://github.com/R3MRUM/PSDecode/blob/master/PSDecode.psm1#L28
Anyway, any ideas about how I can do this? I have those scripts on my desktop but no idea how to run them in the same session so that they use the same local variables...
Two things that I did try, though they are poor solutions:
1. Concatenate all the scripts into one script and run that; but the next time I use this program I might have 100 scripts or more, and I really don't want to do this.
2. Save the local variables from each script and load them into the next; I want to keep this as a worst-case fallback, and I haven't gotten there yet.
Thank you so much for helping me, and sorry for my grammar; English is not my mother tongue, as you can see :)
Maybe you're looking for dot sourcing:
Runs a script in the current scope so that any functions, aliases, and variables that the script creates are added to the current scope.
. c:\scripts\sample.ps1
If so, dot-source your .ps1 files and call the functions inside them.
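Since the scripts are being driven from Python, here is a minimal sketch of keeping one PowerShell session alive and dot-sourcing each script into it (assuming powershell.exe is on PATH; -Command - makes PowerShell read commands from standard input):

import subprocess

# One PowerShell process: every dot-sourced script runs in the same
# session, so variables and functions persist between scripts.
ps = subprocess.Popen(
    ["powershell.exe", "-NoProfile", "-Command", "-"],
    stdin=subprocess.PIPE,
    text=True,
)
for script in ["1.ps1", "2.ps1"]:
    ps.stdin.write(". .\\" + script + "\n")
ps.stdin.close()
ps.wait()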
Hope that helps.
I have a script that adds variables to the environment. That script is called through
subprocess.call('. myscript.sh', shell=True)
Is there a way I can get the modified environment and use it in my next subprocess call? This question shows that you can get the output of one call and chain it to another call: Python subprocess: chaining commands with subprocess.run. Is there something similar for passing the environment?
You'll have to output the variables' contents somehow. You're spawning a new process, which will not propagate the environment variables back, so your Python app will not see those values.
You could either make the script echo those to some file, or to the standard output if possible.
(Technically, it would be possible to stop the process and extract the values if you really wanted to hack that, but it's a bad idea.)
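A minimal sketch of the standard-output approach (assuming a POSIX shell; some_command is a placeholder for whatever you run next):

import subprocess

# Source the script, then dump the resulting environment to stdout.
out = subprocess.check_output(
    ". ./myscript.sh && env",
    shell=True,
    text=True,
)
# Parse NAME=value lines into a dict (naive: breaks on multi-line values).
env = dict(
    line.split("=", 1) for line in out.splitlines() if "=" in line
)
# Pass the captured environment to the next call.
subprocess.call(["some_command"], env=env)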
I am working in Windows and just learning to use Python (Python 2.7).
I have a bunch of script files ("file1.script", "file2.script", "file3.script", ...) that are executed in TheProgram.exe. Python has already given me the ability to create these script files automatically, but now I want to run each of these script files successively, back-to-back, in TheProgram.exe.
So far I have figured out how to use the subprocess module in Python to start "TheProgram.exe" in a new process (child process?) and load the first script file as follows:
my_process = subprocess.Popen(["Path to TheProgram.exe", "Path to File1.script"])
As seen, simply "opening" the script file in TheProgram.exe, or passing it as an argument in this case, will execute it. Once File1.script is done, TheProgram.exe generates an output file and then just sits there; it does not terminate. This is what I want, because now I would like to load File2.script in the same process without terminating it (File2.script depends on File1.script completing successfully), then File3.script, etc.
Is this possible? And if so, how? I cannot seem to find any documentation or anyone else who has had this problem. If I can provide other information, please let me know; I am also new to posting on these forums. Thanks so much for any assistance.
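For what it's worth, whether this can work depends entirely on TheProgram.exe: if, and only if, it accepts further script paths on its standard input, a purely hypothetical sketch would look like this:

import subprocess

# Hypothetical: this only works if TheProgram.exe reads additional
# script paths from its standard input, which depends on the program.
my_process = subprocess.Popen(
    ["Path to TheProgram.exe", "Path to File1.script"],
    stdin=subprocess.PIPE,
)
for script in ["Path to File2.script", "Path to File3.script"]:
    my_process.stdin.write(script + "\n")
    my_process.stdin.flush()
my_process.stdin.close()
my_process.wait()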
I have a series of scripts whose invocation I am automating in Python with subprocess.Popen. Basically, I call script A, then script B, then script C, and so forth.
Script A sets a bunch of local shell variables, with commands such as set SOME_VARIABLE=SCRIPT_A and set PATH=%SCRIPT_A:/=\;%PATH%.
Scripts B and C then need to see the effects of this. On Unix, you would call script A with "source script_a.sh"; the effect lasts in the current command window. However, subprocess.Popen effectively launches a new window (kind of).
Apparently subprocess.Popen is not the command I want to use for this. How would I do it?
Edit: I have tried parsing the file (which is all 'set' statements) and passing the result as a dictionary to the 'env' argument of subprocess.Popen, but it doesn't seem to have worked.
You can use the env argument to Popen with a Python dictionary containing the variables; then you don't need to run the command that just sets variables.
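A minimal sketch (the variable value is illustrative, and scriptB.bat is a hypothetical name for the next script):

import os
import subprocess

# Start from the current environment and add what Script A would set.
env = os.environ.copy()
env["SOME_VARIABLE"] = "SCRIPT_A"
# Script B now sees SOME_VARIABLE without Script A ever running.
subprocess.Popen("scriptB.bat", env=env, shell=True)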
If 'Script A' is generated by another process, you will either need to change the other process so that its output file is in a format you can import into your Python script, or write a parser in Python that can extract the variables from 'Script A' and set them within your Python script.
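A minimal sketch of such a parser (assuming Script A contains only simple 'set NAME=VALUE' lines, with no variable expansion):

def parse_set_file(path):
    # Collect NAME/VALUE pairs from a batch file of 'set' statements.
    env = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line.lower().startswith("set ") and "=" in line:
                name, value = line[4:].split("=", 1)
                env[name.strip()] = value
    return env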
If all you want is to call a series of batch files from Python, you could create a helper batch file which calls all of them, like this:
call scriptA
call scriptB
call scriptC
and run this helper batch file using subprocess.Popen.
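A minimal sketch (helper.bat is a hypothetical name for the batch file above; because all three scripts run in one cmd.exe session, variables set by scriptA are visible to scriptB and scriptC):

import subprocess

# Run the helper batch file and wait for all three scripts to finish.
proc = subprocess.Popen("helper.bat", shell=True)
proc.wait()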