Note: before you mark my question as a duplicate, please read it completely.
I want to run a Python file using another Python file.
I have tried runpy, os.system, and subprocess. The problem with subprocess and os.system is that they fail on systems which have both Python 2 and Python 3 installed if I just run with python, and if I run with python3 they fail for people with a single installation.
The problem with runpy is that it does not work according to my needs.
The following is my directory structure:
test\
    average\
        average.py
        average_test.py
    many similar directories like average...
    run_tests.py
The content of average.py is:
def average(*args):
    # Do something
The content of average_test.py is:
from average import average

def average_test():
    assert average(1, 2, 3) == 2
Now, if I use runpy.run_path, it raises an ImportError saying that average is not a module. os.system and subprocess.call work perfectly, but I hope my "testing framework" will be used by many people, so I can't use those two functions. Isn't there any other way to do it? I have searched the whole of SO and Google but didn't find a solution.
Also, sys.path.append/insert will not help, as I can't tell my "users" to add it to every file of theirs.
Is there no easy way to do it? I mean, pytest accomplishes this, so there must be a way.
Thank you, moderators, for reading my question.
EDIT: I forgot to mention that I want the code inside the if __name__ == '__main__' block to be run too, and I have also tried a snippet from another SO answer, which fails as well. The snippet was:
def exec_full(filepath):
    global_namespace = {
        "__file__": filepath,
        "__name__": "__main__",
    }
    with open(filepath, 'rb') as file:
        exec(compile(file.read(), filepath, 'exec'), global_namespace)
Please note that the directory structure was just an example; the user may have a different code/directory structure.
NOTE: I found the answer. I needed to do subprocess.call([sys.executable, file_path]). sys.executable returns the path of the Python executable for the currently running interpreter.
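For example, run_tests.py could launch each test file like this (a minimal sketch; the discovery loop is an illustrative assumption, not part of the original setup):
import os
import subprocess
import sys

# Walk the tree and run every *_test.py with the interpreter that is
# running this script, so it works regardless of how Python was installed.
for root, dirs, files in os.walk("."):
    for name in files:
        if name.endswith("_test.py"):
            # run from the test's own folder so its sibling imports resolve
            subprocess.call([sys.executable, name], cwd=root)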
Create an empty __init__.py in the average folder, and then try the import:
from average import average
It will work like a charm :)
test\
    average\
        average.py
        average_test.py
        __init__.py
    many similar directories like average...
    run_tests.py
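One detail worth noting: when importing from the test directory, `from average import average` resolves to the average module inside the package rather than the function. A one-line re-export in the (otherwise empty) __init__.py fixes that (a sketch based on the structure above):
# average/__init__.py
from .average import average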
Related
Is it possible to have a Makefile grabbing arguments from either a config.ini or config.yml file?
Let's consider this example: we have a Python main.py file which is written as a CLI. Now, we do not want users to be filling in arguments to a Python CLI in the terminal, so we have an example config.ini file with the arguments:
PYTHON FILE:
import typer

def say_name(name: str):
    print('running the code')
    print(f'Hello there {name}')

if __name__ == "__main__":
    typer.run(say_name)
config.ini FILE:
[argument]
name = person
Makefile FILE:
run_code:
	python main.py ${config.ini.argument.name}
Is it possible to have a project infrastructure like this?
I am aware that the spaCy project does exactly this. However, I would like to do something like this even outside an NLP project, without the need to use spaCy.
You need to find, or write, a tool which will read in your .ini file and generate a set of makefile variables from it. I don't know where you would find such a thing, but it's probably not hard to write one using a Python module that parses .ini files.
Suppose you have a script ini2make that will do this, so that if you run:
ini2make config.ini
it will write makefile variable assignment lines to stdout, like this:
config.ini.argument.name = person
config.ini.argument.email = person@somewhere.org
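If you end up writing ini2make yourself, a minimal sketch using Python's configparser module (the script name and output format are just the assumptions from above) could look like:
#!/usr/bin/env python3
# ini2make: emit one makefile variable assignment per key in an .ini file
import sys
import configparser

config_file = sys.argv[1]
parser = configparser.ConfigParser()
parser.read(config_file)

for section in parser.sections():
    for key, value in parser.items(section):
        print("{}.{}.{} = {}".format(config_file, section, key, value))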
etc. Then you can integrate this into your makefile very easily (here I'm assuming you're using GNU make) through use of GNU make's automatic include file regeneration:
include config.ini.mk

config.ini.mk: config.ini
	ini2make $< > $@
Done. Now, whenever config.ini.mk doesn't exist or config.ini has changed since config.ini.mk was last generated, make will run the ini2make script to update it and then re-execute itself automatically to read the new values.
Then you can use the generated variables, like $(config.ini.argument.name), etc.
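Putting it together with the original example, the whole Makefile (under the same assumptions) would look like:
include config.ini.mk

run_code:
	python main.py $(config.ini.argument.name)

config.ini.mk: config.ini
	ini2make $< > $@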
I had this script working for me before I decided to rewrite everything and make it portable.
Without delving too much into the details, there's a central Bash script which calls 5 other Bash scripts in their own respective folders. I have no intention of porting to Windows anytime soon; as of now, this is just for Linux.
The execution path of the central Bash script is:
dos.1/1-init.sh dos.1/
dos.2/1-trace-to-file.sh dos.2/ dos.1/
dos.3/1-recognize-categories.sh dos.3/
dos.4/1-ping-in-groups.sh dos.4/ dos.3/
dos.5/init.sh dos.5/ dos.4/
I run it with ./init.sh
Before the script was 'portable', I was using explicit file paths inside each respective script, and all was well and good. The program itself is a combination of Bash and Python, and writes to files in one directory so that they can be manipulated in various ways before being read back into different parts of the program.
I understand that the fastest way to do this would be to write a monolithic Python script, using subprocess calls for the Bash side of things. However, I am doing it this way to ease maintenance, and (before I started making it 'portable') it was lightning fast.
My issue now is this: each time I have to read text into Python (either from SQL or from a file), there's always this added garbage. Up until this point, I have been using sed, awk, and Python's .rstrip() method to manage this, which is all well and good, but this one damn function will not play nice, and I feel there must be a better way.
In Bash I call it with:
prog_dir=$1
data_dir=$2

"$prog_dir"/2fast-ping.py "$data_dir"/group0.txt > "$prog_dir"/group0_averages.txt
"$prog_dir"/2fast-ping.py "$data_dir"/group1.txt > "$prog_dir"/group1_averages.txt
...
Now I know that I could write to file from within Python, but in this instance I have other reasons not to.
The issue is that when the 2fast-ping.py script is run, it reads the text file in with commas and a newline char. I have vigorously checked, and I can confirm that the group#.txt files 100% do not contain commas. Here's the Python:
import sys
import subprocess
import select
from concurrent.futures import ThreadPoolExecutor
filename = sys.argv[1]
f = open(filename, "r")
ips = [elem.rstrip('\n') for elem in f]
print(ips)
f.close()
The script goes on to do some work on the IPs afterwards, but this is the painful part. If I call the script directly from the CLI, ./2fast-ping.py ../dos.3/group0.txt, the text is processed properly and the subsequent instructions actually work. But when it's called from the first init script, the program basically sh*ts itself, because each line is read in with commas. It works until the point where it starts to use the processed info, then:
<actual IP would be here>
ping: ('##.###.###.###',): Name or service not known
Of course, the issue is the ('',), but Python is adding that in, and I don't know how to stop it :(
Any ideas?
The Python code was okay; I was just passing an additional / with the argument :(
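If you want the script to be defensive against that sort of thing in the future, one option (a sketch, not part of the original fix) is to normalise the incoming path before using it:
import os
import sys

# os.path.normpath collapses doubled separators, e.g. "dos.3//group0.txt"
filename = os.path.normpath(sys.argv[1])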
I'm trying to learn how to use variables from Jenkins in Python scripts. I've already learned that I need to call the variables, but I'm not sure how to use them with os.path.join().
I'm not a developer; I'm a technical writer. This code was written by somebody else. I'm just trying to adapt the Jenkins scripts so they are parameterized so we don't have to modify the Python scripts for every release.
I'm using inline Jenkins Python scripts inside a Jenkins job. The Jenkins string parameters are "BranchID" and "BranchIDShort". I've looked through many questions that talk about how you have to establish the variables in the Python script, but in the case of os.path.join(), I'm not sure what to do.
Here is the original code. I added the part where we establish the variables from the Jenkins parameters, but I don't know how to use them in the os.path.join() function.
# Delete previous builds.
import os
import shutil

BranchID = os.getenv("BranchID")
BranchIDshort = os.getenv("BranchIDshort")

print "Delete any output from a previous build."
if os.path.exists(os.path.join("C:\\Doc192CS", "Output")):
    shutil.rmtree(os.path.join("C:\\Doc192CS", "Output"))
I expect output like: c:\Doc192CS\Output
I am afraid that if I do the following:
if os.path.exists(os.path.join("C:\\Doc", BranchIDshort, "CS", "Output")):
    shutil.rmtree(os.path.join("C:\\Doc", BranchIDshort, "CS", "Output"))
I'll get: c:\Doc\192\CS\Output.
Is there a way to use the BranchIDshort variable in this context to get the output c:\Doc192CS\Output?
User @Adonis gave the correct solution in a comment. Here is what he said:
Indeed, you're right. What you would want to do is rather:
os.path.exists(os.path.join("C:\\", "Doc{}CS".format(BranchIDshort), "Output"))
(in short, use a format string for the second argument)
So the complete corrected code is:
import os
import shutil

BranchID = os.getenv("BranchID")
BranchIDshort = os.getenv("BranchIDshort")

print "Delete any output from a previous build."
if os.path.exists(os.path.join("C:\\Doc{}CS".format(BranchIDshort), "Output")):
    shutil.rmtree(os.path.join("C:\\Doc{}CS".format(BranchIDshort), "Output"))
Thank you, @Adonis!
Working with scientific data, specifically climate data, I am constantly hard-coding paths to data directories in my Python code. Even if I were to write the most extensible code in the world, the hard-coded file paths prevent it from ever being truly portable. I also feel that having information about the file system of your machine coded into your programs could be a security issue.
What solutions are out there for handling the configuration of paths in Python to avoid having to code them out explicitly?
One solution relies on configuration files.
You can store all your paths in a JSON file like so:
{
    "base_path": "/home/bob/base_folder",
    "low_temp_area_path": "/home/bob/base_folder/low_temp"
}
and then, in your Python code, you can just do:
import json

with open("conf.json") as json_conf:
    CONF = json.load(json_conf)

and then use your paths (or any configuration variable you like) like so:
print("The base path is {}".format(CONF["base_path"]))
First off, it's always good practice to add a main function to each file to exercise that file's class or functions. Along with this, determine the current working directory; this becomes incredibly important when running Python from a cron job or from a directory that is not the current working directory. No JSON files or environment variables are needed then, and you will get interoperation across Mac, RHEL, and Debian distributions.
This is how you do it, and it will work on Windows as well if you use '\' instead of '/' (if that is even necessary in your case).
if "__main__" == __name__:
workingDirectory = os.path.realpath(sys.argv[0])
As you can see, when you run your command, the working directory is calculated whether you provide a full path or a relative path, meaning it will work in a cron job automatically.
After that, if you want to work with data stored in a sub-folder of that directory, use:
fileName = os.path.join(workingDirectory, 'sub-folder-of-current-directory/filename.csv')
fp = open(fileName, 'r')
or, in the case of a folder parallel to your project directory:
fileName = os.path.join(workingDirectory, '../folder-at-same-level-as-my-project/filename.csv')
fp = open(fileName, 'r')
I believe there are many ways around this, but here is what I would do:
Create a JSON config file with all the paths I need defined.
For even more portability, I'd have a default path where I look for this config file but also have a command line input to change it.
In my opinion, passing arguments from the command line would be the best solution. You should take a look at argparse. It gives you a clean way to handle arguments from the command line. For example:
myDataScript.py /home/userName/datasource1/
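A minimal sketch of that approach (the script and argument names are illustrative only):
import argparse

parser = argparse.ArgumentParser(description="Process climate data")
parser.add_argument("data_dir", help="path to the data directory")
args = parser.parse_args()

print("Reading data from {}".format(args.data_dir))

Run it as: python myDataScript.py /home/userName/datasource1/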
I am writing a Python script with the following objectives:
1. Starting from the current working directory, change directory to child directory 'A'.
2. Make slight adjustments to a fort.4 file.
3. Run a Fortran binary (the syntax of which is ../../../../ continuing until I hit the folder containing the binary); return to 2. until my particular objective is complete, then
4. Back out of the child directory to the parent, then enter another child directory and return to 2. until I have iterated through all the folders in question.
The code is coming along well. I am having to rely heavily on Python's os module for the directory work. However, I have no experience with (a) making minor adjustments to a file using Python, or (b) running an executable. Could you give me some ideas on Python modules, direct me to a similar Stack Overflow question, or suggest ways this can be accomplished? I understand this is a vague question, so please ask if you do not understand what I am asking and I will elaborate. Also, the changes I have to make to this fort.4 file are repetitive in nature; they all happen at the same position in the file.
Cheers
EDIT: here is the entire fort.4 file:
file_name
movie1.dat !name of a general file the binary reads
nbr_box ! line3-8 is general info
2
this_box
1
lrdf_bead
.true.
beadid1
C1 !this is the line I must change
beadid2
F4 !This is a second line I must change
lrdf_com
.false.
bin_width
0.04
rcut
7
So really, I need to change "C1" to "C2", for example. The changes are insignificant to make, but I must emphasize that the main Fortran executable reads this fort.4, as well as the movie1.dat file that I have already created. Hope this helps.
OK, so there are a few important things here. First, we need to be able to manage our cwd; for that we will use the os module:
import os
Whenever a method operates on a folder, it is important to change directory into the folder and back to the parent afterwards. This can also be achieved with the os module:
def operateOnFolder(folder):
    os.chdir(folder)
    ...
    os.chdir("..")
Now we need to run some method for each directory, which comes down to this:
for k in os.listdir("."):
    if os.path.isdir(k):
        operateOnFolder(k)
Finally, in order to operate on a pre-existing Fortran file, we can use the built-in file operations:
fileSource = open("someFile.f", "r")
fileText = fileSource.read()
fileSource.close()

fileLines = fileText.split("\n")
# change a line in the file with -> fileLines[42] = "the 42nd line"
fileText = "\n".join(fileLines)

fileOutput = open("someFile.f", "w")
fileOutput.write(fileText)
fileOutput.close()
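Applied to the fort.4 layout from the question, the edit itself could be as simple as this (a sketch building on fileLines above; it assumes the value always sits on the line after its keyword, as in the file shown in the question):
# replace the value following "beadid1", e.g. "C1 !..." becomes "C2"
for i, line in enumerate(fileLines):
    if line.strip() == "beadid1":
        fileLines[i + 1] = "C2"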
You can create and run your executable output.fx from source.f90:
import subprocess

subprocess.call(["gfortran", "-o", "output.fx", "source.f90"])  # create
subprocess.call(["./output.fx"])                                # execute