I have a couple of questions regarding running code from multiple files from one .py file.
When I import the files with
from [subfolder] import [First scriptname - without .py]
from [subfolder] import [Second scriptname - without .py]
the scripts start running instantly. For example, if my scripts have print calls like the ones below, both are executed as soon as I run the combined-scripts file:
print("Hi Im Script One")
print("Hi Im Script Two")
Now, I could put them in functions and run the functions, but my files also have some variables that are not inside functions. The question I have is: is there a way to not start the script automatically after import?
Also, what happens with the variables inside these scripts: are they usable throughout the combined file, or do I need to declare them with something like "global"?
is there a way to not start the script automatically after import?
This is what Python's if __name__ == "__main__": is for.
Anything outside that block (imports, module-level code, etc.) will be run when the .py file is imported.
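For example, a minimal sketch (the file and variable names here are just placeholders):
# first_script.py (hypothetical name)
greeting = "Hi Im Script One"   # module-level variable, created as soon as the file is imported

def main():
    print(greeting)

if __name__ == "__main__":
    main()   # runs only when the file is executed directly, not when it is imported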
what happens with variables inside these scripts, are they usable throughout the combined file or do I need to state them with something like "global"?
They may be best put inside a class, or you can also do this (not sure how Pythonic this is):
from [FirstScriptName_without_.py] import [className_a], [functionName_b], [varName_c]
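For example (a sketch assuming a package folder called subfolder containing the hypothetical first_script.py from above), the module-level variables stay in the imported module's namespace, so no global is needed:
from subfolder import first_script            # runs first_script's top-level code once
print(first_script.greeting)                  # access its variables through the module name

from subfolder.first_script import greeting   # or pull individual names into this file
print(greeting)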
I'm making a program that detects Python files and reads their classes and functions so they can then be called. How do I list the functions from another Python file without importing it, and how do I call them?
Structure of the program:
Index.py (main py file)
/Content
/Modules
Modules.py
ChickenModule.py (module I want to get functions from)
Modules.py lists all the Python files from /Modules and stores them in a list.
Index.py calls the functions in Modules.py (Just to keep things less messy)
ChickenModule.py has a class named ChickenStuff that prints "I'm a chicken" whenever its self.(something) is called.
The goal of the program is for the user to be able to put .py files in /Modules, and when running Index.py the functions and classes of those .py files will be listed.
There's a built-in library called importlib that might be what you need. I'm assuming that you want the classes listed; you'll need inspect for this.
import os
import os.path as path
import importlib
import inspect

# get a list of all files in Content/Modules
filelist = os.listdir("Content/Modules")

# import every file ending in .py
for fname in filelist:
    if path.splitext(fname)[1] == '.py':  # splitext keeps the leading dot
        # load the module; Content/Modules must be importable (e.g. on sys.path,
        # or use the dotted name "Content.Modules.<name>" if it is a package)
        my_module = importlib.import_module(path.splitext(fname)[0])
        for _, obj in inspect.getmembers(my_module):  # iterate through the members
            if isinstance(obj, type):  # check whether the member is a class
                print(obj)
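If you also want the functions listed, as the question mentions, inspect has isfunction and isclass predicates that can be passed to getmembers; a small sketch, continuing from the loaded my_module above:
# list functions and classes of an already-imported module
for name, obj in inspect.getmembers(my_module, inspect.isfunction):
    print("function:", name)
for name, obj in inspect.getmembers(my_module, inspect.isclass):
    print("class:", name)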
I have a problem with relative paths in my Python 2.7 project. I have two files, let's call them script.py and importedScript.py, which are in different directories, because importedScript.py is in a subfolder.
importedScript.py has a method called openCSV(), which gets imported in script.py with
from subfolder.importedScript import openCSV
This works fine. The method openCSV(filename) has the following code inside:
script_path = os.path.dirname(os.path.abspath(__file__))
filepath = os.path.join(script_path, 'subfolder2/' + filename)
dataset = pd.read_csv(filepath)
This code imports a .csv file from a subfolder. It also works fine if I run importedScript.py by itself.
The problem is that when I run script.py, the relative path in importedScript.py is generated wrong. For some reason, it tries to load the .csv file from "subfolder2/" instead of "subfolder/subfolder2".
Does anyone know how to fix this?
Edit: subfolder2 contains different .csv files, and I want to open different files from different Python files.
you can pass the __file__ variable to the method on call:
def openCSV(file):
    here = os.path.dirname(os.path.abspath(file))
    ...etc
which can be called by doing openCSV(__file__)
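A minimal sketch of that suggestion (the extra parameter and the file names are assumptions based on the question):
# subfolder/importedScript.py
import os
import pandas as pd

def openCSV(filename, caller_file):
    # build the path relative to whichever script passed in its __file__
    here = os.path.dirname(os.path.abspath(caller_file))
    filepath = os.path.join(here, 'subfolder2', filename)
    return pd.read_csv(filepath)

# script.py
from subfolder.importedScript import openCSV
dataset = openCSV('data.csv', __file__)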
Solved; see my answer below, for anyone who might find this helpful.
I have two scripts a.py and b.py.
In my current directory "C:\Users\MyName\Desktop\MAIN", I run > python a.py.
The first script, a.py, runs in my current directory, does something to a bunch of files, and creates a new directory (testA) with the edited versions of those files, which are moved into that new directory as they are produced. Then I need to run b.py on the files in testA.
As a beginner, I would just copy and paste my b.py script into testA and execute the command again ("> python b.py"), which runs some commands on those new files and creates another folder (testB) with those edited files.
I am trying to eliminate the hassle of waiting for a.py to finish, moving into that new directory, pasting in b.py, and then running b.py. I am trying to write a bash script that executes these scripts one after the other while maintaining my hierarchy of directories.
#!/usr/bin/env bash
python a.py && python b.py
Script a.py runs smoothly, but b.py does not execute at all. There are no error messages about b.py failing; I just think it cannot execute because once a.py is done, b.py does not exist in that NEW directory.
Is there a small script I can add within b.py that moves it into the new directory? I actually tried changing b.py directory paths as well but it did not work.
For example in b.py:
mydir = os.getcwd() # would be the same path as a.py
mydir_new = os.chdir(mydir+"\\testA")
I changed mydir to mydir_new in all instances within b.py, but that also made no difference... I also don't know how to move a script into a new directory within bash.
As a little flowchart of the folders:
MAIN # main folder with unedited files and both a.py and b.py scripts
|
| (execute a.py)
|
--------testA # first folder created with first edits of files
|
| (execute b.py)
|
--------------testB # final folder created with final edits of files
TLDR: How do I execute a.py and b.py from the main test folder (bash script style?), if b.py relies on files created and stored in testA? Normally I copy and paste b.py into testA, then run b.py, but now I have 200+ files, so copying and pasting is a waste of time.
The easiest answer is probably to change your working directory, then call the second .py file from where it is:
python a.py && cd testA && python ../b.py
Of course you might find it even easier to write a script that does it all for you, like so:
Save this as runTests.sh in the same directory as a.py is:
#!/bin/sh
python a.py
cd testA
python ../b.py
Make it executable:
chmod +x ./runTests.sh
Then you can simply enter your directory and run it:
./runTests.sh
I managed to get b.py executing and producing the testB folder where I need it to, while remaining in the MAIN folder. For anyone who might wonder: at the beginning of my b.py script I would simply use mydir = os.getcwd(), which is normally the directory b.py is run from.
To keep b.py in MAIN while making it work on files in other directories, I wrote this:
mydir = os.getcwd()            # would be the MAIN folder
mydir_tmp = mydir + "//testA"  # add the testA folder name
os.chdir(mydir_tmp)            # change the current working directory (os.chdir returns None, so there is nothing useful to assign)
mydir = os.getcwd()            # read the working directory again; now it points to testA
Running the bash script now works!
In your batch file, you can set the %PYTHONPATH% variable to the folder with the Python module. This way, you don't have to change directories or use pushd for network drives. I believe you can also do something like
set "PYTHONPATH=%PYTHONPATH%;c:\the path\to\my folder\which contains my module"
This will append the path, I believe (it will only work if you have already set %PYTHONPATH% in your environment variables).
If you haven't, you can also just do
set "PYTHONPATH=c:\the path\to\my folder\which contains my module"
Then, in the same batch file, you can do something like
python -m mymodule ...
Although there are already answers, I still wrote a script for fun, and it could still be of help in some respects.
I wrote it for Python 3, so it is necessary to tweak some minor things to execute it on v2.x (e.g. the prints).
Anyway... the code creates a new folder relative to the location of a.py, creates and fills script b.py with code, executes b.py, and displays b.py's results and errors.
The resulting path-structure is:
testFolder
|-testA
| |-a.py
|-testB
| |-b.py
The code is:
import os, sys, subprocess

def getRelativePathOfNewFolder(folderName):
    return "../" + folderName + "/"

def getAbsolutePathOfNewFolder(folderName):
    # create new folder with absolute path:
    # get path of current script:
    tmpVar = sys.argv[0]
    # separate path from last slash and file name:
    tmpVar = tmpVar[:sys.argv[0].rfind("/")]
    # again to go one folder up in the path, but this time let the slash be:
    tmpVar = tmpVar[:tmpVar.rfind("/")+1]
    # append name of the folder to be created:
    tmpVar += folderName + "/"
    # for the crazy ones out there, you could also write this like this:
    # tmpVar = sys.argv[0][:sys.argv[0].rfind("/", 0, sys.argv[0].rfind("/")-1)+1] + folderName + "/"
    return tmpVar

if __name__ == "__main__":
    # do stuff here:
    # ...

    # create new folder:
    bDir = getAbsolutePathOfNewFolder("testB")
    os.makedirs(bDir, exist_ok=True)  # makedirs can create new nested dirs at once, e.g. "./new1/new2/andSoOn"

    # fill new folder with stuff here:
    # ...

    # create new python file in location bDir with code in it:
    bFilePath = bDir + "b.py"
    with open(bFilePath, "a") as toFill:
        toFill.write("if __name__ == '__main__':")
        toFill.write("\n")
        toFill.write("\tprint('b.py was executed correctly!')")
        toFill.write("\n")
        toFill.write("\t#do other stuff")

    # execute the newly created python file
    args = (
        "python",
        bFilePath
    )
    popen = subprocess.Popen(args, stdout=subprocess.PIPE)
    # use the next line if a.py has to wait until the subprocess execution is finished (in this case b.py)
    popen.wait()
    # you can get b.py's results with this:
    resultOfSubProcess, errorsOfSubProcess = popen.communicate()
    print(str(resultOfSubProcess))  # outputs: b'b.py was executed correctly!\r\n'
    print(str(errorsOfSubProcess))  # outputs: None

    # do other stuff
Instead of creating a new code file and filling it with code, you can of course simply copy an existing one, as shown here:
How do I copy a file in python?
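For instance, a one-liner with shutil (assuming an existing b.py next to a.py and the bDir variable from the code above):
import shutil
shutil.copy("b.py", bDir)   # copy the existing script into the new folder instead of generating it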
Your b.py script could take the name of the directory as a parameter. Access the first parameter passed to b.py with:
import sys
dirname = sys.argv[1]
Then iterate over the files in the named directory with:
import os
for filename in os.listdir(dirname):
    process(os.path.join(dirname, filename))  # process() stands in for whatever b.py does to each file
Also see glob.glob and os.walk for more options for processing files.
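Putting it together, a sketch of a self-contained b.py along these lines (the directory name and process() are placeholders):
# b.py
import os
import sys

dirname = sys.argv[1]   # e.g. "testA", passed on the command line

for filename in os.listdir(dirname):
    process(os.path.join(dirname, filename))   # placeholder for the real per-file work
It can then be chained from MAIN with something like python a.py && python b.py testA, so b.py never has to be copied anywhere.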
I'm planning to have project with following structure:
./runme.py
./my_modules/__init__.py
./my_modules/global_imports.py
./my_modules/user_defined_functions.py
The idea is to store important variables in global_imports.py, from where they will be imported into runme.py using from my_modules.global_imports import * (I know it is bad practice to import modules this way, but I promise there will be just a few variables with non-colliding names).
Four questions:
Two of the variables contained inside global_imports.py should be SCRIPT_PATH and SCRIPT_DIR. I've tried SCRIPT_PATH = os.path.realpath(__file__) and SCRIPT_DIR = os.path.dirname(SCRIPT_PATH), but it returns the path (directory) of global_imports.py, not of runme.py. How can I get the path (directory) of runme.py?
Inside global_imports.py I will probably import modules such as os and sys. I also need to import those modules inside runme.py. Is this considered a problem, when modules are imported first from another module and later from the main script, or vice versa?
Is it possible to import variables from global_imports.py into user_defined_functions.py? I consider this as bad practice I'm just curious.
Is there better approach to separate project into modules?
Addressing your questions in order:
In the first variable, SCRIPT_PATH, you are getting the full path of the file global_imports.py, which would be something like this:
SCRIPT_PATH = '/home/../../my_project/my_modules/global_imports.py'
Now, in order to get the directory of runme.py, we should consider another variable:
SCRIPT_PATH_DIR = os.path.dirname(SCRIPT_PATH)
This will give us the path
SCRIPT_PATH_DIR = '/home/../../my_project/my_modules/'
Now, to get to its parent directory, which contains runme.py, we can do this:
SCRIPT_DIR = os.path.abspath(os.path.join(SCRIPT_PATH_DIR, os.pardir))
Now SCRIPT_DIR gives the directory containing runme.py, i.e.:
SCRIPT_DIR = '/home/../../my_project/'
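Put together, a minimal sketch of global_imports.py along those lines:
# my_modules/global_imports.py
import os

SCRIPT_PATH = os.path.realpath(__file__)                                  # .../my_project/my_modules/global_imports.py
SCRIPT_PATH_DIR = os.path.dirname(SCRIPT_PATH)                            # .../my_project/my_modules
SCRIPT_DIR = os.path.abspath(os.path.join(SCRIPT_PATH_DIR, os.pardir))    # .../my_project (the folder holding runme.py)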
As per your project structure, runme.py should only contain an import of the main module and then the command to run the app, so it shouldn't contain any other imports. Still, if you need to use a module, explicitly import it in runme.py; as the Zen of Python says, 'Explicit is better than implicit'.
Yes, it is possible, you can do it like this:
from .global_imports import variable_name
but in general you should have a separate config.py or settings.py file in the '/../my_project/' directory, containing all the settings and variables you may need to use anywhere in the project.
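For example, a bare-bones settings.py (all names here are placeholders):
# my_project/settings.py (hypothetical)
DATA_DIR = "data"
DEBUG = True

# elsewhere in the project (provided my_project is on sys.path):
# from settings import DATA_DIR, DEBUG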
This approach is good enough, as far as I've seen. Your main project directory contains runme.py and the other modules used inside the project; 'my_modules' is one such module. You can have more such modules inside the project directory. A better approach is to keep settings and configuration inside just one of the modules (such as my_modules) and to use the other modules for the functionality.
Hope this helps, please comment if something is unclear.