Python use of ~ versus python-path methods

Can someone explain the difference between using
filepath = 'C:/Users/your_name/Documents/subfolder/'
and
filepath = '~/Documents/subfolder/'
?
I am writing code that I'd like to be reusable on a second computer with an identical folder structure, but a different location for the Documents folder.
The following pages on SO appear to give indications on how to do it differently, but I'm wondering how much better those approaches are compared to the method shown above. From an SO/Google point of view, searching for "~" has not led to any useful answers either.
How to get an absolute file path in Python
Relative imports for the billionth time
Find current directory and file's directory

If the path is the same past the user name, then ~ would be best; you just have to make sure that you expand it before passing it to routines that expect an absolute path.
~ is a shell convention that stands for the home directory of the user currently in session (it is not an environment variable, so os.path.expandvars will not expand it). To expand it in Python, call os.path.expanduser:
os.path.expanduser('~/your/tilde/path')
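For the original question, a minimal sketch of how this might look (the subfolder path is just the one from the question; expanduser leaves strings that don't start with ~ unchanged, so it is safe to apply unconditionally):
import os

filepath = os.path.expanduser('~/Documents/subfolder/')
# On Windows ~ resolves to something like C:/Users/your_name,
# on Linux/macOS to /home/your_name or /Users/your_name.
print(filepath)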

Related

Best Practice for ini file locations specifically for custom scripts that you want to run as a package

I am new to Python and have been assigned a small project at work. I am writing a simple email script to be used by many other Python jobs for various paths etc. (all on the same server). The email functionality is working; that is not the issue. The issue lies in locating the .ini and the best approach to achieving that.
For the sake of this post, these are the directory locations and names:
C:/Python_Custom_Packages/Classes/Email.py #Email Script
C:/Python_Custom_Packages/Conf/Email_config.ini #Email Config
C:/Example_Project/Test_Job_Email.py #Test Job that will call the script
To get Email.py to read Email_config.ini I'm using the following:
config = configparser.ConfigParser()
config.read('./config/email_details.ini')
as well as setting the working directory to C:/Python_Custom_Packages so that the root (./) is the nearest common directory. I have done this because that is how the other jobs do it. I have been a developer for a few years, mainly in Java, so I have programming experience, and something about this setup doesn't sit right with me, so I've done some googling to find the 'best practice' way of dealing with .ini files (and other configs). Most examples just state the following:
config = configparser.ConfigParser()
config.read('email_details.ini')
which to me means that the .ini sits in the same directory as the script.
My first question is: is this best practice (having the config and the script that utilises it in the same directory)? Or is what the team currently does (a config folder in a different directory, pointing the workspace to the closest shared path as root, and addressing the file as './.../Email_config.ini') a good idea? To me this will cause issues if migrating/moving the jobs etc.
The other issue I am hitting is that if I execute Test_Job_Email.py, it uses its own path as root, meaning that when I call Email.py inside this script it can't find the .ini.
So my second question is: what is the best approach for ensuring that email_details.ini is picked up by Email.py when it is being executed from a totally different script in a different path?
Note:
C:/Python_Custom_Packages/Classes/
C:/Python_Custom_Packages/Conf/
are already in the PYTHONPATH environment variable.
Sorry if any of this is confusingly written; as I say, I'm starting out with Python and rarely post on Stack Overflow, but I couldn't find any definitive answers on the best approach.
So I'm still not 100% sure on the best practices, but for ease it makes sense for the .ini to sit in the same path as the .py that calls it. That way you can add something like
script_path = os.path.dirname(os.path.realpath(__file__))
before the read, and that will get the directory of the current script (i.e. the Email script) regardless of where it was executed from. From there you can pass it to conf.read as part of the path:
conf.read(os.path.join(script_path, 'email_details.ini'))
This could be a cheap workaround, but it's what I will be using for now.
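Putting the pieces together, a minimal sketch of what the top of Email.py might look like under this approach (file names follow the question; whether the .ini lives next to the script or in the sibling Conf folder is a choice, so both variants are shown):
import os
import configparser

script_path = os.path.dirname(os.path.realpath(__file__))
config = configparser.ConfigParser()

# .ini next to the script:
config.read(os.path.join(script_path, 'email_details.ini'))
# or, .ini in the sibling Conf directory from the question's layout:
# config.read(os.path.join(script_path, '..', 'Conf', 'Email_config.ini'))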

Addressing the upper-level directory using a relative path in Python

I have a Python file at "mainDirectory/subdirectory/myCode.py", and in my code I want to refer to a .csv file at "mainDirectory/some_data.csv". I don't want to use an absolute path, since I run the code on different operating systems and an absolute path may cause trouble. So, in short: is there a way to refer to the upper directory of the current directory using a relative path in Python?
I know how to refer to the subdirectories of the current directory using a relative path, but I'm looking for a way to address upper-level directories with a relative path. I don't want to use an absolute path, since the file is going to be run in different folders on different operating systems.
Update:
I found one method here (it is not based on a relative filesystem path, but it does the job):
the key is using "." in relative imports to reach higher-level packages.
For example, one dot refers to the current package and two dots to the package one level up. In the above case, from "myCode.py" (which lives in "subdirectory") you can write
from . import something
to import a sibling module within "subdirectory", and
from .. import something
to import from "mainDirectory".
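For context, a minimal sketch of the package layout this assumes (module names here are illustrative; both directories need an __init__.py, and myCode.py must be run as part of the package, e.g. python -m mainDirectory.subdirectory.myCode, for relative imports to work):
# mainDirectory/
#     __init__.py
#     helper.py
#     subdirectory/
#         __init__.py
#         something.py
#         myCode.py

# inside myCode.py:
from . import something   # sibling module in subdirectory
from .. import helper     # module one package level up, in mainDirectory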
If you look at the Python documentation, you will find that the language is not designed to be used like that. Instead you'll want to try to refactor your directories to put myCode.py in a parent level directory like mainDirectory.
mainDirectory
├── myCode.py
└── subdirectory
    └── some_data.csv
You are facing an issue with the nature of the PYTHONPATH. There is a great article on dealing with PYTHONPATH here:
https://chrisyeh96.github.io/2017/08/08/definitive-guide-python-imports.html
It can be challenging if you're new to the language (or if you've been using it for years!) to navigate Python import statements. Reading up on PEP8 conventions for importing, and making sure your project conforms to standard language conventions can save you a lot of time, and a lot of messy git commits when you need to refactor your directory tree down the line.
You can always derive the absolute path at run time using the API provided by the os package.
import os
dirname = os.path.dirname(__file__)
filename = os.path.join(dirname, 'relative/path/to/file/you/want')
A complete answer was always a click away: Relative paths in Python
There are several tools in os.path that you can use:
from os.path import dirname
upper = dirname(dirname(__file__))
Here is a link to a more comprehensive answer on accessing upper directories:
How do I get the parent directory in Python?

Overriding Python pdb to show relative paths instead of absolute?

When stepping ('s') through a Python-based plugin with pdb, the absolute path of the current file is shown. In this particular case the absolute path is really long, so I would like pdb to show either a relative or partial path instead.
Any pointers or suggestions on where I should look? I thought maybe I didn't have the current directory set correctly, but pdb always seems to want to show the absolute path. Do I need to subclass pdb? Thanks!
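Since no answer is included here, one hedged sketch of the subclassing route the question asks about (Python 3; RelPdb is a made-up name, and it only post-processes the formatted stack line rather than changing pdb's internals):
import os
import pdb

class RelPdb(pdb.Pdb):
    # Rewrite the "/long/abs/path.py(42)func()" prefix relative to the
    # current working directory, when the file lives under it.
    def format_stack_entry(self, frame_lineno, lprefix=': '):
        line = super().format_stack_entry(frame_lineno, lprefix)
        return line.replace(os.getcwd() + os.sep, '', 1)

# usage: RelPdb().set_trace() instead of pdb.set_trace()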

Python 2.7, OS-portable way to determine permissions / contents of a user-entered path

Disclaimer: I have searched generically (Google) and here for some information on this topic, but most of the answers are either very old or don't seem to quite make sense to me, so I apologize in advance if this seems simple or uninformed.
Problem: My app accepts command line input which may be either a path or a file, and I need to determine several things about it.
Is it a path or a file?
Is it relative or absolute?
Is it readable and/or writable? (I need both read and write test results, ignoring the possibility of a race situation.)
One caveat: while a
try:
    file = open(filename, 'w')
except OSError as e:
    # miscellaneous error handling code here
    pass
would obviously tell me whether the parameter (filename in the above example) existed, was writable, etc., I do not understand enough about exception codes to know how to interpret the resulting exception. It also wouldn't provide the relative/absolute information.
Assuming that there is no single method to do all of this, I would need to know three things:
How to determine relative / absolute
Is it pointing to a file or a directory
can the EUID of the program read the location, and same for write.
I am seeking to learn from the information I gather here; I am new to Python and have bitten off a significant project. I have mastered all of it except this part. Any help would be appreciated. (Pointers to good help sites welcome!) (Except docs.python.org, that one's bookmarked already ;-) )
Here are your answers. The documentation specifies that the following works for both Windows and Linux.
How to determine relative / absolute
os.path.isabs returns True if the path is absolute, False if not.
Is it pointing to a file or a directory
Similarly, use os.path.isdir to find out whether the path is a directory.
You can use os.path.isfile(path) to find out whether the path is a file.
can the EUID of the program read the location, and same for write.
You can use os.access(path, mode) to know if operations requiring permissions specified by mode are possible on file specified by path or not.
P.S. This will not work for files being accessed over the network. You can use os.stat instead; it is the right way to get all of the information, but it is a more costly call, so you should try to get all of the info in one call. To interpret the results, you can use the stat module.
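A short sketch tying the three checks above together (describe_path and the sample path are illustrative names, not from the question):
import os

def describe_path(path):
    return {
        'absolute': os.path.isabs(path),       # absolute vs relative
        'is_file': os.path.isfile(path),       # existing regular file?
        'is_dir': os.path.isdir(path),         # existing directory?
        'readable': os.access(path, os.R_OK),  # effective UID can read?
        'writable': os.access(path, os.W_OK),  # effective UID can write?
    }

print(describe_path('some/user/entered/path'))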
Seems like os.path would take care of most of these needs with isabs(), isfile(), isdir(), etc. Off the top of my head I can't think of a function that will give a simple True/False for read/write access for a given file, but one way to tackle this might be to use the os module's stat function and have a look at the file's permissions and owner.
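As a rough sketch of the stat-based route this answer suggests (POSIX-style permission bits; the path is a placeholder):
import os
import stat

st = os.stat('some/user/entered/path')
mode = st.st_mode
is_dir = stat.S_ISDIR(mode)                  # directory?
is_file = stat.S_ISREG(mode)                 # regular file?
owner_can_read = bool(mode & stat.S_IRUSR)   # owner read bit
owner_can_write = bool(mode & stat.S_IWUSR)  # owner write bit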

Determining a file's path name from different working directories in python

I have a python module that is shared among several of my projects (the projects each have a different working directory). One of the functions in this shared module executes a script using os.spawn. The problem is, I'm not sure what pathname to give to os.spawn since I don't know what the current working directory will be when the function is called. How can I reference the file in a way that any caller can find it? Thanks!
So I just learned about the __file__ variable, which provides a solution to my problem. I can use __file__ to get a pathname which will be constant among all projects, and use that to reference the script I need to call, since the script will always be in the same location relative to __file__. However, I'm open to other/better methods if anyone has them.
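A minimal sketch of what that looks like in the shared module (the helper script name and the use of subprocess rather than the question's os.spawn call are assumptions for illustration):
import os
import subprocess
import sys

# Directory of this shared module, independent of the caller's working directory.
_HERE = os.path.dirname(os.path.abspath(__file__))

def run_helper():
    script = os.path.join(_HERE, 'helper_script.py')  # hypothetical script name
    subprocess.call([sys.executable, script])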
Put it in a well known directory (/usr/lib/yourproject/ or ~/lib or something similar), or have it in a well known relative path based on the location of your source files that are using it.
The following piece of code will find the location of the calling module, which makes sense from a programmer's point of view:
## some magic to allow paths relative to calling module
if path.startswith('/'):
    self.path = path
else:
    frame = sys._getframe(1)
    base = os.path.dirname(frame.f_globals['__file__'])
    self.path = os.path.join(base, path)
I.e. if your project lives in /home/foo/project and you want to reference a script 'myscript' in scripts/, you can simply pass 'scripts/myscript'. The snippet will figure out that the caller is in /home/foo/project and that the entire path should be /home/foo/project/scripts/myscript.
Alternatively, you can always require the programmer to specify a full path, and check using os.path.exists if it exists.
You might find the materials in this PyCon 2010 presentation on cross platform application development and distribution useful. One of the problems they solve is finding data files consistently across platforms and for installed vs development checkouts of the code.
