Get resource from python resource root - python

I am using the PyCharm IDE. I marked a folder as a resource root and want to get a file from that directory, and I was wondering what the appropriate way to do so is.
In Java, you can use getClass().getResource("/resourceName.extension").
Is there some way to get a path in Python in a similar manner?

Based on what you have said, it sounds like you just need to include the directory of the file with a simple import statement.
For instance, if your files are set up as such:
c:\program\main
c:\program\resources
then you can just do a simple
import resources
However, you could run into coupling issues if you have any sub-packages. Solving the coupling issue involving resources is covered in more detail in another thread, which I have linked below.
Managing resources in a Python project

What I want can be accomplished by this answer.
I used the code as follows:
os.path.join(os.path.dirname(__file__), '../audio/music.wav')
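The same idea can be wrapped in a small helper (a sketch; the audio folder and file name are just the example from above):

```python
import os

def resource_path(relative):
    # Build an absolute path relative to this file's directory, similar in
    # spirit to Java's getClass().getResource().
    base = os.path.dirname(os.path.abspath(__file__))
    return os.path.normpath(os.path.join(base, relative))

music = resource_path("../audio/music.wav")
```

Because the path is anchored to the module's own location rather than the current working directory, it keeps working no matter where the program is launched from.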

Related

Best Practice for ini file locations specifically for custom scripts that you want to run as a package

I am new to Python and have been assigned a small project at work. I am writing a simple email script to be used by many other Python jobs for various paths etc. (all on the same server). The email functionality is working; that is not the issue. The issue lies in detecting the .ini and the best approach to achieving the goal.
For the sake of this post, these are the directory locations and names:
C:/Python_Custom_Packages/Classes/Email.py #Email Script
C:/Python_Custom_Packages/Conf/Email_config.ini #Email Config
C:/Example_Project/Test_Job_Email.py #Test Job that will call the script
To get Email.py to read Email_config.ini, I'm using the following:
config = configparser.ConfigParser()
config.read('./config/email_details.ini')
as well as setting the working directory to C:/Python_Custom_Packages so that the root (./) is the nearest common directory. I have done this because it is how the other jobs do it. I have been a developer for a few years, mainly in Java, so I have programming experience, and something about this setup doesn't sit right with me, so I've done some googling to find the 'best practice' way of dealing with .ini files (and other configs). Most examples state to just use the following:
config = configparser.ConfigParser()
config.read('email_details.ini')
Which to me means that the ini sits on the same directory path as the script.
My first question is: is this best practice (having the config and the script that utilises it in the same directory)? Or is what the team currently does a good idea, with a config folder in a different directory, pointing the workspace to the closest shared path as root and adding the directory ('./....Email_config.ini')? To me this will cause issues if migrating/moving the jobs etc.
The issue I am hitting, other than the above query, is that if I execute Test_Job_Email.py it uses its own path as root, meaning that when I call Email.py inside this script it can't find the .ini.
So my second question is: what is the best approach for ensuring that email_details.ini is picked up by Email.py when it is being executed from a totally different script in a different path?
Note:
C:/Python_Custom_Packages/Classes/
C:/Python_Custom_Packages/Conf/
are in PYTHONPATH Environmental Variable already.
Sorry if any of this is confusingly written; as I say, I'm starting out with Python and rarely post on Stack Overflow, but I couldn't find any definitive answers on the best approach.
So I'm still not 100% sure on the best practices, but for ease it makes sense that the .ini be in the same path as the .py that calls it. That way you can add something like
script_path = os.path.dirname(os.path.realpath(__file__))
before the read, and that will get the path (without the file name) of the current script, i.e. the email script, regardless of where the script was executed from. From there you can add it to the conf.read method as part of the path:
conf.read(script_path + '/email_details.ini')
This could be a cheap workaround, but it's what I will be using for now.
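As a runnable sketch of that pattern (a temporary directory stands in for C:/Python_Custom_Packages/Classes/, and the [smtp] section contents are made up):

```python
import configparser
import os
import tempfile

with tempfile.TemporaryDirectory() as script_dir:
    # In Email.py itself this would instead be:
    # script_dir = os.path.dirname(os.path.realpath(__file__))
    with open(os.path.join(script_dir, "email_details.ini"), "w") as f:
        f.write("[smtp]\nhost = mail.example.com\n")

    # Resolving relative to the script's own directory means the .ini is
    # found no matter which directory the calling job is launched from.
    config = configparser.ConfigParser()
    config.read(os.path.join(script_dir, "email_details.ini"))
    print(config["smtp"]["host"])  # mail.example.com
```

Using os.path.join instead of string concatenation also avoids hardcoding the path separator.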

How should I handle having a local dependency shared by two projects?

Here's the theoretical scenario:
Project_A and Project_B are two independent processes for the client. Both projects rely on a custom module called Module_X.
I currently have the following folder structure:
project_a
project_a/module_x
project_b
project_b/module_x
This functionally works, but has the drawback that if I make a change in project_b/module_x I then have to manually copy the updated contents of module_x over to project_a/module_x so that they stay congruent and I get consistent results between the two projects.
What is a better way in which I can handle this?
module_x cannot be put up on PyPI, as it contains logic for sensitive information resources.
This will add that path to the paths where Python searches for modules:
import sys
sys.path.insert(1, '/home/user/projectb')
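A slightly more robust variant computes the shared location relative to the current file instead of hardcoding an absolute path, so both projects can point at a single copy of module_x (the layout in the comment is an assumption):

```python
import os
import sys

# Assumed layout:
#   /home/user/shared/module_x/
#   /home/user/project_a/main.py   <- this file
#   /home/user/project_b/main.py
shared = os.path.normpath(
    os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", "shared"))
if shared not in sys.path:
    sys.path.insert(1, shared)

# import module_x  # both projects now resolve the single shared copy
```

If the module is private but installable, `pip install -e /path/to/module_x` in each project's environment achieves the same thing without touching sys.path.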

Python: Create virtual import path

Is there any way to create a virtual import path in Python?
My directory structure is like this:
/
    native
        scripts
            some.py
            another.py
    [Other unrelated dirs]
The root is the directory from which the program is executed. At the moment I add native/scripts/ to the search path so I can do import some, another instead of from native.scripts import some, another, but I'd like to be able to do it like this:
from native import some
import native.another
Is there any way to achieve this?
Related questions:
Making a virtual package available via sys.modules
Why not move some.py and another.py out into the native directory so that everything Just Works and so that people returning to the source code later won't be confused about why things are and aren't importable? :)
Update:
Thanks for your comments; they have usefully clarified the problem! In a case like yours, I generally put the functions and classes that I might want to import inside, say, native.some, where I can easily get to them. Then I take the script code, and only the script code (the thin shim that interprets arguments and starts everything running by passing them to a main() or go() function as parameters), and put that inside a scripts directory. That keeps external-interface code cleanly separate from code that you might want to import, and means you don't have to try to fool Python into having modules in several places at once.
In /native/__init__.py, include:
from .scripts import some, another
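The sys.modules technique from the linked question can be sketched like this; the modules are created in memory purely to keep the example self-contained (in the real layout they would be native/scripts/some.py etc.):

```python
import sys
import types

# Build in-memory stand-ins for the package and a submodule.
native = types.ModuleType("native")
some = types.ModuleType("native.some")
some.greet = lambda: "hello"

# Registering the dotted name makes the import system treat native.some
# as a real submodule, even though no native/some.py exists on disk.
sys.modules["native"] = native
sys.modules["native.some"] = some
native.some = some  # needed so `from native import some` also works

import native.some  # resolved straight from sys.modules
print(native.some.greet())  # hello
```

This is exactly what `from .scripts import some` in native/__init__.py does not do on its own: it sets the attribute, but `import native.some` only succeeds if "native.some" is also registered in sys.modules.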

Combining Sphinx documentation from multiple subprojects: Handling indices, syncing configuration, etc

We have a multi-module project documented with the (excellent) Sphinx. Our setup is not unlike one described on the mailing list. Overall this works great! But we have a few questions about doing so:
The submodule tables of contents will include index links. At best these will link to the wrong indices. (At worst this seems to trigger a bug in Sphinx, but I'm using the devel version so that's reasonable). Is there a way of generating the index links only for the topmost toctree?
Are there best practices for keeping the Sphinx configuration in sync between multiple projects? I could imagine hacking something together around from common_config import *, but curious about other approaches.
While we're at it, the question raised in the mailing list post (alternative to symlinking subproject docs?) was never answered. It's not important to me, but it may be important to other readers.
I'm not sure what you mean by this. Your project's index appears to be just fine. Could you clarify this, please?
As far as I've seen, from common_config import * is the best approach for keeping configuration in sync.
I think the best way to do this is something like the following directory structure:
main-project/
conf.py
documentation.rst
sub-project-1/
conf.py - imports from main-project/conf.py
documentation.rst
sub-project-2/
conf.py - likewise, imports from main-project/conf.py
documentation.rst
Then, to just package sub-project-1 or sub-project-2, use this UNIX command:
sphinx-build main-project/ <output directory> <paths to sub-project docs you want to add>
That way, not only will the main project's documentation get built, the sub-project documentation you want to add will be added as well.
To package main-project:
sphinx-build main-project/ <output directory>
I'm pretty sure this scheme will work, but I've yet to test it out myself.
Hope this helps!
Regarding point 2 (including common configuration), I'm using:
In Python 2:
execfile(os.path.abspath("../../common/conf.py"))
In Python 3:
exec(open('../../common/conf.py').read())
Note that, unlike the directory structure presented by @DangerOnTheRanger, I prefer to keep a separate directory for common documentation, which is why common appears in the path above.
My common/conf.py file is a normal Sphinx file. Then each specific documentation configuration includes that common file and overrides values as necessary, as in this example:
import sys
import os
execfile(os.path.abspath("../../common/conf.py"))
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.todo',
'sphinx.ext.viewcode',
]
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = True
# If true, links to the reST sources are added to the pages.
html_copy_source = False
html_show_sourcelink = False
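The exec-based sharing above can be demonstrated in a self-contained way; the settings written to the temporary file stand in for a real common/conf.py:

```python
import os
import tempfile

# Hypothetical contents of common/conf.py: shared Sphinx settings.
common_conf = "extensions = ['sphinx.ext.autodoc']\nhtml_theme = 'alabaster'\n"

with tempfile.TemporaryDirectory() as d:
    common_path = os.path.join(d, "conf.py")
    with open(common_path, "w") as f:
        f.write(common_conf)

    # What each sub-project's conf.py effectively does: pull in the common
    # settings, then override or extend them.
    namespace = {}
    exec(open(common_path).read(), namespace)
    namespace["extensions"].append("sphinx.ext.todo")  # project-specific extra

print(namespace["html_theme"])  # alabaster
```

In a real conf.py the names land in the module's global namespace (no explicit dict), so any value defined after the exec call simply shadows the common one.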

viewing files in python?

I am creating a sort of "command line" in Python. I have already added a few functions, such as changing login/password, executing, etc., but is it possible to browse files in the directory that the main file is in with a command/module, or will I have to write the module myself and use the import command? Same thing with changing directories to view, too.
Browsing files is as easy as using the standard os module. If you want to do something with those files, that's entirely different.
import os
all_files = os.listdir('.') # gets all files in current directory
To change directories you can issue os.chdir('path/to/change/to'). In fact there are plenty of useful functions found in the os module that facilitate the things you're asking about. Making them pretty and user-friendly, however, is up to you!
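A minimal sketch of how those two commands might look inside such a shell (the function names are made up):

```python
import os

def list_dir(path="."):
    # Return sorted entries, marking directories with a trailing separator.
    entries = []
    for name in sorted(os.listdir(path)):
        if os.path.isdir(os.path.join(path, name)):
            entries.append(name + os.sep)
        else:
            entries.append(name)
    return entries

def change_dir(path):
    os.chdir(path)       # raises FileNotFoundError for a bad path
    return os.getcwd()   # echo the new location back to the user
```

Wiring these up to command words like `ls` and `cd` in the shell's input loop is then just a dictionary lookup.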
I'd like to see someone write a semantic file-browser, i.e. one that auto-generates tags for files according to their content and then allows views and searching accordingly.
Think about it... take an MP3, look up the lyrics, run it through Zemanta, bam! A PDF file, an OpenOffice file, etc. That'd be pretty kick-butt! Probably fairly intensive too, but it'd be pretty dang cool!
Cheers,
-C
