Loading image from the same folder as project in Python, Pygame - python

I am trying to make an adventure game in Python, and I need to know how to load an image that is in the same folder as the code, using Pygame. How do I do it? I have tried
Character = pygame.image.load('Resources/MainCharFront.png')
but I'm getting an error:
pygame.error: Couldn't open Resources/MainChar_Front.png
I really need it to be in the same folder, because I am often switching devices and my file system is always different.

If you have structured your code as a Python package (which you should), you can use the pkg_resources module to access resource files (images, etc.) that are part of your project.
For example, if I have the following layout:
./mypackage/__init__.py
./mypackage/main.py
./mypackage/images/character.jpg
I can write in mypackage/main.py:
import pygame
import pkg_resources
Character = pygame.image.load(
    pkg_resources.resource_filename('mypackage', 'images/character.jpg'))
You can see this in action below:
>>> import mypackage.main
pygame 1.9.6
Hello from the pygame community. https://www.pygame.org/contribute.html
>>> mypackage.main.Character
<Surface(359x359x24 SW)>
>>>

In your comment you say that the image is in the same directory as your code, however the path you are showing implies that you are trying to load it from a sub-directory called Resources:
Character = pygame.image.load('Resources/MainCharFront.png')
So you can likely fix your problem by removing that from the path and just using:
Character = pygame.image.load('MainCharFront.png')
However, that is not the approach I would recommend. You are better off keeping resources in a separate sub-directory like Resources to keep things organized. You said that you want to use a flat structure with everything in one folder because you move the game between different systems with different file systems. I will assume from that that you are having issues with the path separator on those systems. That is fairly easy to handle, though.
@larsks has suggested one good approach. You do not have to go quite that far, though, to keep structure in your resources.
The easy way to deal with different path separators on different file systems is to use os.path.join() to link your path components with the file system appropriate separator, like this:
Character = pygame.image.load(os.path.join('Resources', 'MainCharFront.png'))
This will allow you to move between Windows, Linux, etc. without having to flatten your structure. os.path.join() can take multiple path components as arguments, not just 2, so you can have as much hierarchy as you need. Just break up the path string into separate strings where the slashes would be, like this:
os.path.join('Resources', 'images', 'MainCharFront.png')
You can find the docs for the os.path.join() function here
Just to be overly clear, os.path.join() is not the same as the standard string join() method (which joins strings using the separator you give it). os.path.join() determines the separator for you based on the system it is run on.
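To make the difference concrete, here is a small sketch (file names taken from the question):

```python
import os.path

# os.path.join inserts the separator for the OS it runs on
joined = os.path.join('Resources', 'images', 'MainCharFront.png')
print(joined)

# str.join, by contrast, uses exactly the separator you give it
hardcoded = '/'.join(['Resources', 'images', 'MainCharFront.png'])
print(hardcoded)
```

On Linux or macOS both lines print the same thing, but on Windows the first uses backslashes while the second keeps the hard-coded forward slashes.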

Related

Get current directory - 'os' and 'subprocess' libraries are banned

I'm stuck between a rock and a hard place.
I have been given a very restricted Python 2/3 environment where both the os and subprocess libraries are banned. I am testing file-upload functionality using Python Selenium; creating the file is straightforward. However, when I use the method driver.send_keys(xyz), send_keys expects a full path rather than just a file name. Any suggestions on how to work around this?
I do not know if it will work in your very restricted Python 2/3, but you might try the following:
create an empty file, say empty.py, like so:
with open('empty.py','w') as f:
pass
then do:
import empty
and:
pathtofile = empty.__file__
print(pathtofile)
This exploits the fact that an empty text file is a legal Python module (remember to pick a name that is not already in use), and that the import machinery generally sets the __file__ dunder, though this is not required, so you might end up with None. Keep in mind that if it is not None, it is the path to the file itself (empty.py), so you will need to process it further to get the directory itself.
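Putting the steps above together, a minimal sketch might look like this (only sys and plain string handling are used, since os is off-limits; the trailing-filename stripping is one possible way to recover the directory):

```python
import sys

# Create an empty module file; pick a name that is not already in use.
with open('empty.py', 'w'):
    pass

sys.path.insert(0, '')  # make sure the current directory is importable
import empty

path_to_file = empty.__file__
print(path_to_file)

# __file__ points at empty.py itself; strip the filename to get the
# containing directory (this may be '' if the path was relative).
directory = path_to_file[: -len('empty.py')].rstrip('/\\')
print(directory)
```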
With no way of using the os module, it would seem you're out of luck unless the placement of your script is static (i.e. you define the current working directory as a constant) and you then handle all the path/string operations within that directory yourself.
You won't be able to explore what's in the directory, but you can keep tabs on any files you create and just store the paths yourself. It will be tedious, but there's no reason why it shouldn't work.
You won't be able to delete files, I don't think, but you should be able to clear their contents.

Using pathlib in Python to map a file path from one OS to another, and the slashes are wrong

I have a python script that works with another program via API. I need to send the program the directory of a file. The problem I am having is I need to account for the possible difference in paths if the script and program are on different machines and/or OSes. If the script and program are on the same machine it's not an issue. But if they are on different machines, the machine with the script will have a networked path:
script (mapped network drive):
Z:\files\file.txt
program:
/mnt/user/disk1/files/file.txt
So the Z drive points to the mapped networked drive the program has access to.
import pathlib
location = 'Z:\\files\\file.txt'
map_source = 'Z:\\'
map_destination = '/mnt/user/disk1/'
newpath = location.replace(map_source, '')
print(pathlib.PurePath(map_destination, newpath))
So if I give the script Z:\files\file.txt as input, it should remove Z:\ from location and replace it with /mnt/user/disk1/ and return /mnt/user/disk1/files/file.txt. The problem is it is returning with the wrong slashes:
\mnt\user\disk1\files\file.txt
How can I get it to determine what the correct slashes should be? My understanding is that PurePath uses the correct slash depending on the OS the script is run on, but I might be running this on a Windows machine and sending the path to a Linux machine. I realize I can probably do this manually with a regex or something, but is there some way with pathlib or some already existing module? I can't just tell it what to convert to, since I don't know what system the destination will be; I suppose it would have to parse the path and figure it out.
I realize I can probably do this manually with a regex or something but is there some way with pathlib or some already existing module?
Use the specific PurePath subclasses? PurePath dispatches between pathlib.PurePosixPath and pathlib.PureWindowsPath internally based on the system it's being run on, but you can use these subclasses directly if you know what's what.
Not that it'll help much, as I don't think there's any real "bridge" between the two: PurePath (and its subclasses) can take path segments as input, but doesn't return path segments.
Also note that your Windows paths are broken: \ is an escape character in non-raw Python strings, so e.g. \f is going to be interpreted as an ASCII form feed (FF).
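A sketch of the subclass approach, using the mapping from the question: parse the input with PureWindowsPath (which understands backslashes regardless of the OS the script runs on), then rebuild it with PurePosixPath for the Linux destination.

```python
from pathlib import PurePosixPath, PureWindowsPath

location = 'Z:\\files\\file.txt'
map_source = 'Z:\\'
map_destination = '/mnt/user/disk1/'

# Interpret the input explicitly as a Windows path
win = PureWindowsPath(location)
relative = win.relative_to(PureWindowsPath(map_source))

# Re-join the remaining segments explicitly as a POSIX path
newpath = PurePosixPath(map_destination, *relative.parts)
print(newpath)  # /mnt/user/disk1/files/file.txt
```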

Analyzing impact of adding an import-path

I have the following directory structure:
TestFolder:
test.py
CommonFolder:
common.py
And in test.py, I need to import common.py.
In order to do that, in test.py I add the path of CommonFolder to the system paths.
Here is what I started off with:
sys.path.append(os.path.join(os.path.dirname(os.path.dirname(__file__)), 'CommonFolder'))
Then I figured that / is a valid separator in pretty much every OS, so I changed it to:
sys.path.append(os.path.dirname(os.path.dirname(__file__)) + '/CommonFolder')
Then I figured that .. is also a valid syntax in pretty much every OS, so I changed it to:
sys.path.append(os.path.dirname(__file__) + '/../CommonFolder')
My questions:
Are my assumptions above correct, and will the code run correctly on every OS?
In my last change, I essentially add a slightly longer path to the system paths. More precisely: FullPath/TestFolder/../CommonFolder instead of FullPath/CommonFolder. Is there any runtime impact to this? I suppose every import statement might execute slightly slower, but even so, that would be minor. Is there any good reason not to do it this way?
If you're writing code to span multiple operating systems, it's best not to construct the paths yourself. Between Linux and Windows, you immediately run into the forward- versus backward-slash issue, just as an example.
I'd recommend looking into the Python pathlib library. It handles generating paths for different operating systems.
https://docs.python.org/3/library/pathlib.html
This is a great blog about this subject and how to use the library:
https://medium.com/@ageitgey/python-3-quick-tip-the-easy-way-to-deal-with-file-paths-on-windows-mac-and-linux-11a072b58d5f
UPDATE:
Updating this with a more specific answer.
Regarding the directory paths, as long as you're building them with a utility such as pathlib rather than by hand, the paths you've created should be fine. Linux, macOS, and Windows all support relative paths (both macOS and Linux are Unix-based, of course).
As for whether it's efficient: unless you're frequently loading or reloading your source files dynamically (which is uncommon), modules are loaded into memory before the code runs, so there would be no performance impact from setting up the paths this way.
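A pathlib version of the same sys.path tweak might look like this (folder names taken from the question; this is a sketch, not the only way):

```python
import sys
from pathlib import Path

# Resolve TestFolder/../CommonFolder relative to this file; pathlib
# picks the right separators, and resolve() collapses the '..'.
common = Path(__file__).resolve().parent.parent / 'CommonFolder'
sys.path.append(str(common))
print(common)
```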
Use os.path.join() for an OS-independent path separator instead of /
Example: os.path.join(os.path.dirname(__file__),"..","CommonFolder")
Or you can make CommonFolder a Python package by placing an empty file named __init__.py inside it. After that you can simply import common in test.py as:
from CommonFolder import common

Is there a way to combine a python project codebase that spans across different files into one file?

The reason I want to do this is that I want to use the tool pyobfuscate to obfuscate my Python code, but pyobfuscate can only obfuscate one file.
I've answered your direct question separately, but let me offer a different solution to what I suspect you're actually trying to do:
Instead of shipping obfuscated source, just ship bytecode files. These are the .pyc files that get created, cached, and used automatically, but you can also create them manually by just using the compileall module in the standard library.
A .pyc file with its .py file missing can be imported just fine. It's not human-readable as-is. It can of course be decompiled into Python source, but the result is… basically the same result you get from running an obfuscater on the original source. So, it's slightly better than what you're trying to do, and a whole lot easier.
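A minimal sketch of compiling a source tree this way with the stdlib compileall module (the project directory here is a throwaway one created just for the example):

```python
import compileall
import pathlib
import tempfile

# Build a hypothetical project directory for the demonstration
project = pathlib.Path(tempfile.mkdtemp()) / 'mypackage'
project.mkdir()
(project / 'main.py').write_text('GREETING = "hello"\n')

# Compile every .py under the directory into cached .pyc files
ok = compileall.compile_dir(str(project), quiet=1)
pyc_files = list(project.glob('__pycache__/*.pyc'))
print(ok, [p.name for p in pyc_files])
```

The same thing is available from the command line as `python -m compileall mypackage/`.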
You can't compile your top-level script this way, but that's easy to work around. Just write a one-liner wrapper script that does nothing but import the real top-level script. If you have if __name__ == '__main__': code in there, you'll also need to move that to a function, and the wrapper becomes a two-liner that imports the module and calls the function; but that's as hard as it gets. Alternatively, you could run pyobfuscate on just the top-level script, but really, there's no reason to do that.
In fact, many of the packager tools can optionally do all of this work for you automatically, except for writing the trivial top-level wrapper. For example, a default py2app build will stick compiled versions of your own modules, along with stdlib and site-packages modules you depend on, into a pythonXY.zip file in the app bundle, and set up the embedded interpreter to use that zipfile as its stdlib.
There are a definitely ways to turn a tree of modules into a single module. But it's not going to be trivial. The simplest thing I can think of is this:
First, you need a list of modules. This is easy to gather with the find command or a simple Python script that does an os.walk.
Then you need to use grep or Python re to get all of the import statements in each file, and use that to topologically sort the modules. If you only do absolute flat import foo statements at the top level, this is a trivial regex. If you also do absolute package imports, or from foo import bar (or from foo import *), or import at other levels, it's not much trickier. Relative package imports are a bit harder, but not that big of a deal. Of course if you do any dynamic importing, use the imp module, install import hooks, etc., you're out of luck here, but hopefully you don't.
Next you need to replace the actual import statements. With the same assumptions as above, this can be done with a simple sed or re.sub, something like import\s+(\w+) with \1 = sys.modules['\1'].
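For the simple flat-import case, that substitution is a one-liner with re.sub:

```python
import re

# A module body that uses a flat, top-level import
src = "import foo\nprint(foo.bar)"

# Rewrite the import statement into a sys.modules lookup
out = re.sub(r"import\s+(\w+)", r"\1 = sys.modules['\1']", src)
print(out)  # foo = sys.modules['foo'] / print(foo.bar)
```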
Now, for the hard part: you need to transform each module into something that creates an equivalent module object dynamically. This is the hard part. I think what you want to do is to escape the entire module code so that it can put into a triple-quoted string, then do this:
import sys
import types
mod_globals = {}
exec('''
# escaped version of original module source goes here
''', mod_globals)
mod = types.ModuleType(module_name)
mod.__dict__.update(mod_globals)
sys.modules[module_name] = mod
Now just concatenate all of those transformed modules together. The result will be almost equivalent to your original code, except that it's doing the equivalent of import foo; del foo for all of your modules (in dependency order) right at the start, so the startup time could be a little slower.
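Here is a self-contained sketch of one such transformed module (the module name and source are made up for illustration):

```python
import sys
import types

module_name = 'generated_greetings'  # hypothetical module name
source = '''
def greet(name):
    return 'Hello, ' + name
'''

# Build a module object from source at runtime and register it,
# exactly as sketched above
mod_globals = {}
exec(source, mod_globals)
mod = types.ModuleType(module_name)
mod.__dict__.update(mod_globals)
sys.modules[module_name] = mod

# A later plain import now finds the pre-registered module
import generated_greetings
print(generated_greetings.greet('world'))  # Hello, world
```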
You can make a tool that:
Reads through your source files and puts all identifiers in a set.
Subtracts all identifiers from recursively searched standard- and third party modules from that set (modules, classes, functions, attributes, parameters).
Subtracts some explicitly excluded identifiers from that set as well, as they may be used in getattr/setattr/exec/eval
Replaces the remaining identifiers by gibberish
Or you can use this tool I wrote that does exactly that.
To obfuscate multiple files, use it as follows:
For safety, backup your source code and valuable data to an off-line medium.
Put a copy of opy_config.txt in the top directory of your project.
Adapt it to your needs according to the remarks in opy_config.txt.
This file only contains plain Python and is exec’ed, so you can do anything clever in it.
Open a command window, go to the top directory of your project and run opy.py from there.
If the top directory of your project is e.g. ../work/project1 then the obfuscation result will be in ../work/project1_opy.
Further adapt opy_config.txt until you’re satisfied with the result.
Type ‘opy ?’ or ‘python opy.py ?’ (without the quotes) on the command line to display a help text.
I think you can try using the find command with -exec option.
you can execute all python scripts in a directory with the following command.
find . -name "*.py" -exec python {} ';'
Wish this helps.
EDIT:
Oh sorry, I overlooked that if you obfuscate the files separately they may not run properly, because the tool renames functions to different names in different files.

viewing files in python?

I am creating a sort of "command line" in Python. I have already added a few functions, such as changing login/password, executing, etc. But is it possible to browse the files in the directory that the main file is in with a command/module, or will I have to write the module myself and use the import command? Same question for changing directories to view.
Browsing files is as easy as using the standard os module. If you want to do something with those files, that's entirely different.
import os
all_files = os.listdir('.') # gets all files in current directory
To change directories you can issue os.chdir('path/to/change/to'). In fact there are plenty of useful functions found in the os module that facilitate the things you're asking about. Making them pretty and user-friendly, however, is up to you!
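As a rough illustration, a toy dispatcher for such a homemade command line might wire those os functions up like this (the command names are made up for the example):

```python
import os

def run(cmd):
    """Dispatch a one-line command to the matching os call."""
    parts = cmd.split()
    if not parts:
        return ''
    if parts[0] == 'ls':
        return os.listdir('.')          # list the current directory
    if parts[0] == 'cd' and len(parts) > 1:
        os.chdir(parts[1])              # change the viewed directory
        return os.getcwd()
    return 'unknown command: ' + parts[0]

print(run('ls'))
```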
I'd like to see someone write a semantic file browser, i.e. one that auto-generates tags for files according to their content and then allows views and searching accordingly.
Think about it... take an MP3, look up the lyrics, run it through Zemanta, bam! A PDF file, an OpenOffice file, etc. That'd be pretty kick-butt! Probably fairly intensive too, but it'd be pretty dang cool!
Cheers,
-C
