I'm stuck between a rock and a hard place.
I have been given a very restricted Python 2/3 environment where both the os and subprocess libraries are banned. I am testing file upload functionality using Python Selenium; creating the file is straightforward, however, when I use the method driver.send_keys(xyz), send_keys expects a full path rather than just a file name. Any suggestions on how to work around this?
I do not know if it would work in your very restricted Python 2/3 environment, but you might try the following:
Create an empty file, say empty.py, like so:
with open('empty.py', 'w') as f:
    pass
Then do:
import empty
and:
pathtofile = empty.__file__
print(pathtofile)
This exploits the fact that an empty text file is a legal Python module (remember to pick a name that is not already in use) and that the import machinery generally sets the __file__ dunder, though this is not required, so you might end up with None. Keep in mind that if it is not None, then it is the path to said file (empty.py), so you would need to process it further to get the containing directory itself.
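Here is a minimal sketch of how you might turn that into a full path for send_keys without the os module. The separator detection and the file names are assumptions for illustration, and __file__ is not guaranteed to be an absolute path on every interpreter:

import empty

module_path = empty.__file__                 # e.g. '/tmp/work/empty.py'
sep = '\\' if '\\' in module_path else '/'   # crude separator guess, since os.sep is unavailable
directory = module_path.rsplit(sep, 1)[0] if sep in module_path else '.'
upload_path = directory + sep + 'upload.txt' # hypothetical file created next to empty.py
print(upload_path)                           # pass this to send_keys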
With no way of using the os module, it would seem you're out of luck unless the placement of your script is static (i.e. you define the current working directory as a constant) and you then handle all the path/string operations within that directory yourself.
You won't be able to explore what's in the directory, but you can keep tabs on any files you create and just store the paths yourself. It will be tedious, but there's no reason why it shouldn't work; a sketch of this bookkeeping follows below.
I don't think you'll be able to delete files, but you should be able to clear their contents.
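A rough sketch of that bookkeeping, assuming a hard-coded base directory (the constant and file names below are hypothetical):

BASE_DIR = '/home/tester/uploads'   # assumed static location of the script's working area

created_files = []                  # keep tabs on everything we create ourselves

def create_file(name, contents=''):
    path = BASE_DIR + '/' + name
    with open(path, 'w') as f:
        f.write(contents)
    created_files.append(path)
    return path

def clear_file(path):
    # No deleting without os, but truncating works: open for writing and write nothing.
    with open(path, 'w'):
        pass

upload_path = create_file('upload.txt', 'hello')
# element.send_keys(upload_path)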
I am trying to make an adventure game in Python, and I need to know how to load an image that is in the same folder as the code, using pygame. How do I do it? I have tried
Character = pygame.image.load('Resources/MainCharFront.png')
but I'm getting an error:
pygame.error: Couldn't open Resources/MainChar_Front.png
I really need it to be in the same folder, because I am often switching devices and my file system is always different.
If you have structured your code as a Python package (which you should), you can use the pkg_resources module to access resource files like images, etc, that are part of your project.
For example, if I have the following layout:
./mypackage/__init__.py
./mypackage/main.py
./mypackage/images/character.jpg
I can write in mypackage/main.py:
import pygame
import pkg_resources
Character = pygame.image.load(
    pkg_resources.resource_filename('mypackage', 'images/character.jpg'))
You can see this in action below:
>>> import mypackage.main
pygame 1.9.6
Hello from the pygame community. https://www.pygame.org/contribute.html
>>> mypackage.main.Character
<Surface(359x359x24 SW)>
>>>
In your comment you say that the image is in the same directory as your code; however, the path you are showing implies that you are trying to load it from a sub-directory called Resources:
Character = pygame.image.load('Resources/MainCharFront.png')
So you can likely fix your problem by removing that from the path and just using:
Character = pygame.image.load('MainCharFront.png')
However, that is not the approach I would recommend. You are better off keeping the resources in a separate sub-directory like Resources to try and keep things organized. You said that you want to use a flat structure with everything in one folder because you move the game around between different systems with different file systems. I will assume from that that you are having issues with the path separator on these different systems. That is fairly easy to handle, though.
@larsks has suggested one good approach. You do not have to go quite that far, though, to still be able to keep structure in your resources.
The easy way to deal with different path separators on different file systems is to use os.path.join() to link your path components with the file system appropriate separator, like this:
Character = pygame.image.load(os.path.join('Resources', 'MainCharFront.png'))
This will allow you to move between Windows, Linux, etc. without having to flatten your structure. os.path.join() can take multiple path components as arguments, not just 2, so you can have as much hierarchy as you need. Just break up the path string into separate strings where the slashes would be, like this:
os.path.join('Resources', 'images', 'MainCharFront.png')
You can find the docs for os.path.join() here.
Just to be overly clear, the os.path.join() method is not the same as the standard string join() method (which joins strings using the separator you tell it to). The os.path.join() method determines the separator for you based on the system it is being run on.
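A quick illustration of that difference (the paths are made up; the first output is shown for a POSIX system):

import os

# Lets the OS pick the separator:
print(os.path.join('Resources', 'images', 'MainCharFront.png'))
# -> 'Resources/images/MainCharFront.png' on Linux/macOS, 'Resources\images\MainCharFront.png' on Windows

# Uses exactly the separator you give it, regardless of the OS:
print('/'.join(['Resources', 'images', 'MainCharFront.png']))
# -> 'Resources/images/MainCharFront.png' everywhere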
I have a variable in my python script that holds the path to a file as a string. Is there a way, through a python module, etc, to keep the file path updated if the file were to be moved to another destination?
Short answer: no, not with a string.
Long answer: If you want to use only a string to record the location of this file, you're probably out of luck unless you use other tools to find the location of the file, or record whenever it moves - I don't know what those tools are.
You don't give a lot of info in your question about what you want this variable for; as @DeepSpace says in his comment, if you're trying to make this string follow the file between different runs of this program, then you'd be better off making it an argument to the script. If, however, you expect the file to move sometime during the execution of your program, you might be able to use a file object to keep track of it. Or, rather, instead of keeping the file path in memory, keep a file object in memory instead (the kind you would get by using the open() function), and just never close that file until the program terminates. You can use seek() to return to the start of the file if you need to read it multiple times. Problems with this include that it keeps the file handle open for the program's whole lifetime, and it's absolutely not a best practice.
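A minimal sketch of that idea, with a hypothetical file name; whether an open handle survives a move is filesystem-dependent, and the caveats above still apply:

f = open('data.txt', 'r')   # open once, keep the object around instead of the path

def read_all_again(handle):
    handle.seek(0)          # rewind before re-reading
    return handle.read()

first = read_all_again(f)
second = read_all_again(f)  # still works even if the original path string is stale
# f is deliberately never closed until the program exits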
TL;DR
Your best bet is probably to go with a solution like @DeepSpace mentioned, where you call your script with a command-line parameter, which then forces the user to input a valid path.
This is actually a really good question, but unfortunately, speaking purely in Python terms, this is impossible.
No Python module will be able to dynamically link a variable to a path on the file system. You will need an external script or routine to update whatever data structure holds the path value.
Even then, the name of the file could change, but not its location. Here is what I mean by that.
Let's say you wrapped that file in a folder containing only that specific file. Since you now know that its location is fixed (theoretically speaking), you can have another Python script/routine that reads the file name and stores it in a text file. Your other script could then go and get that file name (assuming your routine syncs that text file on a regular basis). But as soon as the location of the file changes, how can you possibly know where it is now? It has to be manually hard-coded somewhere to get something close to the behavior you're expecting.
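Purely to illustrate that wrapper-folder idea (the folder and file names here are made up):

import os

WRAPPER_DIR = 'tracked'           # hypothetical folder containing exactly one file
NAME_CACHE = 'current_name.txt'   # text file the sync routine keeps up to date

def sync_name():
    # Run this periodically: record whatever the wrapped file is currently called.
    names = os.listdir(WRAPPER_DIR)
    with open(NAME_CACHE, 'w') as f:
        f.write(names[0] if names else '')

def current_path():
    with open(NAME_CACHE) as f:
        name = f.read().strip()
    return os.path.join(WRAPPER_DIR, name) if name else None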
Note that my example is not in any way a go-to solution for your problem. I'm actually trying to underline the shortcomings of such a feature.
The reason I want to do this is that I want to use the tool pyobfuscate to obfuscate my Python code. But pyobfuscate can only obfuscate one file.
I've answered your direct question separately, but let me offer a different solution to what I suspect you're actually trying to do:
Instead of shipping obfuscated source, just ship bytecode files. These are the .pyc files that get created, cached, and used automatically, but you can also create them manually by just using the compileall module in the standard library.
A .pyc file with its .py file missing can be imported just fine. It's not human-readable as-is. It can of course be decompiled into Python source, but the result is… basically the same result you get from running an obfuscator on the original source. So it's slightly better than what you're trying to do, and a whole lot easier.
You can't compile your top-level script this way, but that's easy to work around. Just write a one-liner wrapper script that does nothing but import the real top-level script. (If you have if __name__ == '__main__': code in there, you'll also need to move that into a function, and the wrapper becomes a two-liner that imports the module and calls the function… but that's as hard as it gets.) Alternatively, you could run pyobfuscate on just the top-level script, but really, there's no reason to do that.
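A minimal sketch of that layout, with hypothetical package and function names (the -b flag to compileall writes the .pyc files next to the sources on Python 3; on Python 2 that is the default behaviour):

# Compile everything under mypackage/ to .pyc files:
#     python -m compileall -b mypackage/
#
# run.py - the trivial, un-obfuscated wrapper you ship alongside the compiled files:
from mypackage import main   # hypothetical package/module

main.run()                   # assumes the old __main__ block was moved into run()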
In fact, many of the packager tools can optionally do all of this work for you automatically, except for writing the trivial top-level wrapper. For example, a default py2app build will stick compiled versions of your own modules, along with stdlib and site-packages modules you depend on, into a pythonXY.zip file in the app bundle, and set up the embedded interpreter to use that zipfile as its stdlib.
There are definitely ways to turn a tree of modules into a single module. But it's not going to be trivial. The simplest thing I can think of is this:
First, you need a list of modules. This is easy to gather with the find command or a simple Python script that does an os.walk.
Then you need to use grep or Python re to get all of the import statements in each file, and use that to topologically sort the modules. If you only do absolute flat import foo statements at the top level, this is a trivial regex. If you also do absolute package imports, or from foo import bar (or from foo import *), or import at other levels, it's not much trickier. Relative package imports are a bit harder, but not that big of a deal. Of course if you do any dynamic importing, use the imp module, install import hooks, etc., you're out of luck here, but hopefully you don't.
Next you need to replace the actual import statements. With the same assumptions as above, this can be done with a simple sed or re.sub, something like replacing import\s+(\w+) with \1 = sys.modules['\1'].
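Roughly, that substitution could look like this (handling only the flat, absolute import foo form the answer assumes; the sample source is made up):

import re

source = "import foo\nimport bar\n\nprint(foo.x + bar.y)\n"

rewritten = re.sub(r'^import\s+(\w+)$',
                   r"\1 = sys.modules['\1']",
                   source,
                   flags=re.MULTILINE)
print(rewritten)
# foo = sys.modules['foo']
# bar = sys.modules['bar']
# ...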
Now, for the hard part: you need to transform each module into something that creates an equivalent module object dynamically. This is the hard part. I think what you want to do is escape the entire module code so that it can be put into a triple-quoted string, then do this:
import sys
import types

module_name = 'foo'   # name of the original module being embedded
mod_globals = {}
exec('''
# escaped version of original module source goes here
''', mod_globals)
mod = types.ModuleType(module_name)
mod.__dict__.update(mod_globals)
sys.modules[module_name] = mod
Now just concatenate all of those transformed modules together. The result will be almost equivalent to your original code, except that it's doing the equivalent of import foo; del foo for all of your modules (in dependency order) right at the start, so the startup time could be a little slower.
You can make a tool that:
Reads through your source files and puts all identifiers in a set.
Subtracts all identifiers from recursively searched standard- and third-party modules from that set (modules, classes, functions, attributes, parameters).
Subtracts some explicitly excluded identifiers from that set as well, as they may be used in getattr/setattr/exec/eval.
Replaces the remaining identifiers with gibberish.
Or you can use this tool I wrote that does exactly that.
To obfuscate multiple files, use it as follows:
For safety, backup your source code and valuable data to an off-line medium.
Put a copy of opy_config.txt in the top directory of your project.
Adapt it to your needs according to the remarks in opy_config.txt.
This file only contains plain Python and is exec’ed, so you can do anything clever in it.
Open a command window, go to the top directory of your project and run opy.py from there.
If the top directory of your project is e.g. ../work/project1 then the obfuscation result will be in ../work/project1_opy.
Further adapt opy_config.txt until you’re satisfied with the result.
Type ‘opy ?’ or ‘python opy.py ?’ (without the quotes) on the command line to display a help text.
I think you can try using the find command with the -exec option.
You can execute all Python scripts in a directory with the following command.
find . -name "*.py" -exec python {} ';'
Hope this helps.
EDIT:
Oh sorry, I overlooked that if you obfuscate files separately they may not run properly, because the tool renames functions to different names in different files.
Disclaimer: I have searched generically (Google) and here for some information on this topic, but most of the answers are either very old or don't seem to quite make sense to me, so I apologize in advance if this seems simple or uninformed.
Problem: My app accepts command line input which may be either a path or a file and I need to determine several things about it.
Is it a path or a file?
Is it relative or absolute?
Is it readable and/or writable? (I need both read and write test results, ignoring the possibility of a race condition.)
One caveat: while a
try:
    file = open(filename, 'w')
except OSError as e:
    # miscellaneous error handling code here
    pass
would obviously tell me whether the parameter (filename in the above example) existed, was writable, etc., I do not understand enough about exception codes to know how to interpret the resulting exception. It also wouldn't provide the relative/absolute information.
Assuming that there is no one method to do this then I would need to know three things:
How to determine relative / absolute
Is it pointing to a file or a directory
can the EUID of the program read the location, and same for write.
I am seeking to learn from the information I gather here; I am new to Python and have bitten off a significant project. I have mastered all of it except this part. Any help would be appreciated. (Pointers to good help sites welcome! Except docs.python.org, that one's bookmarked already ;-) )
Here are your answers. The documentation specifies that the following works for both Windows and Linux.
How to determine relative / absolute
os.path.isabs returns True if the path is absolute, False if not.
Is it pointing to a file or a directory
Similarly, use os.path.isdir to find out if the path is a directory or not.
You can use os.path.isfile(path) to find out if the path is a file or not.
can the EUID of the program read the location, and same for write.
You can use os.access(path, mode) to know if operations requiring permissions specified by mode are possible on file specified by path or not.
P.S. This will not work for files being accessed over the network. You can use os.stat; this is the right way to get all the information. However, it is a more costly call, and hence you should try to get all the info in one call. To interpret the results, you can use the stat module.
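Putting those calls together (the path below is just a placeholder for whatever the user typed on the command line):

import os

target = 'some/path/or/file.txt'      # hypothetical command-line argument

print(os.path.isabs(target))          # absolute (True) or relative (False)?
print(os.path.isdir(target))          # existing directory?
print(os.path.isfile(target))         # existing regular file?
print(os.access(target, os.R_OK))     # readable by the effective UID?
print(os.access(target, os.W_OK))     # writable by the effective UID?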
Seems like os.path would take care of most of these needs with isabs(), isfile(), isdir(), etc. Off the top of my head I can't think of a function that will give a simple True/False for read/write access for a given file, but one way to tackle this might be to use the os module's stat function and have a look at the file's permissions and owner.
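A small sketch of that stat-based approach, with a made-up path (the permission check here looks only at the owner-read bit, as a simple example):

import os
import stat

st = os.stat('some_file.txt')              # hypothetical path
print(stat.S_ISDIR(st.st_mode))            # is it a directory?
print(bool(st.st_mode & stat.S_IRUSR))     # owner-readable bit set?
print(st.st_uid == os.geteuid())           # owned by the effective UID? (Unix only)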
I am creating a sort of "command line" in Python. I have already added a few functions, such as changing login/password, executing, etc. But is it possible to browse files in the directory that the main file is in with a command/module, or will I have to make the module myself and use the import command? Same thing with changing directories to view, too.
Browsing files is as easy as using the standard os module. If you want to do something with those files, that's entirely different.
import os
all_files = os.listdir('.') # gets all files in current directory
To change directories you can issue os.chdir('path/to/change/to'). In fact there are plenty of useful functions found in the os module that facilitate the things you're asking about. Making them pretty and user-friendly, however, is up to you!
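For instance, a tiny sketch of 'ls' and 'cd' commands for such a home-made shell might look like this (the command names and dispatch are assumptions):

import os

def handle(command):
    parts = command.split()
    if not parts:
        return
    if parts[0] == 'ls':
        for name in os.listdir('.'):      # list the current directory
            print(name)
    elif parts[0] == 'cd' and len(parts) > 1:
        os.chdir(parts[1])                # change the directory being viewed
        print('now in', os.getcwd())

handle('ls')
handle('cd ..')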
I'd like to see someone write a semantic file browser, i.e. one that auto-generates tags for files according to their contents and then allows views and searching accordingly.
Think about it... take an MP3, look up the lyrics, run it through Zemanta, bam! A PDF file, an OpenOffice file, etc. That'd be pretty kick-butt! Probably fairly intensive too, but it'd be pretty dang cool!
Cheers,
-C