I am attempting to run code downloaded from GitHub which uses the library "ee" (Google Earth Engine). Git Bash is giving me an error:
ModuleNotFoundError: No module named 'fcntl'
fcntl is a module used by the Google Earth Engine library. I have Windows, and it seems Linux is required. I was directed to add a stub module (fcntl) to my PYTHONPATH. Any other suggestions for this error would be helpful as well! The code I intend to add to PYTHONPATH is below.
def fcntl(fd, op, arg=0):
    return 0

def ioctl(fd, op, arg=0, mutable_flag=True):
    if mutable_flag:
        return 0
    else:
        return ""

def flock(fd, op):
    return

def lockf(fd, operation, length=0, start=0, whence=0):
    return
First, this is probably not going to work for you.
You can't turn Windows into Linux just by adding modules to your Python library. The reason you don't have the fcntl module on your path is that fcntl isn't included on Windows. And the reason it isn't included on Windows is that the Windows OS doesn't support the syscalls that module wraps, or anything close enough to reasonably emulate them.
If you have code that requires fcntl, that code cannot run on Windows (unless you do some significant work to port it to not require fcntl in the first place).
If you have code that doesn't require fcntl but uses it anyway, or if you just need something for temporary development purposes so you can catch and fix file sharing errors while porting the code to not require fcntl, then you can use msoliman's dummy code, which I'll explain how to do below. But you seem to be expecting it to do magic, and it won't do that.
You may not be sure which case you're in. Maybe you're using code that uses other code that uses other code that uses fcntl in some scenarios but not others; it may not actually need fcntl for any of the things you're actually trying to do with it.
If you want to test that, you can take msoliman's dummy code, and change each function body to this:
raise RuntimeError('Oops, using fcntl!')
Then run the program and see if it fails with that error. If not, you don't actually need fcntl after all. (Or at least you don't need it for any of the things you tested—it's always possible that some other thing you need to do with the app that you didn't think to test will need it.)
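Put together, the fail-fast version of the stub would look something like this (same dummy signatures as above; the file must be named fcntl.py so the import picks it up):

# fcntl.py -- fail-fast stub: any real use of fcntl aborts loudly
def fcntl(fd, op, arg=0):
    raise RuntimeError('Oops, using fcntl!')

def ioctl(fd, op, arg=0, mutable_flag=True):
    raise RuntimeError('Oops, using fcntl!')

def flock(fd, op):
    raise RuntimeError('Oops, using fcntl!')

def lockf(fd, operation, length=0, start=0, whence=0):
    raise RuntimeError('Oops, using fcntl!')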
If your code actually needs fcntl, and you don't want to/can't port that code to Windows code that uses Win32 API calls (or a cross-platform library like portalocker), then what you probably need to do is install Linux and run the program there.
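For the porting route, here is a rough sketch of what file locking looks like with the third-party portalocker package (assumed installed via pip install portalocker; it wraps fcntl on Unix and the Win32 locking calls on Windows):

import portalocker

with open('shared.txt', 'a') as f:
    portalocker.lock(f, portalocker.LOCK_EX)  # exclusive lock, cross-platform
    f.write('exclusive access\n')
# the lock goes away when the file is closed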
There are multiple ways to run Linux on top of Windows, rather than instead of Windows. For example, you could install Docker for Windows and then build a Linux Docker container with the app. Or you could use VMware Player to, in effect, run a Linux image as an application under Windows, and then do your work inside that image. And so on.
Finally, msoliman's "Place this module in your PYTHONPATH" is a little misleading.
What you actually need to do is get it into your sys.path. PYTHONPATH is just one way of doing that, and probably not the one you want here.
The options are:
Just put it in the same directory as your script. As the docs say, "As initialized upon program startup, the first item of this list, path[0], is the directory containing the script that was used to invoke the Python interpreter."
Put it in your user or system site packages, or some other directory that's already on your default sys.path. You can import sys; print(sys.path) to get a list of these directories. If you see something inside your home directory, that's a good place to put it; if not, look for something with site-packages in the name.
Put it in some other directory somewhere else, and set the PYTHONPATH environment variable to the full path of that directory. You can set an environment variable in the Windows cmd command prompt by writing SET PYTHONPATH=C:\Path\To\Directory. This will only persist as long as the current command prompt window. If you want to set it permanently, there's a setting somewhere in Control Panel (it changes with each Windows version; Super User should have good up-to-date answers for each version) where you can set System and User environment variables. Any User environment variable will take effect in every new command prompt window you open from now on.
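Whichever option you choose, you can verify it worked with a quick check in the interpreter you intend to use (a minimal sanity check, assuming the stub file is named fcntl.py):

import sys
print(sys.path)   # the stub's directory should appear somewhere in this list
import fcntl      # should now resolve to the dummy module instead of failing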
Related
From what I've read, any changes to the environment variables in a Python instance are only available within that instance, and disappear once the instance is closed. Is there any way to make them stick by committing them to the system?
The reason I need to do this is because at the studio where I work, tools like Maya rely heavily on environment variables to configure paths across multiple platforms.
My test code is:
import os
os.environ['FAKE'] = 'C:\\'
Opening another instance of Python and requesting os.environ['FAKE'] yields a KeyError.
NOTE: Portability will be an issue, but the small API I'm writing will be able to check OS version and trigger different commands if necessary.
That said, I've gone the route of using the Windows registry technique and will simply write alternative methods that will call shell scripts on other platforms as they become requirements.
You can use SETX at the command line.
By default these actions go on the USER environment variables. To set and modify SYSTEM variables, use the /M flag:
import os

env_var = "BUILD_NUMBER"
env_val = "3.1.3.3.7"
# /M writes to the SYSTEM (machine) variables and requires an elevated prompt;
# either way, SETX only affects processes started after the change
os.system("SETX {0} {1} /M".format(env_var, env_val))
make them stick by committing them to the system?
I think you are a bit confused here. There is no 'system' environment. Each process has its own environment as part of its memory. A process can only change its own environment, though it can set the initial environment for processes it creates.
If you really do think you need to set environment variables for the system, you will need to change them in the location they are initially loaded from, such as the registry on Windows or your shell configuration file on Linux.
Under Windows it's possible for you to make changes to environment variables persistent via the registry with this recipe, though it seems like overkill.
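For illustration, a minimal sketch of the registry route using only the standard winreg module (user-level variables live under HKCU\Environment; note the change only shows up in newly started processes, and Explorer needs a WM_SETTINGCHANGE broadcast to notice it):

import winreg

# open HKCU\Environment for writing and store a user-level variable
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, 'Environment', 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, 'FAKE', 0, winreg.REG_SZ, 'C:\\')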
To echo Brian's question, what are you trying to accomplish? There is probably an easier way.
It seems there is a simpler solution for Windows:
import subprocess
subprocess.call(['setx', 'Hello', 'World!'], shell=True)
I don't believe you can do this; there are two work-arounds I can think of.
The os.putenv function sets the environment for processes you start, e.g. with os.system, popen, etc. Depending on what you're trying to do, perhaps you could have one master Python instance that sets the variable and then spawns new instances.
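A minimal sketch of that approach (child.py is a hypothetical script; the parent's own environment stays untouched):

import os
import subprocess

child_env = os.environ.copy()   # start from the current environment
child_env['FAKE'] = 'C:\\'      # add/override variables for the child only
subprocess.call(['python', 'child.py'], env=child_env)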
You could run a shell script or batch file to set it for you, but that becomes much less portable. See this article:
http://code.activestate.com/recipes/159462/
Think about it this way.
You're not setting shell environment variables.
You're spawning a subshell with some given environment variable settings; this subshell runs your application with the modified environment.
According to this discussion, you cannot do it. What are you trying to accomplish?
You are forking a new process, and you cannot change the environment of the parent process, just as you cannot when you start a new shell process from the shell.
You might want to try the Python Win32 Extensions, developed by Mark Hammond, which are included in ActivePython (or can be installed separately). You can learn how to perform many Windows-related tasks in Hammond's and Robinson's book.
Using PyWin32 to access windows COM objects, a Python program can use the Environment Property (a collection of environment variables) of the WScript.Shell object.
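A rough sketch of what that looks like (the Dispatch and Environment calls follow the WSH object model; I haven't verified the write path, so treat the item assignment as an assumption):

import win32com.client

shell = win32com.client.Dispatch('WScript.Shell')
user_env = shell.Environment('User')   # also accepts 'System', 'Process', 'Volatile'
print(user_env('PATH'))                # read a user-level variable
user_env['FAKE'] = 'C:\\'              # item assignment should persist it to the registry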
Try py-setenv, which will allow you to set variables via the registry:
python -m pip install py-setenv
From within Python? No, it can't be done!
If you are not bound to Python, you should consider using shell scripts (sh, bash, etc.). The "source" command allows you to run a script that modifies the environment and will "stick" like you want in the shell you "sourced" the script in. What's going on here is that the shell executes the script directly rather than creating a sub-process to execute it.
This will be quite portable - you can use Cygwin on Windows to do this.
In case someone needs this info: I realize this was asked 7 years ago, but even I forget how sometimes.
Yes, there is a way to make them "stick" in Windows. Simply go to Control Panel > System > Advanced system settings; when the System Properties window opens you should see a button for Environment Variables. The process for getting there is a little different depending on which OS version you're using (google it).
Click that button and the Environment Variables window will open. It has two panes; the top one should be "User variables for yourusername". Choose "New", then simply set the variable. For instance, one of mine is Database_Password = mypassword.
Then in your app you can access it like this: import os, then os.environ.get('Database_Password'). For example, password = os.environ.get('Database_Password') (note that pass itself can't be used as a variable name, since it's a reserved word in Python).
I have dev environment and production for running various Python scripts used in crontab.
In dev environment, I usually test scripts from commandline in script directory such as "python myscript.py".
Now, some scripts load config from JSON files in the same or subdirectory of script directory.
In dev, I can therefore refer to file just like that:
printconf = Config('printing.json')
However, once a script is ready for production, it is put in crontab, and crontab calls the scripts from the root directory, therefore breaking the line above.
Additionally, dev and production are obviously in different places in the filesystem, so I can't even use absolute paths, because they won't be the same.
As described in How can I find script's directory with Python?, I can use various methods to find the current file's directory. They mean extra processing, however, and I was wondering
whether any Python version has (or plans) an additional built-in way to say that a file must be found relative to the running script's directory? Something like __location__?
In essence, something that would work for file references like importing modules already does.
In addition, I have tried to add a global __location__ variable via sitecustomize.py.
sitecustomize.py:
if '__file__' in globals():
    import os
    __location__ = os.path.join(os.getcwd(), os.path.dirname(__file__))
But that doesn't work either, because:
- __location__ is not passed to the script,
- and __file__ refers to sitecustomize.py
First and foremost, your Dev and Prod environments should not differ too much. This means that for consistency's sake you should use the same setup (filesystem, libraries etc.) for both. This way most of your issues will vanish.
If you do this, then it's safe to use a hardcoded path.
Other options (these inevitably involve some processing, but that should not be a deal breaker):
Use os.getcwd() and __file__, as you already suggested.
Use a specific argument when running your scripts (e.g.: $SCRIPT_PATH/myscript.py PROD) and choose your paths based on this inside the script.
Use a dedicated config file just for script init, put it in the same place on both DEV and PROD (I would suggest /etc/$PROJECT_NAME), and run the script with a specific argument (as mentioned above; a sketch follows below).
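A minimal sketch of the argument-based approach (the names ENV, CONFIG_DIRS and /etc/myproject are made up for illustration; Config is the class from the question):

import os
import sys

ENV = sys.argv[1] if len(sys.argv) > 1 else 'DEV'
CONFIG_DIRS = {
    'DEV': os.path.dirname(os.path.abspath(__file__)),  # next to the script
    'PROD': '/etc/myproject',                           # fixed location on PROD
}
printconf = Config(os.path.join(CONFIG_DIRS[ENV], 'printing.json'))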
None of the above methods will prevent other issues related to consistency, which makes me underline that you should look into using Docker or Vagrant for your Dev setup (and/or Prod, if possible).
As I reviewed this question, it became clear that I provided probably too much of the wrong context and too little of the relevant one. As I tested the actual solution, I decided against rewriting the question and instead put the right context in the answer.
Most of the problems were due to the fact that I wanted to determine the path of file B relative to script A, when the actual call to use file B happened in a system-wide module C (called from script A), which lived in a completely different place, like this:
# module C (system-wide, in /usr.../site-packages)
import os
import sys

class Config:
    ...
    def load(self, file_name):
        # needs to be relative to the calling script's dir, not the module's!
        real_file_name = os.path.join(
            os.path.dirname(os.path.realpath(sys.argv[0])), file_name)
        ...

# script-A.py (in /var/scripts/projectA)
from C import Config

conf = Config()
conf.load('file-in-projectA-dir.json')
# finds /var/scripts/projectA/file-in-projectA-dir.json
# regardless of cwd
So this works both for:
cd /
python3 /var/scripts/projectA/script-A.py
# and:
cd /var/scripts/projectA/
python3 script-A.py
# etc.
As you can see, this only works because script-A is called from the command line as the first parameter. However, that is my main use case. I am not sure what would work otherwise.
I recently found a fix for Python getpass not working on Windows: Python not working in the command line of git bash
Or at least that was the last thing I remember about changing my python configurations. (This is for Python 3.6.1 on Windows 10)
Now I also use Python for other tasks that simply make subprocess calls to run several commands in the terminal:
go build ./folder/
mv ./src/ ./bin/
I get the error: go: GOPATH entry is relative; must be absolute: "/c/Users/OP/work". But I don't get it if I type go build ./src/folder myself.
I have GOPATH set to C:\work in Environment Variables. I have also tried it with a trailing ;.
Is there a way to reverse the python alias every time? And what exactly is happening when python is aliased to winpty?
I'm thinking that when I call go build directly, it runs with either my user or system environment, and when Python's subprocess calls it, it gets the other. In effect I have two GOPATH values, even though only one is set in the environment variables.
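One way to check which environment the subprocess actually inherits is to ask go itself from within Python and compare with what the shell reports:

import subprocess

# prints the GOPATH as seen by the go binary launched from Python;
# compare with the output of `go env GOPATH` typed directly in the terminal
print(subprocess.check_output(['go', 'env', 'GOPATH']).decode())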
Side Note: another recent change on GOPATH was changing it from C:/go because it couldn't be the same as GOROOT. That error popped up randomly for some reason. It worked with that setting for a while and I don't remember changing anything before except adding another import package on top of the many other ones already being used.
Update: with type python I get the result: python is aliased to 'winpty python.exe'. Therefore I tried to undo that with unalias python. The new result I get is: python is hashed (/c/Users/OP/AppData/Local/Programs/Python/Python36/python).
This fixed the go build command within Python's subprocess. However, that alias was a fix for another Python issue with using getpass package.
On top of my unalias python fix, I also discovered something interesting: when I changed the GOPATH environment variable from C:\work; to C:\go, all go commands still spat out the error go: GOPATH entry is relative; must be absolute: "". I got this same error (but with a different path) after the Windows 10 Fall Creators Update, so maybe it is related.
Simply closing MINGW and reopening it fixed the issue. So perhaps it was saving a copy of my environment variables and using that as a reference instead of the actual system properties.
I know this is not a popular question, but someone could benefit from my hours of investigating and debugging.
Under Windows you must use a Windows-style GOPATH, e.g. d:\code, and you should probably use the cmd shell and nothing else. Unfortunately Cygwin paths (and probably others too) no longer work, especially for go get reaching out to git.
Stick to Windows paths and the Windows shell.
For writing Python I currently use the excellent PyCharm IDE. I love the way it has code completion so that you often only need to type the first 2 letters and then hit enter.
For easy testing I am, of course, also often on the command line. The only thing is that I miss the convenient features of the IDE there. Why is there no code completion on the command line? And when I fire up a new Python interactive interpreter, why doesn't it remember commands I entered earlier (like, for example, sqlite3 does)?
So I searched around, but I can't find anything like it, or I'm simply not searching for the right words.
So my question: does anybody know of an improved and more convenient version of the Python interactive command-line interpreter? All tips are welcome!
bpython is one of the many choices for alternative interactive Python interpreters that sports both of the features you mentioned (tab completion and persistent readline history).
Another very commonly used one would be IPython, though I personally don't like it very much (just a personal preference; many people are very fond of it).
Last but not least you can also enable those features for the standard Python interpreter:
Tab completion: See the docs on the rlcompleter module.
Create a file ~/.pythonrc in your home directory containing this script:
try:
    import readline
except ImportError:
    print("Module readline not available.")
else:
    import rlcompleter
    readline.parse_and_bind("tab: complete")
This will try to import the readline module, and bind its default completion function to the tab key. In order to execute this script every time you start a Python interpreter, set the environment variable PYTHONSTARTUP to contain the path to this script. How you do this depends on your operating system - on Linux you could do it in your ~/.bashrc for example:
export PYTHONSTARTUP="/home/lukas/.pythonrc"
(The file doesn't need to be called .pythonrc or even be in your home directory - all that matters is that it's the same path you set in PYTHONSTARTUP)
Persistent history: See the .pythonrc file in Marius Gedminas's dotfiles. The concept is the same as above: You add the code that saves and loads the history to your ~/.pythonrc, and configure the PYTHONSTARTUP environment variable to contain the path to that script, so it gets executed every time you start a Python interpreter.
His script already contains the tab completion part. So since you want both, you could save his script called python to ~/.python and add the contents of his bashrc.python to your ~/.bashrc.
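For reference, here is a minimal persistent-history snippet along the same lines (essentially the pattern from the standard library's readline docs) that you could drop into your ~/.pythonrc:

import atexit
import os

try:
    import readline
except ImportError:
    print("Module readline not available.")
else:
    histfile = os.path.join(os.path.expanduser('~'), '.python_history')
    try:
        readline.read_history_file(histfile)   # load history from last session
    except OSError:                            # no history file yet
        pass
    atexit.register(readline.write_history_file, histfile)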
(Note: I’ve Linux in mind, but the problem may apply on other platforms.)
Problem: Linux doesn’t do suid on #! scripts nor does it activate “Linux capabilities” on them.
Why do we have this problem? Because while the kernel is setting up the interpreter to run the script, an attacker may have replaced that file. How? The formerly trusted suid/capability-enabled script file may be in a directory he has control over (e.g. he can delete the trusted file he doesn't own, or the file is actually a symbolic link he owns).
Proper solution: make the kernel allow suid/cap scripts if either a) it is clear that the caller has no power over the script file, or b) as a couple of other operating systems do, the script is passed as /dev/fd/x, referring to the originally kernel-opened trusted file.
Answer I’m looking for: for kernels which can’t do this (all Linux), I need a safe “now” solution.
What do I have in mind? A binary wrapper, which does what the kernel does not, in a safe way.
I would like to
hear about established wrappers for (Python) scripts that pass Linux capabilities and possibly suid from the script file to the interpreter to make them effective.
get comments on my wrapper proposed below
Problems with sudo: sudo is not a good wrapper, because it doesn't help the kernel avoid falling for the just-explained "script got replaced" trap ("man sudo" says so under Caveats).
Proposed wrapper
actually, I want a little program, which generates the wrapper
command line, e.g.: sudo suid_capability_wrapper ./script.py
script.py already has the suid bit and capabilities set (no function, just information)
the generator suid_capability_wrapper does
generate C(?) source and compile
name the compiled output: by default the result of basename script.py .py, or the -o argument
set the wrapper owner, group, suid like script.py
set the permitted capabilities like script.py, ignore inheritable and effective caps
warn if the interpreter (e.g. /usr/bin/python) does not have the corresponding caps in its inheritable set (this is a system limitation: there is no way to pass on capabilities without suid-root otherwise)
the generated code does:
check if file descriptors 0, 1 and 2 are open, abort otherwise (possibly add more checks for too crazy environment conditions)
if the target script path was compiled in as a relative path, determine self's location via /proc/self/exe
combine own path with relative path to the script to find it
check whether the target script's owner, group, permissions, caps, and suid are still like the original (compiled-in) [this is the only non-necessary safety check I want to include: otherwise I trust that script]
set the set of inherited capabilities equal to the set of permitted capabilities
execve() the interpreter similar to how the kernel does, but use the script-path we know, and the environment we got (the script should take care of the environment)
A bunch of notes and warnings may be printed by suid_capability_wrapper to educate the user about:
make sure nobody can manipulate the script (e.g. world writable)
be aware that suid/capabilities come from the wrapper, nothing cares about suid/xattr mounts for the script file
the interpreter (python) is execve()ed, it will get a dirty environment from here
it will also get the rest of the standard process environment passed through it, which is ... ... ... (read man-pages for exec to begin with)
use #!/usr/bin/python -E to immunize the python interpreter from environment variables
clean the environment yourself in the script or be aware that there is a lot of code you run as side-effect which does care about some of these variables
You don't want to use a shebang at all, on any file - you want to use a binary which invokes the Python interpreter, then tells it to start the script file for which you asked.
It needs to do three things:
Start a Python interpreter (from a trusted path, breaking chroot jails and so on). I suggest statically linking libpython and using the CPython API for this, but it's up to you.
Open the script file FD and atomically check that it is both suid and owned by root. Don't allow the file to be altered between the check and the execution - be careful.
Tell CPython to execute the script from the FD you opened earlier.
This will give you a binary which will execute all owned-by-root-and-suid scripts under Python only. You only need one such program, not one per script. It's your "suidpythonrunner".
As you surmised, you must clear the environment before running Python. LD_LIBRARY_PATH is taken care of by the kernel, but PYTHONPATH could be deadly.
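For illustration only, here is a rough Python sketch of steps 2 and 3 (a real suidpythonrunner would be a compiled binary, since an interpreted wrapper reintroduces the very problem being solved; the checks shown are deliberately minimal):

import os
import stat
import sys

fd = os.open(sys.argv[1], os.O_RDONLY)
os.set_inheritable(fd, True)        # keep the fd alive across the exec below
st = os.fstat(fd)                   # fstat the *open* fd: same file we will run
if st.st_uid != 0 or not (st.st_mode & stat.S_ISUID):
    sys.exit('refusing: script must be root-owned and suid')
os.environ.clear()                  # drop PYTHONPATH and other deadly leftovers
os.execv('/usr/bin/python3', ['python3', '/dev/fd/%d' % fd])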