I have a linux proc entry in /proc/sys/fs/offs/ts/enable which toggles a flag in a custom kernel module. Setting the value to 1 will enable a mode in the module, setting to 0 will disable that mode.
In bash, to enable the mode, I would simply do
echo 1 > /proc/sys/fs/offs/ts/enable
And to disable it,
echo 0 > /proc/sys/fs/offs/ts/enable
I have a daemon written in Python 2.7 which will look for some external event trigger, and when that event fires, should enable or disable the feature in the kernel module. The daemon is run with root privileges, so I shouldn't run into any kind of permissions issues.
Is there a recommended way of setting this value from python?
For example, say my function right now looks like this.
def set_mode(enable=True):
    with open('/proc/sys/fs/offs/ts/enable', 'r') as p:
        if enable:
            p.write(1)
        else:
            p.write(0)
        p.flush()
There are a couple of problems with your code.
Firstly, you want to write to the file, but you're opening it in read mode.
Secondly, .write expects string data, not an integer.
We can get rid of the if test by exploiting the fact that False and True have integer values of 0 & 1, respectively. The code below uses the print function rather than .write because print can convert the integer returned by int(enable) to a string. Also, print appends a newline (unless you tell it not to via the end argument), so this way the Python code performs the same action as your Bash command lines.
from __future__ import print_function  # needed on Python 2.7 to use the print function

def set_mode(enable=True):
    with open('/proc/sys/fs/offs/ts/enable', 'w') as p:
        print(int(enable), file=p)
If you want to do it with .write, change the print line to:
p.write(str(int(enable)) + '\n')
There's a way to do that conversion from boolean to string in one step: use the boolean to index into a string literal:
'01'[enable]
It's short & fast, but some would argue that it's a little cryptic to use booleans as indices.
Linux exposes /proc as, well, a filesystem. That means you operate on its entries the same way you would on any other file. Your suggested function is basically perfect regarding how to access /proc, but PM 2Ring's recommendations are definitely valid.
Since this is low-level code that isn't intended to be portable, I would use the os module. Its open, write and close functions are almost direct wrappers of their C counterparts.
More like C means fewer surprises!
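For instance, a minimal sketch along those lines (same path and flag semantics as in the question; treat it as an illustration rather than tested code):

import os

def set_mode(enable=True):
    # os.open/os.write/os.close map almost directly onto open(2)/write(2)/close(2)
    fd = os.open('/proc/sys/fs/offs/ts/enable', os.O_WRONLY)
    try:
        os.write(fd, b'1\n' if enable else b'0\n')
    finally:
        os.close(fd)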
Related
I need to run a .tcl file from the command line, which gets invoked via a Python script. However, a single line in that .tcl file needs to change based on input from the user. For example:
info = input("Prompt for the user: ")
Now I need the string contained in info to replace one of the lines in .tcl file.
Rewriting the script is one of the trickier options to pick. It makes things harder to audit and it is tremendously easy to make a mess of. It's not recommended at all unless you take special steps, such as factoring out the bit you set into its own file:
File that you edit, e.g., settings.tcl (simple enough that it is pretty trivial to write and you can rewrite the whole lot each time without making a mess of it)
set value "123"
Use of that file:
set value 0
if {[file readable settings.tcl]} {
    source settings.tcl
}
puts "value is $value"
More sophisticated versions of that are possible with safe interpreters and language profiling… but they're only really needed when the settings and the code are in different trust domains.
That said, there are other approaches that are usually easier. If you are invoking the Tcl script by running a subprocess, the easiest ways to pass an arbitrary parameter are to use one of the following (a Python-side sketch follows this list):
A command line argument. These can be read on the Tcl side from the $argv global, which holds a list of all arguments after the script name. (The lindex and lassign commands tend to be useful here, e.g., set value [lindex $argv 0].)
An environment variable. These can be read on the Tcl side from the env global array, e.g., set value $env(MyVarName)
On standard input. A line can be read from that on the Tcl side using set line [gets stdin].
In more complex cases, you'd pass values in their own files, or by writing them into something like an SQLite database, or… well, there's lots of options.
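On the Python side, the first two options might look roughly like this (a minimal sketch; the script name program.tcl and the variable name MYVALUE are placeholders):

import os
import subprocess

info = input("Prompt for the user: ")

# Option 1: pass the value as a command line argument (read from $argv in Tcl)
subprocess.check_call(["tclsh", "program.tcl", info])

# Option 2: pass the value through an environment variable (read from $env(MYVALUE) in Tcl)
child_env = dict(os.environ, MYVALUE=info)
subprocess.check_call(["tclsh", "program.tcl"], env=child_env)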
If on the other hand the Tcl interpreter is in the same process, pass the values by setting the variables in it before asking for the script to run. (Tcl has almost no true globals — environment variables are a special exception, and only because the OS forces it upon us — so everything is specific to the interpreter context.)
Specifically, if you've got a Tcl instance object from tkinter (Tk is a subclass of that) then you can do:
import tkinter
interp = tkinter.Tcl()
interp.call("set", "value", 123)
interp.eval("source program.tcl")
# Or interp.call("source", "program.tcl")
That has the advantage of doing all the quoting for you.
My python scripts often contain "executable code" (functions, classes, &c) in the first part of the file and "test code" (interactive experiments) at the end.
I want python, py_compile, pylint &c to completely ignore the experimental stuff at the end.
I am looking for something like #if 0 for cpp.
How can this be done?
Here are some ideas and the reasons they are bad:
sys.exit(0): works for python but not py_compile and pylint
put all experimental code under def test():: I can no longer copy/paste the code into a python REPL because it has non-trivial indent
put all experimental code between lines with """: emacs no longer indents and fontifies the code properly
comment and uncomment the code all the time: I am too lazy (yes, this is a single key press, but I have to remember to do that!)
put the test code into a separate file: I want to keep the related stuff together
PS. My IDE is Emacs and my python interpreter is pyspark.
Use IPython rather than plain python for your REPL. It has better code completion and introspection, and when you paste indented code it can automatically de-indent it.
Thus you can put your experimental code in a test function and then paste in parts of it without worrying about having to de-indent it yourself.
If you are pasting large blocks that cannot be treated as individual blocks (for example, a loop body interrupted by a blank line or a dedented comment), you will need to use the %paste or %cpaste magics.
eg.
for i in range(3):
    i *= 2
# with the following blank line, a plain paste treats the loop above as a complete block

    print(i)
With a normal paste:
In [1]: for i in range(3):
   ...:     i *= 2
   ...:
In [2]: print(i)
4
Using %paste
In [3]: %paste
for i in range(3):
    i *= 2
# with the following blank line, a plain paste treats the loop above as a complete block

    print(i)
## -- End pasted text --
0
2
4
In [4]:
PySpark and IPython
It is also possible to launch PySpark in IPython, the enhanced Python interpreter. PySpark works with IPython 1.0.0 and later. To use IPython, set the IPYTHON variable to 1 when running bin/pyspark:
$ IPYTHON=1 ./bin/pyspark
Unfortunately, there is no widely adopted (or indeed any) standard describing what you are talking about, so getting a bunch of Python-specific tools to work like this will be difficult.
However, you could wrap these commands in such a way that they only read until a signifier. For example (assuming you are on a unix system):
cat "$file" | sed '/exit(0)/q' | sed '/exit(0)/d'
The command will read until 'exit(0)' is found. You could pipe this into your checkers, or create a temp file that your checkers read. You could create wrapper executable files on your path that may work with your editors.
Windows may be able to use a similar technique.
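If you'd rather stay in Python, a rough equivalent of that wrapper might look like this (the exit(0) marker and the pylint invocation are assumptions for illustration):

import subprocess
import sys
import tempfile

MARKER = "exit(0)"  # everything from the marker line onward is hidden from the checker

def lint_until_marker(path):
    # Copy the lines before the marker into a temporary file and lint that copy.
    kept = []
    with open(path) as src:
        for line in src:
            if MARKER in line:
                break
            kept.append(line)
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as tmp:
        tmp.writelines(kept)
        tmp_name = tmp.name
    return subprocess.call(["pylint", tmp_name])

if __name__ == "__main__":
    sys.exit(lint_until_marker(sys.argv[1]))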
I might advise a different approach. Separate files might be best. You might also explore IPython notebooks as a possible solution, but I'm not sure exactly what your use case is.
Follow something like option 2.
I usually put experimental code in a main method.
def main():
    # experimental code goes here
    ...
Then if you want to execute the experimental code just call the main.
main()
With python-mode.el, mark arbitrary chunks as a section, for example via py-sectionize-region.
Then call py-execute-section.
Updated after comment:
python-mode.el is delivered by melpa.
M-x list-packages RET
Look for python-mode - the built-in python.el provides 'python, while python-mode.el provides 'python-mode.
Development has just moved here: https://gitlab.com/python-mode-devs/python-mode
I think the standard ('Pythonic') way to deal with this is to do it like so:
class MyClass(object):
    ...

def my_function():
    ...

if __name__ == '__main__':
    # testing code here
Edit after your comment
I don't think what you want is possible using a plain Python interpreter. You could have a look at the IEP Python editor (website, bitbucket): it supports something like Matlab's cell mode, where a cell can be defined with a double comment character (##):
## main code
class MyClass(object):
    ...

def my_function():
    ...

## testing code
do_some_testing_please()
All code from a ##-beginning line until either the next such line or end-of-file constitutes a single cell.
Whenever the cursor is within a particular cell and you strike some hotkey (default Ctrl+Enter), the code within that cell is executed in the currently running interpreter. An additional feature of IEP is that selected code can be executed with F9; a pretty standard feature but the nice thing here is that IEP will smartly deal with whitespace, so just selecting and pasting stuff from inside a method will automatically work.
I suggest you use a proper version control system to keep the "real" and the "experimental" parts separated.
For example, using Git, you could only include the real code without the experimental parts in your commits (using add -p), and then temporarily stash the experimental parts for running your various tools.
You could also keep the experimental parts in their own branch which you then rebase on top of the non-experimental parts when you need them.
Another possibility is to put tests as doctests into the docstrings of your code, which admittedly is only practical for simpler cases.
This way, they are only treated as executable code by the doctest module, but as comments otherwise.
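A minimal sketch of what that looks like (the double function here is just an illustration):

def double(x):
    """Return twice the value of x.

    >>> double(2)
    4
    >>> double(0.5)
    1.0
    """
    return 2 * x

if __name__ == '__main__':
    import doctest
    doctest.testmod()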
I have an if statement in a .ini file:
critical= $TREND$>=40
In my script I replace $TREND$ by my value so I have in a variable:
critical = "10.5>=40"
I'm trying to evaluate this expression to find out whether it's True or False.
if critical:
    print "It's critical!"
(I know it will just check if critical is empty)
One option is to use eval:
critical = eval("10.5>=40")
However, you must be sure that what is "eval'ed" is completely trusted, because it allows you to execute any arbitrary python code. That is, don't use this if any part of the string is passed in by an "untrusted entity" (user, external api, etc).
Another, possibly better, option might be to use a "template parser" which allows "untrusted" or "sandboxed" execution. For example, the Jinja Sandbox mode.
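For instance, a sketch using Jinja2's sandboxed environment (assuming the jinja2 package is available; the variable name trend stands in for your $TREND$ value):

from jinja2.sandbox import SandboxedEnvironment

env = SandboxedEnvironment()
check = env.compile_expression("trend >= 40")  # the expression text could come from your .ini

print(check(trend=10.5))  # False
print(check(trend=42.0))  # True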
I would urge you to change your .ini to have the following:
critical_low
critical_high
This way you avoid two problems: you don't have to use eval, which can be very dangerous, and you don't have logic in your .ini file (unless you actually want logic there, which can still be solved without eval).
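A rough sketch of that approach (the section and option names are made up; uses Python 2's ConfigParser to match the question):

import ConfigParser  # 'configparser' on Python 3

# settings.ini might contain:
# [thresholds]
# critical_high = 40

config = ConfigParser.ConfigParser()
config.read('settings.ini')
critical_high = config.getfloat('thresholds', 'critical_high')

trend = 10.5
if trend >= critical_high:
    print("It's critical!")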
That said, eval will work fine if it's just a hacky script.
I'm looking at several cases where it would be far, far, far easier to accept nearly-raw code. So,
A. What's the worst you can do with an expression if you can't use lambda, and how?
B. What's the worst you can do with executed code if you can't use import, and how?
("Can't use X" means the string is scanned for X.)
Also, B is unnecessary if someone can think of an expr such that, given d = {key: value, ...}:
expr.format(key) == d[key]
Without changing the way the format looks.
The worst you can do with an expression is on the order of
__import__('os').system('rm -rf /')
if the server process is running as root. Otherwise, you can fill up memory and crash the process with
2**2**1024
or bring the server to a grinding halt by executing a shell fork bomb:
__import__('os').system(':(){ :|:& };:')
or execute a temporary (but destructive enough) fork bomb in Python itself:
[__import__('os').fork() for i in xrange(2**64) for x in range(i)]
Scanning for __import__ won't help, since there's an infinite number of ways to get to it, including
eval(''.join(['__', 'im', 'po', 'rt', '__']))
getattr(__builtins__, '__imp' + 'ort__')
getattr(globals()['__built' 'ins__'], '__imp' + 'ort__')
Note that the eval and exec functions can also be used to create any of the above in an indirect way. If you want safe expression evaluation on a server, use ast.literal_eval.
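A quick illustration of ast.literal_eval accepting only plain literals:

import ast

print(ast.literal_eval("[1, 2.5, 'three']"))  # [1, 2.5, 'three']

try:
    ast.literal_eval("__import__('os').system('rm -rf /')")
except ValueError:
    print("rejected: not a literal")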
Arbitrary Python code?
Opening, reading, writing, creating files on the partition. Including filling up all the disk space.
Infinite loops that put load on the CPU.
Allocating all the memory.
Doing things that are in pure Python modules without importing them by copy/pasting their code into the expression (messing with built in Python internals and probably finding a way to access files, execute them or import modules).
...
No amount of whitelisting or blacklisting is going to keep people from getting to dangerous parts of Python. You mention running in a sandbox where "open" is not defined, for example. But I can do this to get it:
real_open = getattr(os, "open")
and if you say I won't have os, then I can do:
real_open = getattr(sys.modules['os'], "open")
or
real_open = random.__builtins__['open']
etc, etc, etc. Everything is connected, and the real power is in there somewhere. Bad guys will find it.
I find myself adding debugging "print" statements quite often -- stuff like this:
print("a_variable_name: %s" % a_variable_name)
How do you all do that? Am I being neurotic in trying to find a way to optimize this? I may be working on a function and put in a half-dozen or so of those lines, figure out why it's not working, and then cut them out again.
Have you developed an efficient way of doing that?
I'm coding Python in Emacs.
Sometimes a debugger is great, but sometimes using print statements is quicker, and easier to setup and use repeatedly.
This may only be suitable for debugging with CPython (since not all Pythons implement inspect.currentframe and inspect.getouterframes), but I find this useful for cutting down on typing:
In utils_debug.py:
import inspect

def pv(name):
    record = inspect.getouterframes(inspect.currentframe())[1]
    frame = record[0]
    val = eval(name, frame.f_globals, frame.f_locals)
    print('{0}: {1}'.format(name, val))
Then in your script.py:
from utils_debug import pv
With this setup, you can replace
print("a_variable_name: %s" % a_variable_name)
with
pv('a_variable_name')
Note that the argument to pv should be the string (variable name, or expression), not the value itself.
To remove these lines using Emacs, you could
C-x ( # start keyboard macro
C-s pv('
C-a
C-k # change this to M-; if you just want to comment out the pv call
C-x ) # end keyboard macro
Then you can call the macro once with C-x e
or a thousand times with C-u 1000 C-x e
Of course, you have to be careful that you do indeed want to remove all lines containing pv(' .
Don't do that. Use a decent debugger instead. The easiest way is to use IPython and either wait for an exception and then inspect it with the %debug magic (or enable %pdb so the debugger starts automatically), or provoke one by running an illegal statement (e.g. 1/0) at the part of the code that you wish to inspect.
I came up with this:
Python string interpolation implementation
I'm just testing it and it's proving handy for me while debugging.