I am thinking about making a program that will need to send input to and capture output from the various aircrack-ng suite tools. I know of a couple of Python modules like subprocess, envoy, sarge and pexpect that would provide the necessary functionality. Can anyone advise on what I should or shouldn't be using, especially as I'm new to Python?
Thanks
As the maintainer of sarge, I can tell you that its goals are broadly similar to envoy (in terms of ease of use over subprocess) and there is (IMO) more functionality in sarge with respect to:
Cross-platform support for bash-like syntax (e.g. use of &&, || and & in command lines)
Better support for capturing subprocess output streams and working with them asynchronously
More documentation, especially about the internals and peripheral issues like threading+forking in the context of using subprocess
Support for prevention of shell injection attacks
Of course YMMV, but you can check out the docs, they're reasonably comprehensive.
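For a sense of the API, here is a minimal sarge sketch; the command is only a placeholder for whichever aircrack-ng tool you actually drive (airmon-ng must be installed and usually needs root), so treat it as an assumption:

    # Minimal sarge usage: run a command and capture its output.
    from sarge import run, Capture

    p = run('airmon-ng', stdout=Capture())   # swap in the tool you really want
    print(p.returncode)                      # exit code
    print(p.stdout.text)                     # captured stdout as text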
pexpect
As of 2015, pexpect does not work on Windows. It is rumored that "experimental" support will be added in the next version, but this has been a rumor for a long time (I'm not holding my breath).
Having written many applications using pexpect (and loving it), I am now sorry, because one of the things I love about Python (that it is cross-platform) is not true for my applications.
If you ever plan to add Windows support, avoid pexpect for the moment.
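For reference, this is roughly what driving an interactive process with pexpect looks like (POSIX only); the spawned program and prompt pattern are illustrative, not aircrack-ng specific:

    # Drive an interactive Python interpreter with pexpect.
    import pexpect

    child = pexpect.spawn('python3 -i')
    child.expect('>>> ')               # wait for the interpreter prompt
    child.sendline('print(1 + 1)')     # send a line of input
    child.expect('>>> ')               # wait for the next prompt
    print(child.before.decode())       # output produced before that prompt
    child.sendline('exit()')
    child.close()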
envoy
Not much activity in the last year, and only a few commits (12 total) since 2012. Not very promising for its future.
Internally it uses shlex in a way that is not compatible with Windows paths (commands must use '/' rather than '\' as the directory separator). A workaround (when using pathlib) is to call as_posix() on path objects before passing them as commands. See this answer.
Getting access to the internal streams (e.g. so I can parse the output to drive an updating scrollbar) seems possible but is not documented.
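For comparison, envoy's API is roughly this (a sketch, assuming envoy is installed):

    # Minimal envoy usage: run a command and inspect the result object.
    import envoy

    r = envoy.run('ls -l')       # on Windows, mind the '/' path caveat above
    print(r.status_code)         # exit code
    print(r.std_out)             # captured stdout
    print(r.std_err)             # captured stderr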
sarge
Works on Windows out of the box and has an expect() method that should provide functionality similar to pexpect (allowing me to update a scrollbar). Recent activity, but it is hosted on both GitLab and Bitbucket (very confusing).
Personal Conclusion
I'm moving from pexpect to sarge for future development. It seems to provide a similar feature set to pexpect and supports Windows.
subprocess - a standard library module, so it's available with every Python installation. It has a reputation for being hard to use, since its API is non-intuitive.
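For comparison, a minimal subprocess sketch doing this kind of call with the standard library only:

    # Run a command and capture its output with the standard library.
    import subprocess

    result = subprocess.run(
        ['ls', '-l'],             # pass an argument list to avoid shell injection
        capture_output=True,      # Python 3.7+; use stdout/stderr=subprocess.PIPE on older versions
        text=True,                # decode bytes to str
    )
    print(result.returncode)
    print(result.stdout)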
envoy - a third-party module that wraps subprocess. It was written to be an easy-to-use alternative to subprocess. The author of envoy, Kenneth Reitz, is famous for his "Python for Humans" philosophy.
I'm not familiar with the other two.
Related
Hi, I have a simple PyQt text editor.
Essentially I want to add Mercurial support.
I have seen in various other editors the ability to support a number of DVCSs (Mercurial, Git, Bazaar, etc.), giving the user the ability to perform functions like commit, update, etc.
I really want to know what/how I can integrate Mercurial into my PyQt text editor so that it behaves more or less like other fancy editors.
Are there any good tutorials/guides on how to get this done?
There are no tutorials for this specifically; generally, however, there are three approaches:
Command line interface
Mercurial's command-line interface is considered stable. That means you can expect Mercurial (without extensions) not to change the output of a command. Using "-T json" with most commands will also produce easily parsable JSON output. This approach is robust and fairly easy to implement, as you only have to call out to Mercurial and parse the JSON that comes back. Most standard commands like commit, log, etc. should be implementable this way.
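As a sketch of that approach (assuming hg is on the PATH and the script runs inside a repository):

    # Call Mercurial's CLI with "-T json" and parse the result.
    import json
    import subprocess

    out = subprocess.run(
        ['hg', 'log', '-l', '5', '-T', 'json'],   # last five changesets as JSON
        capture_output=True, text=True, check=True,
    ).stdout
    for entry in json.loads(out):
        print(entry['rev'], entry['desc'])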
hglib
Mercurial offers hglib, a library available in C and Python which allows you to interface with Mercurial via a local protocol. Mercurial is started in command-server mode and you use the library to interact with it. This approach is also very stable and offers a better abstraction, but it relies on the command server being available and implies potential API changes in the library. Note that you also have to take the library's license into account, as you are linking against it.
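A minimal sketch with the python-hglib package (note that hglib largely works with byte strings, and the repository path below is a placeholder):

    # Talk to Mercurial through its command server via python-hglib.
    import hglib

    client = hglib.open('/path/to/repo')        # existing repository
    for rev in client.log(limit=5):             # recent changesets
        print(rev.rev, rev.desc)
    client.commit(b'commit from my editor', addremove=True)
    client.close()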
Embedding Mercurial
Python processes can embed Mercurial directly by importing the right modules. However, the Mercurial API is internally not stable and subject to continuous change. This option offers you the most flexibility, as you have access to everything, including low-level parsing of data structures and hidden functionality such as obsolescence markers. The drawbacks are: 1. you have to know what you are doing, otherwise you might corrupt the repository; 2. the API changes all the time; 3. you are subject to the GPL license.
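For completeness, a heavily hedged sketch of the embedding approach; these are internal names that may differ between Mercurial releases, so treat it as an illustration only:

    # Embed Mercurial by importing its (unstable, GPL-licensed) internals.
    from mercurial import hg, ui

    repo = hg.repository(ui.ui(), b'/path/to/repo')   # Mercurial uses bytes internally
    tip = repo[b'tip']
    print(tip.rev(), tip.description())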
To offer interactive examples about data analysis, I'd like to embed an interactive Python shell in a website. It does not necessarily have to be a real Python shell. Users will be given tasks that they can execute in the shell. This is similar to existing tutorials, as seen on, e.g., http://www.codecademy.org, but I'd like to work with libraries that those solutions do not offer, as far as I understood.
In order to get a real shell on the website, I think of two approaches:
I found projects like http://www.repl.it, but it seems rather difficult to include the necessary libraries like SciPy, NumPy, and Pandas. In addition, user input has to be validated and I'm not sure whether that works with those shells I found.
I could pipe the commands through a web application to a Python installation on my server, but I'm scared of using eval() on foreign, arbitrary code. Is there a safe mode for Python? I found http://www.pypy.org. Although they offer a Python sandbox, unfortunately they do not support the libraries I need.
Alternatively, I thought of just embedding a "fake shell", which I would build to mimic the behaviour of the functions that I want to explain. Of course, this would result in more work, as I would have to write a fake interface, but for now it seems to be the only possibility.
I hope that this question is not too generic; I'm looking for either a good HTML/JS library that helps me put a fake shell on my website or a library/service/software that can embed a real Python shell with the required modules installed.
There is no way to run untrusted Python safely; Python's dynamic nature allows for too many ways to break through any protective layers you could care to think of.
Instead, run each session on a new virtual machine, properly locked down (firewalled, unprivileged user), which you shut down after a hard time limit. New sessions get a new, clean virtual machine.
This isolates you from any malicious code that might run and try to break out of a sandbox; a good virtual machine is hardware-isolated by the processor from the host OS, something a Python-only layer could never achieve.
This process is sometimes called sandboxing.
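As an illustration of the "hard time limit" part: the session command below is a hypothetical stand-in for however you start and control a per-user virtual machine.

    # Enforce a wall-clock limit on a session process and kill it when exceeded.
    import subprocess

    try:
        subprocess.run(['./run_session.sh', 'session-42'], timeout=300)  # 5-minute cap
    except subprocess.TimeoutExpired:
        print('session exceeded its time limit and was killed')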
You can find some good information on the Python wiki.
There are basically three options available:
machine-level mechanisms (such as a VM, as Martijn Pieters suggested)
OS-level mechanisms (such as a chroot or SELinux)
custom interpreters, such as PyPy (which has sandboxing capabilities, as you mentioned), or Jython, where you may be able to use the Java security manager or applet mechanisms.
You may also want to check RestrictedPython, which is especially useful for very restricted environments, but security will depend on its configuration.
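To give a flavour of RestrictedPython, here is a sketch assuming the RestrictedPython package is installed; it restricts what compiled code can reach but is not a full sandbox on its own:

    # Compile and execute user code with restricted builtins via RestrictedPython.
    from RestrictedPython import compile_restricted, safe_builtins

    source = "result = abs(-5) + 2 ** 10"
    byte_code = compile_restricted(source, filename='<user code>', mode='exec')

    restricted_globals = {'__builtins__': safe_builtins}
    exec(byte_code, restricted_globals)
    print(restricted_globals['result'])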
Ultimately, your choice of solution will depend on what you want to restrict:
Filesystem access? Block everything, or allow certain directories?
Network access, such as sockets?
Arbitrary system calls?
I need to send code to remote clients to be executed on them, but security is a concern for me right now. I don't want unsafe code to be executed there, so I would like to control what a program is doing. I mean, for example, knowing whether it is making connections, where it is connecting to, whether it is reading local files, etc. Is this possible with Python?
EDIT: I'm thinking of something similar to the Android permission system. I want to know what a piece of code will do and, if it does something different, stop it.
You could use a different Python runtime:
if you run your script using Jython, you can exploit Java's permission system
with PyPy's sandboxed version you can choose what is allowed to run in your controller script
There used to be a module in Python called Bastion, but that was deprecated as it wasn't that secure. There's also, I believe, something called RPython, but I don't know too much about that.
I would in this case use Pyro and write the code on the target server. That way you know clients can only execute written and tested code.
edit - it's probably worth noting that Pyro also supports http://en.wikipedia.org/wiki/Privilege_separation - although I've not had to use it for that.
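To make the Pyro suggestion concrete, here is a minimal server-side sketch using the Pyro4 package (the class and method names are illustrative): clients can then only call what you expose here, rather than running arbitrary code.

    # Expose a fixed, trusted API over Pyro4 instead of shipping code to clients.
    import Pyro4

    @Pyro4.expose
    class Analyzer(object):
        def word_count(self, text):
            return len(text.split())

    daemon = Pyro4.Daemon()                  # listens on localhost by default
    uri = daemon.register(Analyzer)
    print('server object available at', uri)
    daemon.requestLoop()

A client would then connect with Pyro4.Proxy(uri) and call word_count() remotely.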
I think you are looking for a sandboxed Python. There used to be an effort to implement this, but it was abandoned a couple of years ago.
The SandboxedPython page on the Python wiki offers a nice overview of possible options for your use case.
The most rigorous (but probably the slowest) way is to run Python on a bare OS in an emulator.
Depending on the OS you use, there are several ways of running programs with restrictions, but without the overhead of an emulator:
FreeBSD has a nice integrated solution in the form of jails.
These grew out of the chroot system call.
Linux-VServer aims to do more or less the same on Linux.
Rake is a software build tool written in Ruby (like Ant or Make), and so all its files are written in this language. Does something like this exist in Python?
Invoke — Fabric without the SSH dependencies.
The Fabric roadmap discusses that Fabric 1.x will be split into three portions:
Invoke — The non-SSH task execution.
Fabric 2.x — The remote execution and deployment library that utilizes Invoke.
Patchwork — The "common deployment/sysadmin operations, built on Fabric."
Below are a few descriptive statements from Invoke's website:
Invoke is a Python (2.6+ and 3.3+) task execution tool & library, drawing inspiration from various sources to arrive at a powerful & clean feature set.
Like Ruby’s Rake tool and Invoke’s own predecessor Fabric 1.x, it provides a clean, high level API for running shell commands and defining/organizing task functions from a tasks.py file.
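A minimal tasks.py sketch for Invoke (the task names and commands are just examples), run with "invoke build":

    # tasks.py - Rake-style tasks for Invoke.
    from invoke import task

    @task
    def clean(c):
        c.run("rm -rf build")

    @task(pre=[clean])          # run clean first, like a Rake dependency
    def build(c):
        c.run("python setup.py sdist")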
Paver has a similar set of goals, though I don't really know how it compares.
Shovel seems promising:
Shovel — Rake for Python
https://github.com/seomoz/shovel
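A minimal Shovel sketch, assuming the usual layout of a shovel.py file of tasks; run with "shovel hello":

    # shovel.py - tasks discovered and run by the shovel command.
    from shovel import task

    @task
    def hello(name='world'):
        """Print a greeting."""
        print('Hello, %s!' % name)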
Waf is a Python-based framework for configuring, compiling and installing applications. It derives from the concepts of other build tools such as Scons, Autotools, CMake or Ant.
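A typical minimal wscript for Waf looks like this (assuming a single main.c and the waf script in the project root); run with "./waf configure build":

    # wscript - Waf build script for one C program.
    top = '.'
    out = 'build'

    def options(opt):
        opt.load('compiler_c')

    def configure(conf):
        conf.load('compiler_c')

    def build(bld):
        bld.program(source='main.c', target='app')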
There is also doit - I came across it while looking for these things a while ago, though I didn't get very far with evaluating it.
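For reference, a doit task file (dodo.py) is just plain Python; run with "doit" in the same directory. A minimal sketch:

    # dodo.py - task definitions for doit.
    def task_hello():
        """Write a greeting to a file."""
        return {
            'actions': ['echo hello > hello.txt'],
            'targets': ['hello.txt'],
        }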
Although it is more commonly used for deployment, Fabric might be interesting for this use case.
Also check out buildout, which isn't so much a make system for software, as a make system for a deployment.
http://pypi.python.org/pypi/zc.buildout
So it's not a direct rake equivalent, but may be a better match for what you want to do, or a really lousy one.
There is Phantom in Boo (which isn't Python, but nearly).
I would check out distutils:
The distutils package provides support for building and installing additional modules into a Python installation. The new modules may be either 100%-pure Python, or may be extension modules written in C, or may be collections of Python packages which include modules coded in both Python and C.
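A classic minimal setup.py looks like this (the module name is a placeholder); run with "python setup.py build" or "python setup.py install":

    # setup.py - minimal distutils build/install script.
    from distutils.core import setup

    setup(
        name='mytool',
        version='0.1',
        py_modules=['mytool'],   # assumes a mytool.py next to this file
    )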
I would like to seek some guidance in writing a "process profiler" which runs in kernel mode. I am asking for a kernel-mode profiler because I run loads of applications and I do not want my profiler to be swapped out.
When I say "process profiler" I mean something that would monitor resource usage by a process, including usage of threads and their statistics.
I wish to write this in Python. Please point me to some modules or helpful resources.
Please provide me with guidance/suggestions for doing it.
Thanks,
Edit: I would like to add that currently my interest is to write this only for Linux; however, after I build it, I will have to support Windows.
It's going to be very difficult to do the process-monitoring part in Python, since the Python interpreter doesn't run in the kernel.
I suspect there are two easy approaches to this:
use the /proc filesystem if you have one (you don't mention your OS)
use dtrace if you have dtrace (again, without knowing the OS, who knows?)
Okay, following up after the edit.
First, there's no way you're going to be able to write code that runs in the kernel, in python, and is portable between Linux and Windows. Or at least if you were to, it would be a hack that would live in glory forever.
That said, though, if your purpose is to process Python, there are a lot of Python tools available to get information from the Python interpreter at run time.
If instead your desire is to get process information from other processes in general, you're going to need to examine the options available to you in the various OS APIs. Linux has a /proc filesystem; that's a useful start. I suspect Windows has similar APIs, but I don't know them.
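To illustrate the /proc route on Linux, here is a small sketch that reads another process's name, resident memory and thread count entirely from user space (PID 1 is just an example):

    # Read basic resource statistics for a PID from /proc/<pid>/status.
    def proc_status(pid):
        info = {}
        with open('/proc/%d/status' % pid) as f:
            for line in f:
                key, _, value = line.partition(':')
                info[key] = value.strip()
        return info

    status = proc_status(1)
    print(status['Name'], status.get('VmRSS'), status['Threads'])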
If you have to write kernel code, you'll almost certainly need to write it in C or C++.
Don't try to get Python running in kernel space!
You would be much better off using an existing tool and getting it to spit out XML that can be sucked into Python. I wouldn't want to port the Python interpreter to kernel mode (it sounds grim writing it).
The /proc option does sound good.
Here is some code that reads /proc information to determine memory usage and such. It should get you going:
http://www.pixelbeat.org/scripts/ps_mem.py reads memory information of processes using Python through /proc/smaps like charlie suggested.
Some of your comments on other answers suggest that you are a relatively inexperienced programmer. Therefore I would strongly suggest that you stay away from kernel programming, as it is very hard even for experienced programmers.
Why would you want to write something that
is a very complex system (just look at existing profiling infrastructures and how complex they are)
cannot be done in Python (I don't know of any kernel that would allow execution of Python in kernel mode)
already exists (OProfile on Linux)
Have you looked at PSI? (http://www.psychofx.com/psi/)
"PSI is a Python module providing direct access to real-time system and process information. PSI is a Python C extension, providing the most efficient access to system information directly from system calls."
It might give you what you are looking for, or at least a starting point.
Edit 2014:
I'd recommend checking out psutil instead:
https://pypi.python.org/pypi/psutil
psutil is actively maintained and has some nifty process monitoring features. PSI seems to be somewhat dead (last release 2009).
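A minimal psutil sketch of the kind of per-process monitoring asked about, entirely from user space:

    # Print pid, name, thread count and resident memory for every process.
    import psutil

    for proc in psutil.process_iter(['pid', 'name', 'num_threads', 'memory_info']):
        info = proc.info
        mem = info['memory_info']
        print(info['pid'], info['name'], info['num_threads'],
              mem.rss if mem else None)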