I need to send messages from Python to Pure Data, so I followed this article.
It was working fine until one day it suddenly stopped working.
Pure Data doesn't receive anything anymore; I have tried this in both Mac and Linux environments.
I have uploaded script plus patch here:
but equivalent code is:
import os
os.system("echo '1;' | pdsend 3000")
and Pure Data should receive the message with a simple
netreceive 3000
Looks like Python and Pure Data can't find a path to communicate.
After I changed pdsend to its absolute path, as follows:
import os
os.system("echo '1;' | /Users/path_to_pure_data/PureData.app/Contents/Resources/bin/pdsend 3000")
it works.
At this point I don't know why Python doesn't automatically find its path, or how to fix it.
How would Python know that it should look for executables in /Users/path_to_pure_data/PureData.app/Contents/Resources/bin/?
For security (and performance) reasons, the system will only search for executables in a few select locations, which are defined in the PATH environment variable.
You could either add /Users/path_to_pure_data/PureData.app/Contents/Resources/bin/ to your PATH, or put pdsend into a directory that is already searched.
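One way to see this from Python itself: extend PATH for the current process before calling os.system, since the shell it spawns inherits Python's environment. A sketch, using the placeholder directory from the question:

```python
import os

# Extend the search path for this process (the directory is the
# placeholder path from the question); child processes spawned by
# os.system inherit the modified environment.
pd_bin = '/Users/path_to_pure_data/PureData.app/Contents/Resources/bin'
os.environ['PATH'] += os.pathsep + pd_bin

# now the bare command name can be found
os.system("echo '1;' | pdsend 3000")
```

Note this only affects the running Python process; to make it permanent, set PATH in your shell profile instead.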
However, this seems like overkill in any case: calling an external application is rather costly, and using os.system is probably one of the worst options for doing so.
Python is a powerful programming language, and it is very easy to build the client code in Python itself.
E.g. this code has no external dependencies, performs faster, and is safer (and, I'd say, easier to read as well):
import socket

# connect to the TCP port that [netreceive 3000] listens on
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("localhost", 3000))
s.sendall(b'1;')  # Pd messages are terminated with a semicolon
s.close()
I have a python script that works with another program via API. I need to send the program the directory of a file. The problem I am having is I need to account for the possible difference in paths if the script and program are on different machines and/or OSes. If the script and program are on the same machine it's not an issue. But if they are on different machines, the machine with the script will have a networked path:
script (mapped network drive):
Z:\files\file.txt
program:
/mnt/user/disk1/files/file.txt
So the Z drive points to the mapped networked drive the program has access to.
import pathlib
location = 'Z:\\files\\file.txt'
map_source = 'Z:\\'
map_destination = '/mnt/user/disk1/'
newpath=(location.replace(map_source, ''))
print(pathlib.PurePath(map_destination, newpath))
So if I give the script Z:\files\file.txt as input, it should remove Z:\ from location and replace it with /mnt/user/disk1/ and return /mnt/user/disk1/files/file.txt. The problem is it is returning with the wrong slashes:
\mnt\user\disk1\files\file.txt
How can I get it to determine what the correct slashes should be? My understanding is that PurePath will use the correct slash for the OS the script is run on, but I might be running this on a Windows machine and sending the path to a Linux machine. I realize I can probably do this manually with a regex or something, but is there some way with pathlib or some already existing module? I can't just tell it what to convert to, since I don't know what system the destination will be. I suppose it would have to parse the path and figure it out.
I realize I can probably do this manually with a regex or something but is there some way with pathlib or some already existing module?
Use the specific PurePath subclasses? PurePath dispatches between pathlib.PurePosixPath and pathlib.PureWindowsPath internally based on the system it's being run on, but you can use these subclasses directly if you know what's what.
Not that it'll help much, as I don't think there's any real "bridge" between the two: PurePath (and its subclasses) can take path segments as input, but doesn't return path segments.
Also note that your Windows paths are broken: \ is an escape character in non-raw Python strings, so e.g. \f is going to be interpreted as ASCII form feed (FF).
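That said, the subclasses do expose the pieces needed for a manual bridge: relative_to() and .parts. A minimal sketch of the conversion, using the paths from the question:

```python
from pathlib import PurePosixPath, PureWindowsPath

# raw string so the backslashes are kept literal
location = PureWindowsPath(r'Z:\files\file.txt')
map_destination = PurePosixPath('/mnt/user/disk1')

# strip the drive mapping, then rebuild the path under the POSIX
# mount point using the flavour-neutral .parts tuple
relative = location.relative_to('Z:/')
newpath = map_destination.joinpath(*relative.parts)
print(newpath)  # /mnt/user/disk1/files/file.txt
```

Because newpath is a PurePosixPath, it renders with forward slashes regardless of the OS the script runs on.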
Introduction
I recently started working on a legacy product, under Linux, which incorporates a built-in Tcl shell. Due to company limitations, I can't get access behind the scenes, and all the code I write must run under this Tcl shell, using the product's pre-defined, clumsy Tcl API.
I have found myself wondering a few times whether it would be possible to patch some Python into this setup, as some solutions seem more natural in Python than in Tcl. By patching in Python I mean either calling Python code from the Tcl code itself (which can be done with Elmer, for example), or using Python from outside the product to wrap the Tcl API (for which I found no "classic" solution).
The Problem
Given that the product already has an existing Tcl shell in it, most solutions I browsed through (e.g. Tkinter) can't be used to run the Tcl code from Python; I need to "inject" the code into the existing shell.
Solutions Considered
As a solution, I thought about bringing up a simple server on the Tcl side which runs simple commands it receives from the Python side. I wrote a small demo and it worked. This let me write a nice, class-based wrapper in Python for the clumsy Tcl API, manage a command queue, etc.
Two other solutions I thought of are forking the software from Python and playing with the read/write file descriptors, or connecting through a FIFO rather than a socket.
However, I wonder whether I'm actually going about this right, or can you suggest a better solution?
Thanks in advance.
First:
If you just want a class-based OO system to write your code in, you don't need Python; Tcl can do OO just fine (built in with 8.6, but there are quite a few options to get OO features, classes etc. in older versions too, e.g. Tcllib's snit or stooop).
If you still feel Python is the superior tool for the task at hand (e.g. due to better library support for some tasks), you can 'remote control' the Tcl interpreter using the Tcllib comm package. This needs a working event loop in the Tcl shell you want to control, but otherwise it is pretty simple to do.
In your Tcl shell, install the Tcllib comm package.
(ask again if you need help with that)
Once you have that, start the comm server in your Tcl shell.
package require comm
set id [::comm::comm self]
# write ID to a file
set fd [open idfile.txt w]
puts $fd $id
close $fd
proc stop_server {} {set ::forever 1 }
# enter the event loop
vwait forever
On the Python side, you do nearly the same, just in Tkinter code.
Basically like this:
import Tkinter
interp = Tkinter.Tcl()
interp.eval('package require comm')
# load the id written by the Tcl shell
with open('idfile.txt') as fd:
    comm_id = fd.read().strip()
result = interp.eval(
    'comm::comm send {0!s} {1!s}'.format(comm_id, '{puts "Hello World"}'))
Using that Python code and your shell, you should see Hello World printed in your shell.
Read the comm manual for more details: how to secure things, get callbacks, etc.
If you don't like the Tkinter burden, you can implement the wire protocol for comm yourself; it should not be too hard in Twisted or with the new async support in Python 3.x.
The on-the-wire protocol is documented in the comm wire protocol manual page.
I'm writing a simple anti-malware for my end of year school project. I've got some basics nailed down but I would like to add a feature that checks an executable to see if it may attempt to connect to the internet.
Which way should I approach this? Looking at the hex of a program written in C, I can see that the included libraries are shown in plain text. Should I look for libraries like socket.h? Is this reliable?
Note that I'm a second year ethical hacking student so I'm not expected to produce something that rivals professional AV software.
Also I'm programming my AV in Python and demonstrating it under Linux.
I don't think you'll be able to see if socket.h was included, and the relevant functions are in libc, so they will always be available to the application. I would try to see if the application actually calls those functions.
A simple way (in shell) to check if an executable directly calls socket functions:
objdump -D `which wget` | grep '<\(accept\|bind\|connect\|getpeername\|getsockname\|getsockopt\|listen\|recv\|recvfrom\|recvmsg\|send\|sendmsg\|sendto\|setsockopt\|shutdown\|socket\|socketpair\)#'
objdump -D disassembles the executable (including data sections, in case some malicious executable is using some trickery), then grep checks for any calls to the libc functions prototyped in socket.h.
But that only works if the executable itself directly calls the socket functions, which is not always the case. Replace wget with curl and you'll get no results. The reason is that curl's network functionality is all within libcurl.
So, next step: look at the libraries.
ldd `which curl`
Actually, this could even be your first step. If the executable is linked to some obvious networking library (e.g. libssl.so.1.0.0), you could stop here. But assuming it isn't, you now have the list of dynamic libraries loaded by the executable. You can use objdump -D on those as well. And disassembling /usr/lib/x86_64-linux-gnu/libcurl.so.4 shows that the library does indeed call the socket functions.
Hopefully this gives you a decent starting point. Besides the tediousness (though that's mitigated by the fact that you're going to write code to do this for you), there is also the issue that ANY external function named the same as the socket functions will show up using my command line. That shouldn't be a big deal if you're ok with erring on the side of false-positives, but there might be a better way to check the functions.
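Since you're writing the AV in Python, the same check can be driven from subprocess. A rough sketch; the function names are my own, and the regex assumes objdump renders dynamic calls as `<name@plt>`:

```python
import re
import subprocess

# the libc functions prototyped in socket.h
SOCKET_FUNCS = {
    'accept', 'bind', 'connect', 'getpeername', 'getsockname',
    'getsockopt', 'listen', 'recv', 'recvfrom', 'recvmsg', 'send',
    'sendmsg', 'sendto', 'setsockopt', 'shutdown', 'socket', 'socketpair',
}

def socket_calls_in_asm(asm):
    """Return socket functions referenced as dynamic symbols (<name@...>)."""
    return set(re.findall(r'<(\w+)@', asm)) & SOCKET_FUNCS

def socket_calls_in_binary(path):
    """Disassemble a binary with objdump and scan it for socket calls."""
    asm = subprocess.run(['objdump', '-D', path],
                         capture_output=True, text=True).stdout
    return socket_calls_in_asm(asm)
```

As with the grep version, this errs on the side of false positives: any symbol that happens to share a name with a socket function will match.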
EDIT: This may not work on all binaries. grep finds those function names directly in the executable, which I didn't expect on the distributed wget and curl in Ubuntu.
What you're talking about is a form of Signature Scanning.
You will compare code to known malicious code signatures and see if the code is included in the rogue application.
Say this is a line of (pseudo)code that is known to be malicious and compiles to a signature that is also known:
Send IP address to hacker
whose hex dump might look like:
00105e0 e6b0 343b 9c74 0804 e7bc 0804 e7d5 0804
00105f0 e7e4 0804 e6b0 0804 e7f0 0804 e7ff 0804
0010600 e80b 0804 e81a 0804 e6b0 0804 e6b0 0804
Your program could then search the hex of files in an attempt to locate known malicious signatures.
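A toy version of that search in Python; the signature bytes below are invented for illustration, not a real malware signature:

```python
# Hypothetical database mapping a signature name to known-malicious bytes.
SIGNATURES = {
    'fake-ip-exfil': bytes.fromhex('e6b0343b9c740804e7bc0804'),
}

def scan_file(path):
    """Return the names of any known signatures found in the file's raw bytes."""
    with open(path, 'rb') as f:
        data = f.read()
    return [name for name, sig in SIGNATURES.items() if sig in data]
```

A real scanner would avoid reading whole files into memory and would use something smarter than a linear substring search, but the principle is the same.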
Something more likely you could accomplish in a short amount of time is what is called:
Behavioral Blocking
Think of things a virus or malicious code might try to do to your system, and watch for it.
The code above purports to connect out and send an IP address somewhere.
Much like a firewall, you can watch for the attempted connection OUT to be established and alert the user.
You can also monitor for files that are NOT normally modified or accessed to have such happen.
How to send string/data to STDIN of a running process in python?
I'd like to create a front end for a CLI program, e.g. I want to pass multiple strings to this Pascal application:
program spam;
var
  a, b, c, e: string;
begin
  e := 'yes';
  while e <> 'no' do
  begin
    writeln('what is your name?');
    readln(a);
    writeln('what is your quest?');
    readln(b);
    writeln('what is your favorite color?');
    readln(c);
    writeln(a, ' ', b, ' ', c);
    writeln('continue? (type no to stop)');
    readln(e);
  end;
end.
How do I pass strings to this program from Python (using the subprocess module)? Thank you, and sorry for my English.
If you want to control another interactive program, it could be worth trying the Pexpect module to do so. It is designed to look for prompt messages and so on, and interact with the program. Note that it doesn't currently work directly on Windows - it does work under Cygwin.
A possible non-Cygwin Windows variant is WinPexpect, which I found via this question. One of the answers on that question suggests the latest version of WinPexpect is at http://sage.math.washington.edu/home/goreckc/sage/wexpect/, but looking at the modification dates I think the BitBucket (the first link) is actually the latest.
As Windows terminals are somewhat different to Unix ones, I don't think there is a direct cross-platform solution. However, the WinPexpect docs say the only difference in the API between it and pexpect is the name of the spawn function. You could probably do something like the following (untested) code to get it to work in both:
try:
    import pexpect
    spawn = pexpect.spawn
except ImportError:
    import winpexpect
    spawn = winpexpect.winspawn

# NB. Errors may occur when you run spawn rather than (or as
# well as) when you import it, so you may have to wrap this
# up in a try...except block and handle them appropriately.
child = spawn('command and args')
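If the target program simply reads its lines in order and never needs to see a prompt before the next answer arrives, plain subprocess may already be enough. A sketch; './spam' stands in for the compiled Pascal program, which is my assumption about your build:

```python
import subprocess

def run_spam(answers, program='./spam'):
    """Feed newline-separated answers to the program's stdin and
    return everything it printed. 'program' defaults to the compiled
    Pascal example (an assumption; adjust the path for your setup)."""
    proc = subprocess.Popen(
        [program],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
    )
    # communicate() writes all input up front, closes stdin,
    # and collects the program's complete output
    out, _ = proc.communicate('\n'.join(answers) + '\n')
    return out
```

Usage would look like run_spam(['Arthur', 'the Grail', 'blue', 'no']). This breaks down for programs that behave differently when stdin is a pipe or that need genuine turn-by-turn interaction; that is exactly the case pexpect handles.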
I like the python-send-buffer command; however, I very often use Python embedded in applications, or launch Python via a custom package management system (to launch Python with certain dependencies). In other words, I can't just run "python" and get a useful Python instance (something that python-send-buffer relies on).
What I would like to achieve is:
in any Python interpreter (or application that allows you to evaluate Python code), import a magic_emacs_python_server.py module (appending to sys.path as necessary)
In emacs, run magic-emacs-python-send-buffer
This would evaluate the buffer in the remote Python instance.
It seems like it should be pretty simple: the Python module listens on a socket, in a thread. It evaluates in the main thread, and returns the repr() of the result (or maybe captures stdout/stderr, or both). The emacs module would just send text to the socket, wait for a string in response, and display it in a buffer.
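A minimal sketch of the Python half of that idea, simplified in two ways from the description: it evals in the listener thread rather than handing off to the main thread, and it trusts its input (eval of arbitrary network input is dangerous, so this is for illustration only):

```python
import socket
import threading

def start_eval_server(host='127.0.0.1', port=0):
    """Listen on a socket; eval each received payload and reply with its repr.
    Returns the port actually bound (port=0 lets the OS pick one)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)

    def loop():
        while True:
            conn, _ = srv.accept()
            with conn:
                code = conn.recv(65536).decode()
                try:
                    result = repr(eval(code))
                except Exception as exc:
                    result = repr(exc)
                conn.sendall(result.encode())

    threading.Thread(target=loop, daemon=True).start()
    return srv.getsockname()[1]
```

The emacs side would then only need to open a connection, send the buffer text, and display whatever string comes back.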
Sounds so simple that something like this must exist already... IPython has ipy_vimserver, but this is the wrong way around. There is also swank; while it seems very Lisp-specific, there is a JavaScript backend which looks very like what I want... but searching finds almost nothing, other than some vague (possibly true) claims that SLIME doesn't work nicely with non-Lisp languages.
In short:
Does a project exist to send code from an emacs buffer to an existing Python process?
If not, how would you recommend I write such a thing (not being very familiar with elisp) - SWANK? IPython's server code? Simple TCP server from scratch?
comint provides most of the infrastructure for stuff like this. There are a bunch of good examples, like this or this.
It allows you to run a command, and provides things like comint-send-string to easily implement send-region-type commands.
dbr/remoterepl on GitHub is a crude proof-of-concept of what I described in the question.
It lacks any kind of polish, but it mostly works: you import the replify.py module in the target interpreter, then evaluate emacs-remote-repl.el after fixing the stupid hardcoded path to client.py.
Doesn't shell-command give you what you are looking for? You could write a wrapper script or adjust the #! and sys.path appropriately.