Tcpdump: Unable to capture packets from Python script

I'm trying to run tcpdump from a Python script to capture UDP packets. Here is my code:
os.system("tcpdump -i wlp2s0 -n dst 8.8.8.8 -w decryptedpackets.pcap &")
testfile = urllib.URLopener()
s = socket(AF_INET, SOCK_DGRAM)
host = "8.8.8.8"
port = 5000
buf = 1024
addr = (host, port)
s.connect((host, port))
f = open("file.txt", "rb")
data = f.read(buf)
while (data):
if (s.sendto(data, addr)):
print "sending ..."
data = f.read(buf)
I am able to capture the packets (pcap file has content) if I manually execute this command in shell:
tcpdump -i wlp2s0 -n dst 8.8.8.8 -w decryptedpackets.pcap &
However, if I use os.system() I can't capture the packets. (When I open the pcap file, I find it empty.)
I have verified and found that there is a process that gets created when the Python script is executed:
root 10092 0.0 0.0 17856 1772 pts/19 S 10:25 0:00
tcpdump -i wlp2s0 -n dst 8.8.8.8 -w decryptedpackets.pcap
Also, I'm running this as a sudo user to avoid any permission problems.
Any suggestions as to what could be causing this problem?

From the Python documentation:
os.system(command)
Execute the command (a string) in a subshell. This is implemented by calling the Standard C function system(), and has the same limitations. Changes to sys.stdin, etc. are not reflected in the environment of the executed command.
On Unix, the return value is the exit status of the process encoded in the format specified for wait(). Note that POSIX does not specify the meaning of the return value of the C system() function, so the return value of the Python function is system-dependent.
On Windows, the return value is that returned by the system shell after running command, given by the Windows environment variable COMSPEC: on command.com systems (Windows 95, 98 and ME) this is always 0; on cmd.exe systems (Windows NT, 2000 and XP) this is the exit status of the command run; on systems using a non-native shell, consult your shell documentation.
The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function. See the Replacing Older Functions with the subprocess Module section in the subprocess documentation for some helpful recipes.
I think that os.system returns immediately and the script keeps going; there's no problem with the code itself, but precisely because the call returns without waiting, you probably need to create a separate thread (or otherwise manage the tcpdump process explicitly) for that command.
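A minimal sketch of that idea, assuming the interface and filter from the question: start tcpdump through subprocess instead of os.system, with the -U flag (see the comment below) so packets are written to the file as they arrive, and stop it cleanly once the sending is done.

import subprocess
import time

# Start tcpdump; -U disables output buffering so the pcap fills up live.
tcpdump = subprocess.Popen(
    ["tcpdump", "-i", "wlp2s0", "-n", "-U",
     "-w", "decryptedpackets.pcap", "dst", "8.8.8.8"])
time.sleep(1)        # give tcpdump a moment to start capturing
# ... send the UDP packets here ...
tcpdump.terminate()  # SIGTERM makes tcpdump flush and close the pcap file
tcpdump.wait()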

Did you use the -w switch too when running from the command line, rather than only in the script? If not, your problem might be buffering, and you should have a look at the -U option. Apart from that, the -w switch should come before the capture expression, i.e. the expression should be the last thing. In summary: tcpdump -i wlp2s0 -n -w out.pcap -U dst 8.8.8.8
– Steffen Ullrich

Related

Pass arguments to a bash script that is stored locally and needs to be executed on a remote machine using Python Paramiko

I have a shell script stored on my local machine. The script needs arguments as below:
#!/bin/bash
echo $1
echo $2
I need to run this script on a remote machine (without copying the script to the remote machine). I am using Python's Paramiko module to run the script and can invoke it on the remote server without any issue.
The problem is I am not able to pass the two arguments to the remote server. Here is the snippet from my python code to execute the local script on the remote server:
with open("test.sh", "r") as f:
mymodule = f.read()
c = paramiko.SSHClient()
k = paramiko.RSAKey.from_private_key(private_key_str)
c.set_missing_host_key_policy(paramiko.AutoAddPolicy())
c.connect( hostname = "hostname", username = "user", pkey = k )
stdin, stdout, stderr = c.exec_command("/bin/bash - <<EOF\n{s}\nEOF".format(s=mymodule))
With bash I can simply use the below command:
ssh -i key user@IP bash -s < test.sh "$var1" "$var2"
Can someone help me with how to pass the two arguments to the remote server using Python?
Do the same as what you are doing in bash:
command = "/bin/bash -s {v1} {v2}".format(v1=var1, v2=var2)
stdin, stdout, stderr = c.exec_command(command)
stdin.write(mymodule)
stdin.close()
If you prefer the heredoc syntax, you need to quote the delimiter ('EOF') so that $1 and $2 are expanded by the remote bash with your arguments rather than by the invoking shell:
command = "/bin/bash -s {v1} {v2} <<'EOF'\n{s}\nEOF".format(v1=var1,v2=var1,s=mymodule)
stdin, stdout, stderr = c.exec_command(command)
The same way you would have to use the quotes in bash:
ssh -i key user@IP bash -s "$var1" "$var2" <<'EOF'
echo $1
echo $2
EOF
Though as you have the script in a variable in your Python code, why don't you just modify the script itself? That would be way more straightforward, imo.
Obligatory warning: Do not use AutoAddPolicy; you lose protection against MITM attacks by doing so. For a correct solution, see Paramiko "Unknown Server".
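Putting the pieces together, a minimal end-to-end sketch of the stdin variant; the hostname, username, key path, and argument values are placeholders, shlex.quote guards the arguments against spaces and shell metacharacters, and load_system_host_keys stands in for AutoAddPolicy as discussed above.

import shlex
import paramiko

var1, var2 = "foo", "bar"   # placeholder argument values
with open("test.sh") as f:
    script = f.read()

client = paramiko.SSHClient()
client.load_system_host_keys()  # verify the host key instead of auto-adding it
client.connect("hostname", username="user", key_filename="key")

command = "/bin/bash -s %s %s" % (shlex.quote(var1), shlex.quote(var2))
stdin, stdout, stderr = client.exec_command(command)
stdin.write(script)
stdin.close()                   # send EOF so the remote bash starts running
print(stdout.read().decode())
client.close()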

How to know whether a copying process (scp) is complete using Python?

I have a python program for log analysis.
The log is on another server, which has a port number and password.
I cannot store my Python code on that server, so I need to scp the file to the server where my program is stored.
I did this:
from os import popen

popen('''sshpass -p "password" scp -r \
admin@192.158.11.109:/home/admin/DontDeleteMe/%s /home/admin/''' % fileName)
But if the file is big, the rest of the program runs before the copy has completed.
popen() does not wait for the process to complete. You can use subprocess.call():
import subprocess

exitcode = subprocess.call('''sshpass -p "password" scp -r \
admin@192.158.11.109:/home/admin/DontDeleteMe/%s /home/admin/''' % fileName,
                           shell=True)
According to Python's doc:
The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes. This module intends to replace several older modules and functions:
os.system
os.spawn*
os.popen*
popen2.*
commands.*
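If you want to drop shell=True as well, the same call can be written with an argument list; a sketch, keeping the credentials from the question. Passing a list avoids shell quoting problems if fileName contains spaces or metacharacters.

import subprocess

exitcode = subprocess.call([
    "sshpass", "-p", "password", "scp", "-r",
    "admin@192.158.11.109:/home/admin/DontDeleteMe/%s" % fileName,
    "/home/admin/",
])
if exitcode != 0:
    print("scp failed with exit code %d" % exitcode)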

Cannot send commands to Docker unix socket via Python

I have a script that sends signals to containers using nc (specifically the OpenBSD version, which supports -U for Unix domain sockets):
echo -e "POST /containers/$HAPROXY_CONTAINER/kill?signal=HUP HTTP/1.0\r\n" | \
nc -U /var/run/docker.sock
I wanted to see if I could avoid the openbsd nc dependency or a socat dependency, so I tried to do the same thing in Python 3 with the following:
import socket
from os import environ

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect('/var/run/docker.sock')
sock.sendall(str.encode('POST /containers/{}/kill?signal=HUP HTTP/1.0\r\n'.format(environ['HAPROXY_CONTAINER'])))
I don't get any errors raised from the Python version, however my container doesn't receive the signal I'm attempting to send.
In the bash version, echo provides an additional newline. HTTP requires a blank line (two newlines) after the headers, so the Python sendall needed a second \n, like so:
sock.sendall(str.encode('POST /containers/{}/kill?signal=HUP HTTP/1.0\r\n\n'.format(environ['HAPROXY_CONTAINER'])))
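For reference, a variant that uses the standard \r\n\r\n header terminator and also reads Docker's HTTP response, which makes failures visible; a sketch, assuming the same HAPROXY_CONTAINER environment variable.

import socket
from os import environ

request = ("POST /containers/{}/kill?signal=HUP HTTP/1.0\r\n"
           "\r\n").format(environ["HAPROXY_CONTAINER"])

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect("/var/run/docker.sock")
sock.sendall(request.encode())
print(sock.recv(4096).decode())  # e.g. "HTTP/1.0 204 No Content"
sock.close()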

Filter output of a process with `grep` while keeping the return value

I think this is not really a Python question, but I'll explain exactly what I'm doing in order to provide the context.
I run a command on a remote machine using ssh -t <host> <command> like this:
import os

if os.system('ssh -t some_machine [ -d /some/directory ]') != 0:
    do_something()
(note: [ -d /some/directory ] is only an example; it could be replaced by any command which returns 0 when everything goes fine)
Unfortunately ssh prints "Connection to some_machine closed." every time I run it.
Stupidly I tried to run ssh -t some_machine <command> | grep -v "Connection" but this returns the result of grep of course.
So in short: in Python I'd like to run a process via ssh and evaluate its return value while filtering away some unwanted output.
Edit: this question suggests something like
<command> | grep -v "bla"; return ${PIPESTATUS[0]}
Indeed this might be an approach, but it seems to work with bash only; at least in zsh, PIPESTATUS seems not to be defined.
Use subprocess, and connect the two commands in Python rather than a shell pipeline.
from subprocess import Popen, PIPE, call

p1 = Popen(["ssh", "-t", "some_machine", "test", "-d", "/some/directory"],
           stdout=PIPE)
call(["grep", "-v", "Connection"], stdin=p1.stdout)
p1.stdout.close()
if p1.wait() != 0:  # p1.wait() gives the exit status of ssh, not of grep
    do_something()
Taking this a step further, try to avoid running external programs when unnecessary. You can examine the output of ssh directly in Python without using grep; for example, using the re library to examine the data read from p1.stdout yourself. You can also use a library like Paramiko to connect to the remote host instead of shelling out to run ssh.
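For example, with Paramiko the exit status of the remote command is available directly, and the "Connection to ... closed." message never shows up because it is printed by the ssh client itself; a sketch, reusing do_something from the question.

import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect("some_machine")

stdin, stdout, stderr = client.exec_command("test -d /some/directory")
if stdout.channel.recv_exit_status() != 0:  # exit status of the remote test
    do_something()
client.close()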

Killing process on Ubuntu with os.system() issue

I am trying to send a command from the Python shell to Ubuntu in order to find the process listening on a particular port and kill it:
port = 8000
os.system("netstat -lpn | grep %s" % port)
Output:
tcp 0 0 127.0.0.1:8000 0.0.0.0:* LISTEN 22000/python
Then:
os.system("kill -SIGTERM 22000")
but I got the following error:
sh: 1: kill: Illegal option -S
For some reason the full signal name -SIGTERM cannot be passed through to the OS; only -S gets interpreted. I can kill this process directly from the terminal without problems, so it seems to be a Python or os issue... How can I run the kill command using Python?
Any help is appreciated
os.system uses sh to execute the command, not bash which you get in a terminal. The kill builtin in sh requires giving the signal names without the SIG prefix. Change your os.system command line to kill -TERM 22000.
[EDIT] As #DJanssens suggested, using os.kill is a better option than calling the shell for such a simple thing.
You could try using:
import os
import signal

# process.pid is the PID of the process to kill (22000 in the example above).
os.kill(process.pid, signal.SIGKILL)
documentation can be found here.
You could also use signal.CTRL_C_EVENT (Windows only), which corresponds to the Ctrl+C keystroke event.
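Putting it together for this question, a sketch that finds the PID listening on the port with lsof and signals it from Python directly, with no shell involved; note that lsof exits non-zero when nothing listens on the port, which check_output reports as a CalledProcessError.

import os
import signal
import subprocess

port = 8000
# -t prints bare PIDs; -i :PORT selects processes listening on that port.
pids = subprocess.check_output(["lsof", "-t", "-i", ":%d" % port],
                               universal_newlines=True)
for pid in pids.split():
    os.kill(int(pid), signal.SIGTERM)  # same effect as: kill -TERM <pid>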
