I'm French, sorry if my English isn't perfect!
Before starting: if you want to try my code, you can download a pcap sample file here: https://wiki.wireshark.org/SampleCaptures?action=AttachFile&do=get&target=ipv4frags.pcap
I managed to open a pcap file, read packets, and write them to another file with this code:
# Python 3.6
# Scapy 2.4.3
from scapy.utils import PcapReader, PcapWriter
import time
i_pcap_filepath = "inputfile.pcap" # pcap to read
o_filepath = "outputfile.pcap" # pcap to write
i_open_file = PcapReader(i_pcap_filepath) # opened file to read
o_open_file = PcapWriter(o_filepath, append=True) # opened file to write
while 1:
    # this will eventually raise EOFError at end of file, but that's fine
    time.sleep(1)  # slow down so packets appear one by one
    packet = i_open_file.read_packet()  # read a packet from the file
    o_open_file.write(packet)  # write it
So now I want to write to a FIFO and watch the result in a live Wireshark window.
To do that, I just create a FIFO:
$ mkfifo /my/project/location/fifo.fifo
and launch Wireshark on it: $ wireshark -k -i /my/project/location/fifo.fifo
Then I change the output filepath in my Python script: o_filepath = "fifo.fifo" # fifo to write
But I get a crash... Here is the traceback:
Traceback (most recent call last):
  File "fifo.py", line 25, in <module>
    o_open_file = PcapWriter(o_pcap_filepath, append=True)
  File "/home/localuser/.local/lib/python3.6/site-packages/scapy/utils.py", line 1264, in __init__
    self.f = [open, gzip.open][gz](filename, append and "ab" or "wb", gz and 9 or bufsz)  # noqa: E501
OSError: [Errno 29] Illegal seek
Wireshark also gives me an error: "End of file on pipe magic during open".
I don't understand why, or what to do. Is it not possible to write to a FIFO using the scapy.utils library? If not, how should I do it?
Thank you for your support,
Nicos44k
The night was useful: I fixed my issue this morning!
I didn't understand the traceback yesterday, but it actually gives a big hint: we have a seek problem.
Wait... there is no seek on a FIFO!
So we cannot set the "append" parameter to True.
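To see why, here is a minimal sketch (assuming the fifo.fifo created earlier with mkfifo) showing that a FIFO rejects seeking with exactly this errno; opening a file in append mode triggers such a seek internally:

import os

# A FIFO has no file position, so any seek fails with ESPIPE (errno 29),
# which is what PcapWriter(..., append=True) runs into when opening in "ab" mode.
fd = os.open("fifo.fifo", os.O_RDWR)  # O_RDWR so the open call does not block
try:
    os.lseek(fd, 0, os.SEEK_END)
except OSError as e:
    print(e)  # [Errno 29] Illegal seek
finally:
    os.close(fd)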
I changed it to: o_open_file = PcapWriter(o_filepath)
and the error is gone.
However, packets were not showing up live...
To solve this, I needed to force a FIFO flush with: o_open_file.flush()
Remember that you can download a pcap sample file here: https://wiki.wireshark.org/SampleCaptures?action=AttachFile&do=get&target=ipv4frags.pcap
So here is the full code:
# Python 3.6
# Scapy 2.4.3
from scapy.utils import PcapReader, PcapWriter
import time
i_pcap_filepath = "inputfile.pcap" # pcap to read
o_filepath = "fifo.fifo" # pcap to write
i_open_file = PcapReader(i_pcap_filepath) # opened file to read
o_open_file = PcapWriter(o_filepath) # opened file to write
while 1:
# I will have EOF exception but anyway
time.sleep(1) # in order to see packet
packet = i_open_file.read_packet() # read a packet in file
o_open_file.write(packet) # write it
o_open_file.flush() # force buffered data to be written to the file
Have a good day !
Nicos44k
My objective is to split an H.264 stream into multiple parts: while reading the stream from a pipe, I would like to save it into x-second-long chunks (in my case 10 seconds).
I am using a libcamera-vid subprocess on my Raspberry Pi that outputs the H.264 stream to stdout.
This might be irrelevant, but: libcamera-vid also prints a status message for every frame, which I detect with isFrameStopLine in the code below.
To convert the stream, I use an ffmpeg subprocess, as you can see in the code below.
Imagine it like this:
Stream is running...
- Start recording to a file
- Sleep x seconds
- Finish recording to file
- Start recording a new file
- Sleep x seconds
- Finish recording the new file
- and so on...
Here is my current code. Upon running it, the first export succeeds, but after the second or third the ffmpeg subprocess terminates with the error:
pipe:: Invalid data found when processing input
And shortly after, the Python process dies too, because of the ffmpeg termination I believe:
Traceback (most recent call last):
  File "/home/survpi-camera/main.py", line 56, in <module>
    processStreamLine(readData)
  File "/home/survpi-camera/main.py", line 16, in processStreamLine
    streamInfo["process"].stdin.write(data)
BrokenPipeError: [Errno 32] Broken pipe
import subprocess
import time

recentStreamProcesses = []
streamInfo = {
    "lastStreamStart": -1,
    "process": None
}

def processStreamLine(data):
    isInfoLine = ((data.startswith(b"[") and (b"INFO" in data)) or (data == b"Preview window unavailable"))
    isFrameStopLine = (data.startswith(b"#") and (b" fps) exp" in data))
    if ((not isInfoLine) and (not isFrameStopLine)):
        streamInfo["process"].stdin.write(data)
    if (isFrameStopLine):
        if (time.time() - streamInfo["lastStreamStart"] >= 10):
            print("10 seconds passed, exporting...")
            exportStream()
            createNewStream()

def createNewStream():
    streamInfo["lastStreamStart"] = time.time()
    streamInfo["process"] = subprocess.Popen([
        "ffmpeg",
        "-r", "30",
        "-i", "-",
        "-c", "copy", ("/home/survpi-camera/" + str(round(time.time())) + ".mp4")
    ], stdin=subprocess.PIPE, stderr=subprocess.STDOUT)
    print("Created new streamProcess.")

def exportStream():
    print("Exporting...")
    streamInfo["process"].stdin.close()
    recentStreamProcesses.append(streamInfo["process"])

cameraProcess = subprocess.Popen([
    "libcamera-vid",
    "-t", "0",
    "--width", "1920",
    "--height", "1080",
    "--codec", "h264",
    "--inline",
    "--listen",
    "-o", "-"
], stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.STDOUT)

createNewStream()
while True:
    readData = cameraProcess.stdout.readline()
    processStreamLine(readData)
Thank you in advance!
I'm trying to pipe output from cURL into a Python module with the following line in CMD:
curl https://api.particle.io/v1/devices/e00fce68515bfa5f850de016/events?access_token=ae40788c6dba577144249fec95afdeadb18e6bec | pythonmodule.py
When curl is run by itself (without "| pythonmodule.py"), it streams data continuously every 30 seconds (it's connected to an Argon IoT device with a temperature and humidity sensor), printing the real-time temperature and humidity perfectly. But when I try to redirect the output via the pipe, it only seems to work once; it doesn't run the Python module continuously every time new data is provided.
I tried to use requests.get(), but since it's a continuous stream it seems to freeze on the get().
Can someone explain how this cURL stream actually works?
Concerning freezing on a continuous stream: you can use the Body Content Workflow from requests to avoid waiting for the whole content to download at once:
import requests

with requests.get('your_url', stream=True) as response:
    for line in response.iter_lines(decode_unicode=True):
        if line:
            print(line)
Output:
:ok
event: SensorVals
data: {"data":"{humidity: 30.000000, temp: 24.000000}","ttl":60,"published_at":"2019-11-28T13:53:04.592Z","coreid":"e00fce68515bfa5f850de016"}
event: SensorVals
data: {"data":"{humidity: 29.000000, temp: 24.000000}","ttl":60,"published_at":"2019-11-28T13:53:34.604Z","coreid":"e00fce68515bfa5f850de016"}
...
https://requests.readthedocs.io/en/master/user/advanced/#body-content-workflow
I'm assuming here that by "only seems to work once" you mean that the command exits after the data is received for the first time. It might be that your Python script stops reading after that first line.
Looping over the stdin might solve your issue:
import sys

for n, line in enumerate(sys.stdin):
    if line.strip() != "":
        print(n, line)
Using a command like:
curl -sN https://api.particle.io/v1/devices/e00fce68515bfa5f850de016/events?access_token=ae40788c6dba577144249fec95afdeadb18e6bec | python blah.py
will result in:
0 :ok
3 event: SensorVals
4 data: {"data":"{humidity: 30.000000, temp: 24.000000}","ttl":60,"published_at":"2019-11-28T13:50:34.459Z","coreid":"e00fce68515bfa5f850de016"}
9 event: SensorVals
10 data: {"data":"{humidity: 30.000000, temp: 24.000000}","ttl":60,"published_at":"2019-11-28T13:51:04.608Z","coreid":"e00fce68515bfa5f850de016"}
^CTraceback (most recent call last):
  File "blah.py", line 3, in <module>
    for n, line in enumerate(sys.stdin):
KeyboardInterrupt
My Raspberry Pi is connected to a microcontroller over serial pins. I am trying to read the data from the serial port. The script reads the data for a few seconds, but then terminates, throwing the following exception:
serial.serialutil.SerialException: device reports readiness to read but returned no data (device disconnected?)
I have used the following Python code:
#!/usr/bin/python
import serial
import time

serialport = serial.Serial("/dev/ttyAMA0", 115200, timeout=.5)
while 1:
    response = serialport.readlines(None)
    print response
    time.sleep(.05)
serialport.close()
Here is the code you should be using if you are seriously trying to just transfer and print a file:
while True:
    line = serialport.readline()
    print line
------------------------------------------------------------
I believe you are having problems because you are using readlines(None) instead of readline(). readline() reads one line at a time, and will wait for each one. When reading a whole file it is slower than readlines(), but readlines() expects the whole file to be available at once; it obviously does not wait for your serial transfer speed.
--------------------------------------------------
My data-logging loop receives a line every two minutes and writes it to a file. It could easily just print each line like you show in the OP.
readline() waits for each line. I have tested it waiting up to 30 minutes between lines with no problems, by altering the program on the Nano.
import datetime
import serial

ser = serial.Serial("/dev/ttyUSB0", 9600)  # /dev/ttyACM0 is fine too
tab = "\t"  # tab separator
while True:
    linein = ser.readline()
    date = str(datetime.datetime.now().date())
    date = date[:10]
    time = str(datetime.datetime.now().time())
    time = time[:8]
    outline = date + tab + time + tab + linein
    f = open("/home/pi/python/today.dat", "a")
    f.write(outline)
    f.close()
Maybe changing to this approach would be better for you.
I've got a problem trying to open a .pcap file. In scapy.utils there is RawPcapReader:
try:
    self.f = gzip.open(filename, "rb")
    magic = self.f.read(4)
except IOError:
    self.f = open(filename, "rb")
    magic = self.f.read(4)
if magic == "\xa1\xb2\xc3\xd4":  # big endian
    self.endian = ">"
elif magic == "\xd4\xc3\xb2\xa1":  # little endian
    self.endian = "<"
else:
    raise Scapy_Exception("Not a pcap capture file (bad magic)")
hdr = self.f.read(20)
if len(hdr) < 20:
    raise Scapy_Exception("Invalid pcap file (too short)")
My magic has the value "\n\r\r\n", but RawPcapReader expects magic == "\xa1\xb2\xc3\xd4" or magic == "\xd4\xc3\xb2\xa1".
Could you tell me what the problem might be? Is it the .pcap file? I'm using Python version 2.7.
The magic value of "\n\r\r\n" (\x0A\x0D\x0D\x0A) indicates that your file is actually in the .pcapng format, rather than libpcap.
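As a quick check, you can read the first four bytes yourself. Here is a minimal sketch (the filename "capture.pcapng" is a placeholder):

# Read the magic number to tell classic pcap apart from pcapng.
with open("capture.pcapng", "rb") as f:
    magic = f.read(4)

if magic in (b"\xa1\xb2\xc3\xd4", b"\xd4\xc3\xb2\xa1"):
    print("classic libpcap format")
elif magic == b"\x0a\x0d\x0d\x0a":
    print("pcapng format (Section Header Block)")
else:
    print("unknown capture format")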
The solution is simple:
In Wireshark, use 'Save As' and pick: Wireshark/tcpdump - pcap
Or use tshark:
$ tshark -r old.pcapng -w new.pcap -F libpcap
As an alternative to saving the file in pcap format, scapy now has PcapNgReader, so you could do:
from scapy.utils import PcapNgReader
mypcap = PcapNgReader(filename)
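You can then iterate over the packets just as with PcapReader, for example (a minimal sketch; the filename is a placeholder):

from scapy.utils import PcapNgReader

# Iterate packets straight out of a pcapng capture.
for packet in PcapNgReader("capture.pcapng"):
    print(packet.summary())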
I have been pulling my hair out trying to get a proxy working. I need to decrypt the packets from a server and client (these may arrive out of order), then decompress everything but the packet header.
The first two packets (10101 and 20104) are not compressed, and they decrypt, destruct, and decompile properly.
Alas, to no avail; FAIL!: zlib.error: Error -5 while decompressing data: incomplete or truncated stream
I get the same error when attempting to decompress the encrypted version of the packet.
When I include the packet header, I get a -3 error instead.
I have also tried changing -zlib.MAX_WBITS to zlib.MAX_WBITS, as well as a few others, but still get the same error.
Here's the code:
import socket, sys, os, struct, zlib
from Crypto.Cipher import ARC4 as rc4
cwd = os.getcwd()
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
ss = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('192.168.2.12',9339))
s.listen(1)
client, addr = s.accept()
key = "fhsd6f86f67rt8fw78fw789we78r9789wer6renonce"
cts = rc4.new(key)
stc = rc4.new(key)
skip = 'a'*len(key)
cts.encrypt(skip)
stc.encrypt(skip)
ss.connect(('game.boombeachgame.com',9339))
ss.settimeout(0.25)
s.settimeout(0.25)
def io():
    while True:
        try:
            pack = client.recv(65536)
            decpack = cts.decrypt(pack[7:])
            msgid, paylen = dechead(pack)
            if msgid != 10101:
                decopack = zlib.decompress(decpack, -zlib.MAX_WBITS)
            print "ID:", msgid
            print "Payload Length", paylen
            print "Payload:\n", decpack
            ss.send(pack)
            dump(msgid, decpack)
        except socket.timeout:
            pass
        try:
            pack = ss.recv(65536)
            msgid, paylen = dechead(pack)
            decpack = stc.decrypt(pack[7:])
            if msgid != 20104:
                decopack = zlib.decompress(decpack, -zlib.MAX_WBITS)
            print "ID:", msgid
            print "Payload Length", paylen
            print "Payload:\n", decpack
            client.send(pack)
            dump(msgid, decpack)
        except socket.timeout:
            pass

def dump(msgid, decpack):
    global cwd
    pdf = open(cwd + "/" + str(msgid) + ".bin", 'wb')
    pdf.write(decpack)
    pdf.close()

def dechead(pack):
    msgid = struct.unpack('>H', pack[0:2])[0]
    print int(struct.unpack('>H', pack[5:7])[0])
    payload_bytes = struct.unpack('BBB', pack[2:5])
    payload_len = ((payload_bytes[0] & 255) << 16) | ((payload_bytes[1] & 255) << 8) | (payload_bytes[2] & 255)
    return msgid, payload_len

io()
I realize it's messy, disorganized and very bad, but it all works as intended minus the decompression.
Yes, I am sure the packets are zlib compressed.
What is going wrong here and why?
Full Traceback:
Traceback (most recent call last):
  File "bbproxy.py", line 68, in <module>
    io()
  File "bbproxy.py", line 33, in io
    decopack = zlib.decompress(decpack, zlib.MAX_WBITS)
zlib.error: Error -5 while decompressing data: incomplete or truncated stream
I ran into the same problem while trying to decompress a file using zlib with Python 2.7. The issue had to do with the size of the stream (or file input) exceeding the size that could be buffered in memory at once. (My PC has 16 GB of memory, so it was not exceeding the physical memory size, but the default buffer size is 16384.)
The easiest fix was to change the code from:
import zlib
f_in = open('my_data.zz', 'rb')
comp_data = f_in.read()
data = zlib.decompress(comp_data)
To:
import zlib
f_in = open('my_data.zz', 'rb')
comp_data = f_in.read()
zobj = zlib.decompressobj() # obj for decompressing data streams that won’t fit into memory at once.
data = zobj.decompress(comp_data)
It handles the stream by buffering it and feeding it into the decompressor in manageable chunks.
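For very large inputs you can go further and feed the decompressor chunk by chunk instead of reading the whole file first. A minimal sketch, reusing the same (hypothetical) my_data.zz:

import zlib

zobj = zlib.decompressobj()
data = b""
with open('my_data.zz', 'rb') as f_in:
    # Feed 16 KiB at a time; the object keeps its state between calls.
    for chunk in iter(lambda: f_in.read(16384), b""):
        data += zobj.decompress(chunk)
data += zobj.flush()  # collect whatever remains in the internal buffer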
I hope this helps to save you time trying to figure out the problem. I had help from my friend Jordan! I was trying all kinds of different window sizes (wbits).
Edit: even with the code below working on partial gz files, for some files I still got an empty byte array when decompressing, and everything I tried kept returning empty even though the function succeeded. Eventually I resorted to running a gunzip process, which always works:
import subprocess

def gunzip_string(the_string):
    proc = subprocess.Popen('gunzip', stdout=subprocess.PIPE,
                            stdin=subprocess.PIPE, stderr=subprocess.DEVNULL)
    proc.stdin.write(the_string)
    proc.stdin.close()
    body = proc.stdout.read()
    proc.wait()
    return body
Note that the gunzip process can exit with a non-zero code, indicating that the input string was incomplete, but it still performs the decompression; that is why stderr is swallowed above. You may wish to check the exit status to allow for this case.
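For instance, here is a variant of the function above that surfaces the exit status (gunzip_string_checked is a hypothetical name; the non-zero code is tolerated because partial input can still yield usable output):

import subprocess

def gunzip_string_checked(the_string):
    proc = subprocess.Popen('gunzip', stdout=subprocess.PIPE,
                            stdin=subprocess.PIPE, stderr=subprocess.DEVNULL)
    proc.stdin.write(the_string)
    proc.stdin.close()
    body = proc.stdout.read()
    rc = proc.wait()
    if rc != 0:
        # Non-zero usually means truncated input; the decompressed
        # prefix in `body` is often still valid.
        print("gunzip exited with code", rc)
    return body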
/edit
I think the zlib decompression library is throwing an exception because you are not passing in a complete stream, just a 65536-byte chunk from ss.recv(65536). If you change from this:
decopack = zlib.decompress(decpack, -zlib.MAX_WBITS)
to
decompressor = zlib.decompressobj(-zlib.MAX_WBITS)
decopack = decompressor.decompress(decpack)
it should work, since a decompression object is built to handle streaming input.
As the docs say:
zlib.decompressobj - Returns a decompression object, to be used for decompressing data streams that won’t fit into memory at once.
And even if the data does fit into memory, you might only have the beginning of the stream so far.
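Here is a minimal self-contained sketch of that behavior (the synthetic raw-deflate data stands in for the question's decpack):

import zlib

# Build a raw-deflate stream, then truncate it to simulate a partial packet.
comp = zlib.compressobj(9, zlib.DEFLATED, -zlib.MAX_WBITS)
stream = comp.compress(b"hello " * 1000) + comp.flush()
partial = stream[:len(stream) // 2]

# zlib.decompress(partial, -zlib.MAX_WBITS) would raise Error -5 here;
# a decompression object simply returns whatever it can decode so far.
decompressor = zlib.decompressobj(-zlib.MAX_WBITS)
print(decompressor.decompress(partial))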
Try this:
decopack = zlib.decompressobj(-zlib.MAX_WBITS).decompress(decpack)