Hi guys, I have two sections of code that should be equivalent, but I don't get the same results when running them. What is the difference between them?
Here is the first and working section :
packet = struct.pack(">BHHLH", relayCmd, 0, streamId, 0, len(payload)) + payload
and here is the second, non-working section:
packet = struct.pack(">B", relayCmd)
packet += struct.pack("H", 0)
packet += struct.pack("H", streamId)
packet += struct.pack("L", 0)
packet += struct.pack("H", len(payload))
packet += payload
In the first version you specify big-endian byte order with ">", and all of the format's parameters are encoded that way. In the second example you specify big-endian only in the first line; all of the other parameters are encoded using the system's native byte order and alignment ("@" is the default).
From the struct documentation:
Note: By default, the result of packing a given C struct includes pad bytes in order to maintain proper alignment for the C types involved; similarly, alignment is taken into account when unpacking. This behavior is chosen so that the bytes of a packed struct correspond exactly to the layout in memory of the corresponding C struct. To handle platform-independent data formats or omit implicit pad bytes, use standard size and alignment instead of native size and alignment: see Byte Order, Size, and Alignment for details.
You used the default @ alignment (native size and alignment) when you didn't specify a byte-order character for the 4 additional lines. You only used > (standard size and alignment) for the first relayCmd field.
As a result, the sizes produced are different:
>>> import struct
>>> struct.calcsize('>BHHLH')
11
>>> struct.calcsize('>B')
1
>>> struct.calcsize('H')
2
>>> struct.calcsize('L')
8
>>> 1 + 3 * 2 + 8
15
The difference is in the padded L; if you use the > big endian marker for all pack() calls it only takes four bytes:
>>> struct.calcsize('>L')
4
So this works:
packet = struct.pack(">B", relayCmd)
packet += struct.pack(">H", 0)
packet += struct.pack(">H", streamId)
packet += struct.pack(">L", 0)
packet += struct.pack(">H", len(payload))
packet += payload
You have to prepend > to each format string so that everything is big-endian with standard sizes.
#!/usr/bin/env python2
import struct
relayCmd = 170
streamId = 10000
payload = "A"
packet = struct.pack(">BHHLH", relayCmd, 0, streamId, 0, len(payload)) + payload
print(''.join("{:02x} ".format(ord(i)) for i in packet))
packet = struct.pack(">B", relayCmd)
packet += struct.pack(">H", 0)
packet += struct.pack(">H", streamId)
packet += struct.pack(">L", 0)
packet += struct.pack(">H", len(payload))
packet += payload
print(''.join("{:02x} ".format(ord(i)) for i in packet))
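As an aside, if what you want is native byte order but without the implicit padding, the "=" prefix gives standard sizes with native byte order; only a bare format string falls back to native sizes and alignment. A quick check:

```python
import struct

# '=' : native byte order, but standard sizes and no padding
assert struct.calcsize('=L') == 4
assert struct.calcsize('=BHHLH') == 11  # same total as '>BHHLH'

# A bare format string uses native size and alignment, so 'L' may be
# 8 bytes on a 64-bit platform:
assert struct.calcsize('L') >= 4
```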
I am new to networking and trying to implement a network calculator using Python 3, where the client's responsibility is to send operands and operators and the server calculates the result and sends it back to the client. Communication is through UDP messages, and I am working on the client side. Each message consists of a header and a payload, described in the figures below.
UDP header:
I am familiar with sending string messages using sockets, but I'm having a hard time with how to build a message with both a header and a payload, how to assign the bits for the various attributes, and how to generate the message/client IDs in the header (and whether there is any way to generate the IDs automatically). Any help or suggestions will be highly appreciated.
Thanks in advance
I will only do a portion of your homework.
I hope it will help you find the energy to work on the missing parts.
import struct
import socket

CPROTO_ECODE_REQUEST, CPROTO_ECODE_SUCCESS, CPROTO_ECODE_FAIL = (0, 1, 2)

ver = 1   # version of protocol
mid = 0   # initial value
cid = 99  # client Id (arbitrary)

sock = socket.socket( ...)  # to be customized

def sendRecv(num1, op, num2):
    global mid
    ocs = ("+", "-", "*", "/").index(op)
    byte0 = ver + (ocs << 3) + (CPROTO_ECODE_REQUEST << 6)
    hdr = struct.pack("!BBH", byte0, mid, cid)
    parts1 = (b'0000' + num1.encode() + b'0000').split(b'.')
    parts2 = (b'0000' + num2.encode() + b'0000').split(b'.')
    msg = hdr + parts1[0][-4:] + parts1[1][:4] + parts2[0][-4:] + parts2[1][:4]
    sock.send(msg)         # send request
    bufr = sock.recv(512)  # get answer
    # to do:
    #   complete sock.send and sock.recv
    #   unpack bufr into: verr, ecr, opr, value_i, value_f
    #   verify that verr, ecr, opr are appropriate
    #   combine value_i and value_f into answer
    mid += 1
    return answer

result = sendRecv('2.47', '+', '46.234')
There are many elements that haven't been specified by your teacher:
What should the byte ordering on the network be (big-endian or little-endian)? The example above assumes big-endian, but you can easily modify the 'pack' statement to use little-endian.
What should the program do if the received packet header is invalid?
What should the program do if there's no answer from server?
Payload: how should we interpret "4 most significant digits of fraction"? Does that mean that the value is in ASCII? That's not specified.
Payload: assuming the fraction is in ASCII, should it be right-justified or left-justified in the packet?
Payload: same question for integer portion.
Payload: if the values are in binary, are they signed or unsigned? It will have an effect on the unpacking statement.
In the program above, I assumed that:
values are positive and in ASCII (without sign)
integer portion is right-justified
fractional portion is left-justified
Have fun!
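The unpacking step left as a "to do" above could be sketched like this; the function name parse_response and the exact response layout are my assumptions, mirroring the request header built with "!BBH":

```python
import struct

def parse_response(bufr):
    # Hypothetical layout mirroring the request: version in the low
    # 3 bits of byte0, opcode in the next 3, error code in the top 2.
    byte0, midr, cidr = struct.unpack("!BBH", bufr[:4])
    verr = byte0 & 0x07                # protocol version
    opr = (byte0 >> 3) & 0x07          # opcode echoed back
    ecr = (byte0 >> 6) & 0x03          # CPROTO_ECODE_* value
    value_i = int(bufr[4:8].decode())  # right-justified ASCII integer part
    value_f = bufr[8:12].decode()      # left-justified ASCII fraction digits
    answer = float("%d.%s" % (value_i, value_f))
    return verr, ecr, opr, answer
```

The same bit masks would then let you verify verr and ecr before trusting the returned value.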
I would like to send fragmented packets with a size of 8 bytes and a random starting offset. I also want to leave out the last fragment.
So far I have got everything working except the fragment offset:
from scapy.all import *
from random import randint
dip = "MY.IP.ADD.RESS"
payload = "A" * 250 + "B" * 500
packet = IP(dst=dip, id=12345, frag=123) / UDP(sport=1500, dport=1501) / payload
frags = fragment(packet, fragsize=8)
print(packet.show())
for f in frags:
    send(f)
What does the above code do?
It sends 8-byte IP fragment packets to a destination IP address.
I would like to send IP Fragment Packets with a random Frag Offset.
I can't find anything about fragment(), and the only field I was able to edit was on the original IP packet, not on each fragmented IP packet.
Does someone have an idea how to accomplish this?
Info: Python 2.7, latest version of scapy (from pip)
If you want to generate "broken" fragment offset fields, you have to do that yourself. The scapy fragment() function is simple enough:
def fragment(pkt, fragsize=1480):
    """Fragment a big IP datagram"""
    fragsize = (fragsize + 7) // 8 * 8
    lst = []
    for p in pkt:
        s = raw(p[IP].payload)
        nb = (len(s) + fragsize - 1) // fragsize
        for i in range(nb):
            q = p.copy()
            del(q[IP].payload)
            del(q[IP].chksum)
            del(q[IP].len)
            if i != nb - 1:
                q[IP].flags |= 1
            q[IP].frag += i * fragsize // 8  # <---- CHANGE THIS
            r = conf.raw_layer(load=s[i * fragsize:(i + 1) * fragsize])
            r.overload_fields = p[IP].payload.overload_fields.copy()
            q.add_payload(r)
            lst.append(q)
    return lst
Source: https://github.com/secdev/scapy/blob/652b77bf12499451b47609b89abc663aa0f69c55/scapy/layers/inet.py#L891
If you change the marked code line above, you can set the fragment offset to whatever you want.
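For reference, the fragment offset is a 13-bit field measured in 8-byte units, and it shares a 16-bit big-endian header word with the 3 flag bits. A quick sketch of how that word would be packed by hand, independent of scapy (pack_flags_frag is a hypothetical helper, not a scapy API):

```python
import struct

def pack_flags_frag(flags, frag_offset):
    """Pack the IP flags (3 bits) and fragment offset (13 bits, in
    8-byte units) into the single big-endian 16-bit header word."""
    assert 0 <= flags <= 0x7 and 0 <= frag_offset <= 0x1FFF
    return struct.pack(">H", (flags << 13) | frag_offset)

# MF (More Fragments) flag set, offset 123 * 8 = 984 bytes into the datagram:
word = pack_flags_frag(0b001, 123)
```

This is the word that changing `q[IP].frag` ultimately affects when scapy serializes the packet.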
Specifically in Python 2.4, which is unfortunately old, I need to convert a length into a hex value. A length of 1 would be '\x00\x01', while a length of 65535 would be '\xFF\xFF'.
import struct
hexdict = {'0':'\x00\x00', '1':'\x00\x01', '2':'\x00\x02', '3':'\x00\x03', '4':'\x00\x04', '5':'\x00\x05', '6':'\x00\x06', '7':'\x00\x07', '8':'\x00\x08', '9':'\x00\x09', 'a':'\x00\x0a', 'b':'\x00\x0b', 'c':'\x00\x0c', 'd':'\x00\x0d', 'e':'\x00\x0e', 'f':'\x00\x0f'}
def convert(int_value):  # Not in original request
    encoded = format(int_value, 'x')
    length = len(encoded)
    encoded = encoded.zfill(length + length % 2)
    retval = encoded.decode('hex')
    if int_value < 256:
        retval = '\x00' + retval
    return retval

for x in range(16):
    print hexdict[str(hex(x)[-1])]  # Original, terrible method
    print convert(x)                # Slightly better method
    print struct.pack(">H", x)      # Best method
Aside from having a dictionary like above, how can I convert an arbitrary number <= 65535 into this hex string representation, filling 2 bytes of space?
Thanks to Linuxios, plus an answer I found while waiting, I now have three methods to do this. Obviously, Linuxios' answer is the best, unless for some reason importing struct is not desired.
Using Python's built-in struct package:
import struct
struct.pack(">H", x)
For example, struct.pack(">H", 1) gives '\x00\x01' and struct.pack(">H", 65535) gives '\xff\xff'.
With a background in C, I want to serialize an integer number to 3 bytes. I searched a lot and found out that I should use struct packing. I want something like this:
number = 1195855
buffer = struct.pack("format_string", number)
Now I expect buffer to be something like ['\x12' '\x3F' '\x4F']. Is it also possible to set endianness?
It is possible, using either > or < in your format string:
import struct
number = 1195855
def print_buffer(buffer):
    print(''.join(["%02x" % ord(b) for b in buffer]))  # Python 2
    # print(buffer.hex())  # Python 3
# Little Endian
buffer = struct.pack("<L", number)
print_buffer(buffer) # 4f3f1200
# Big Endian
buffer = struct.pack(">L", number)
print_buffer(buffer) # 00123f4f
2.x docs
3.x docs
Note, however, that you're going to have to figure out how you want to get rid of the empty byte in the buffer, since L will give you 4 bytes and you only want 3.
Something like:
buffer = struct.pack("<L", number)
print_buffer(buffer[:3]) # 4f3f12
# Big Endian
buffer = struct.pack(">L", number)
print_buffer(buffer[-3:]) # 123f4f
would be one way.
Another way is to manually pack the bytes:
>>> import struct
>>> number = 1195855
>>> data = struct.pack('BBB',
... (number >> 16) & 0xff,
... (number >> 8) & 0xff,
... number & 0xff,
... )
>>> data
b'\x12?O'
>>> list(data)
[18, 63, 79]
As just the 3 bytes, it's a bit redundant, since the last 3 arguments to struct.pack already are the data. But this worked well in my case because I had header and footer bytes surrounding the unsigned 24-bit integer.
Whether this method, or slicing is more elegant is up to your application. I found this was cleaner for my project.
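Going the other way, i.e. reading the 3 bytes back into an integer, can be done by re-padding to 4 bytes before unpacking. A small sketch combining both directions (pack24/unpack24 are names I made up for illustration, Python 3):

```python
import struct

def pack24(number):
    """Pack an unsigned integer < 2**24 into 3 big-endian bytes."""
    return struct.pack(">L", number)[-3:]  # drop the leading zero byte

def unpack24(data):
    """Unpack 3 big-endian bytes back into an unsigned integer."""
    return struct.unpack(">L", b"\x00" + data)[0]  # re-pad to 4 bytes

assert unpack24(pack24(1195855)) == 1195855
```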
I want to write a Python program that makes PNG files. My big problem is with generating the CRC and the data in the IDAT chunk. Python 2.6.4 does have a zlib module, but there are extra settings needed. The PNG specification REQUIRES the IDAT data to be compressed with zlib's deflate method with a window size of 32768 bytes, but I can't find how to set those parameters in the Python zlib module.
As for the CRC for each chunk, the zlib module documentation indicates that it contains a CRC function. I believe that calling that CRC function as crc32(data,-1) will generate the CRC that I need, though if necessary I can translate the C code given in the PNG specification.
Note that I can generate the rest of the PNG file and the data that is to be compressed for the IDAT chunk, I just don't know how to properly compress the image data for the IDAT chunk after implementing the initial filtering step.
EDITED:
The problem with PyPNG is that it will not write tEXt chunks. A minor annoyance is that one has to manipulate the image as (R, G, B) data; I'd prefer to manipulate palette values of the pixels directly and then define the associations between palette values and color data. I'm also left unsure if PyPNG takes advantage of the "compression" allowed by using 1-, 2-, and 4- bit palette values in the image data to fit more than one pixel in a byte.
Even if you can't use PyPNG for the tEXt chunk reason, you can use its code! (it's MIT licensed). Here's how a chunk is written:
def write_chunk(outfile, tag, data=''):
    """
    Write a PNG chunk to the output file, including length and
    checksum.
    """
    # http://www.w3.org/TR/PNG/#5Chunk-layout
    outfile.write(struct.pack("!I", len(data)))
    outfile.write(tag)
    outfile.write(data)
    checksum = zlib.crc32(tag)
    checksum = zlib.crc32(data, checksum)
    outfile.write(struct.pack("!i", checksum))
Note the use of zlib.crc32 to create the CRC checksum, and also note how the checksum runs over both the tag and the data.
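Passing the running value as the second argument chains the CRC, so the two calls above are equivalent to a single CRC over tag + data:

```python
import zlib

tag, data = b'IDAT', b'example chunk payload'

# CRC chained across two calls, as in write_chunk():
checksum = zlib.crc32(tag)
checksum = zlib.crc32(data, checksum)

# ...is identical to one CRC over the concatenation:
assert checksum == zlib.crc32(tag + data)
```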
For compressing the IDAT chunks you basically just use zlib. As others have noted the adler checksum and the default window size are all okay (by the way the PNG spec does not require a window size of 32768, it requires that the window is at most 32768 bytes; this is all a bit odd, because in any case 32768 is the maximum window size permitted by the current version of the zlib spec).
The code to do this in PyPNG is not particularly great; see the write_passes() function. The bit that actually compresses the data and writes a chunk is this:
compressor = zlib.compressobj()
compressed = compressor.compress(tostring(data))
if len(compressed):
    # print >> sys.stderr, len(data), len(compressed)
    write_chunk(outfile, 'IDAT', compressed)
PyPNG never uses scanline filtering. Partly this is because it would be very slow in Python, partly because I haven't written the code. If you have Python code to do filtering, it would be a most welcome contribution to PyPNG. :)
Short answer: (1) "deflate" and "32Kb window" are the defaults (2) uses adler32 not crc32
Long answer:
""" The PNG specification REQUIRES the IDAT data to be compressed with zlib's deflate method with a window size of 32768 bytes, but I can't find how to set those parameters in the Python zlib module. """
You don't need to set them. Those are the defaults.
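A quick way to convince yourself: spelling out the documented defaults explicitly (default level, deflate method, wbits=15, i.e. a 32768-byte window) produces byte-identical output to plain zlib.compress:

```python
import zlib

data = b"IDAT scanline data" * 100

# Explicit arguments: default level, deflate method, 15-bit (32 KiB) window
co = zlib.compressobj(zlib.Z_DEFAULT_COMPRESSION, zlib.DEFLATED, 15)
explicit = co.compress(data) + co.flush()

# Identical to relying on the defaults:
assert explicit == zlib.compress(data)
```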
If you really want to specify non-default arguments to zlib, you can use zlib.compressobj() ... it has several args that are not documented in the Python docs. Reading material:
source: Python's gzip.py (see how it calls zlib.compressobj)
source: Python's zlibmodule.c (see its defaults)
SO: This question (see answers by MizardX and myself, and comments on each)
docs: The manual on the zlib site
"""As for the CRC for each chunk, the zlib module documentation indicates that it contains a CRC function. I believe that calling that CRC function as crc32(data,-1) will generate the CRC that I need, though if necessary I can translate the C code given in the PNG specification."""
Please check out the zlib specification aka RFC 1950 ... it says that the checksum used is adler32
The zlib compress or compressobj output will include the appropriate CRC; why do you think that you will need to do it yourself?
Edit So you do need a CRC-32. Good news: zlib.crc32() will do the job:
Code:
import zlib

crc_table = None

def make_crc_table():
    global crc_table
    crc_table = [0] * 256
    for n in xrange(256):
        c = n
        for k in xrange(8):
            if c & 1:
                c = 0xedb88320L ^ (c >> 1)
            else:
                c = c >> 1
        crc_table[n] = c

make_crc_table()

"""
/* Update a running CRC with the bytes buf[0..len-1]--the CRC
   should be initialized to all 1's, and the transmitted value
   is the 1's complement of the final running CRC (see the
   crc() routine below)). */
"""

def update_crc(crc, buf):
    c = crc
    for byte in buf:
        c = crc_table[int((c ^ ord(byte)) & 0xff)] ^ (c >> 8)
    return c

# /* Return the CRC of the bytes buf[0..len-1]. */
def crc(buf):
    return update_crc(0xffffffffL, buf) ^ 0xffffffffL

if __name__ == "__main__":
    tests = [
        "",
        "\x00",
        "\x01",
        "Twas brillig and the slithy toves did gyre and gimble in the wabe",
    ]
    for test in tests:
        model = crc(test) & 0xFFFFFFFFL
        zlib_result = zlib.crc32(test) & 0xFFFFFFFFL
        print (model, zlib_result, model == zlib_result)
Output from Python 2.7 is below. Also tested with Python 2.1 to 2.6 inclusive, and with 1.5.2 just for the heck of it.
(0L, 0L, True)
(3523407757L, 3523407757L, True)
(2768625435L, 2768625435L, True)
(4186783197L, 4186783197L, True)
Don't you want to use some existing software to generate your PNGs? How about PyPNG?
There are libraries that can write PNG files for you, such as PIL. That will be easier and faster, and as an added bonus you can read and write tons of formats.
It looks like you will have to resort to calling zlib "by hand" using ctypes --
It is not that hard:
>>> import ctypes
>>> z = ctypes.cdll.LoadLibrary("libz.so.1")
>>> z.zlibVersion.restype=ctypes.c_char_p
>>> z.zlibVersion()
'1.2.3'
You can check the zlib library documentation here: http://zlib.net/manual.html
The zlib.crc32 works fine, and the zlib compressor has correct defaults for png generation.
For the casual reader who looks for png generation from Python code, here is a complete example that you can use as a starter for your own png generator code - all you need is the standard zlib module and some bytes-encoding:
#! /usr/bin/python
""" Converts a list of list into gray-scale PNG image. """
__copyright__ = "Copyright (C) 2014 Guido Draheim"
__licence__ = "Public Domain"

import zlib
import struct

def makeGrayPNG(data, height = None, width = None):
    def I1(value):
        return struct.pack("!B", value & (2**8-1))
    def I4(value):
        return struct.pack("!I", value & (2**32-1))
    # compute width&height from data if not explicit
    if height is None:
        height = len(data)  # rows
    if width is None:
        width = 0
        for row in data:
            if width < len(row):
                width = len(row)
    # generate these chunks depending on image type
    makeIHDR = True
    makeIDAT = True
    makeIEND = True
    png = b"\x89" + "PNG\r\n\x1A\n".encode('ascii')
    if makeIHDR:
        colortype = 0    # true gray image (no palette)
        bitdepth = 8     # with one byte per pixel (0..255)
        compression = 0  # zlib (no choice here)
        filtertype = 0   # adaptive (each scanline separately)
        interlaced = 0   # no
        IHDR = I4(width) + I4(height) + I1(bitdepth)
        IHDR += I1(colortype) + I1(compression)
        IHDR += I1(filtertype) + I1(interlaced)
        block = "IHDR".encode('ascii') + IHDR
        png += I4(len(IHDR)) + block + I4(zlib.crc32(block))
    if makeIDAT:
        raw = b""
        for y in xrange(height):
            raw += b"\0"  # no filter for this scanline
            for x in xrange(width):
                c = b"\0"  # default black pixel
                if y < len(data) and x < len(data[y]):
                    c = I1(data[y][x])
                raw += c
        compressor = zlib.compressobj()
        compressed = compressor.compress(raw)
        compressed += compressor.flush()  #!!
        block = "IDAT".encode('ascii') + compressed
        png += I4(len(compressed)) + block + I4(zlib.crc32(block))
    if makeIEND:
        block = "IEND".encode('ascii')
        png += I4(0) + block + I4(zlib.crc32(block))
    return png

def _example():
    with open("cross3x3.png","wb") as f:
        f.write(makeGrayPNG([[0,255,0],[255,255,255],[0,255,0]]))