How to hash a CAN message? - python

I want to hash a CAN message received from a vehicle. The following Python code receives a CAN message from the vehicle (dev.recv()) and sends the received message back out (dev.send()). I want to hash the CAN message returned by dev.recv() before sending it with dev.send(). Is this possible? If so, how can it be done?
from canard.hw import socketcan

dev = socketcan.SocketCanDev('can0')
dev.start()

while True:
    f = dev.recv()
    dev.send(f)

I am not sure what the data type of "f", the value returned by recv(), is.
I am guessing that SocketCanDev is just a wrapper for the device and that recv() behaves much like read(), so "f" in your code can be treated as an array of bytes, or chars.
Hashing is applied to an array of bytes, regardless of how the string is formatted, and the resulting digest does not depend on the input's data type.
Therefore, in your case:
while True:
    f = dev.recv()
    result = hash_function(f)
    dev.send(result)  # result should be in a data type that send() accepts
hash_function can be replaced with an actual function from a hashing library, such as "hashlib".
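For example, here is a minimal sketch of that idea using hashlib. It assumes the frame returned by recv() exposes its payload as a sequence of integers in f.data (that attribute name is an assumption about the canard API), and it only prints the digest instead of sending it, since a SHA-256 digest is larger than a single CAN frame's 8-byte payload:
import hashlib

from canard.hw import socketcan

dev = socketcan.SocketCanDev('can0')
dev.start()

while True:
    f = dev.recv()
    # Assumption: f.data holds the frame payload as a list of ints (0-255).
    payload = bytes(bytearray(f.data))
    digest = hashlib.sha256(payload).hexdigest()
    print(digest)  # hash of this frame's payload
    dev.send(f)    # forward the original frame unchanged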

If you are interested in cryptographic hashing, you should take a look at hashlib.
With it you should be able to hash the message and send the hash like this:
import hashlib

H = hashlib.new('sha256', message)
dev.send(H.digest())
If you want to still send the original message besides the hash, you could make two calls to send.

Related

Consume web service having Byte64 array as parameter with python Zeep

I'm trying to consume a web service with python Zeep that has a parameter of type xsd:base64Binary; the technical document specifies the type as Byte[].
The errors are:
urllib3.exceptions.HeaderParsingError: [StartBoundaryNotFoundDefect(), MultipartInvariantViolationDefect()], unparsed data: ''
and in the reply I get the generic error "data at the root level is invalid".
I can't find the correct way to do it.
My code is:
import base64

content = open(fileName, "r").read()
encodedContent = base64.b64encode(content.encode('ascii'))
myParameter = dict(param=dict(XMLFile=encodedContent))
client.service.SendFile(**myParameter)
Thanks everyone for the comments.
Mike
This is how the built-in type of Base64Binary looks like in zeep:
class Base64Binary(BuiltinType):
    accepted_types = [str]
    _default_qname = xsd_ns("base64Binary")

    @check_no_collection
    def xmlvalue(self, value):
        return base64.b64encode(value)

    def pythonvalue(self, value):
        return base64.b64decode(value)
As you can see, it's doing the encoding and decoding by itself. You don't need to encode the file content, you have to send it as it is and zeep will encode it before putting it on the wire.
Most likely this is causing the issue. When the message element is decoded, an array of bytes is expected but another base64 string is found there.
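So the fix is to pass the raw file content and let zeep do the base64 encoding. A minimal sketch of the corrected call, assuming the same SendFile operation and XMLFile element from the question (the WSDL URL and file name are placeholders):
import zeep

client = zeep.Client('http://example.com/service?wsdl')  # placeholder WSDL location

with open('myfile.xml', 'rb') as fh:  # read raw content; do not base64-encode yourself
    content = fh.read()

# zeep's Base64Binary type encodes the value before putting it on the wire
client.service.SendFile(param=dict(XMLFile=content))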

Send already serialized message inside message

I'm using Protobuf with the C++ API and I have a standard message that I send between two different programs, and I want to add a raw nested message as data.
So I added a message like this:
message main {
  string id = 1;
  string data = 2;
}
I tried to serialize some of my nested messages to a string and send that as "data" in the "main" message, but it doesn't work well on the parser side.
How can I send a nested, already-serialized message inside a message using the C++ and Python APIs?
Basically, use bytes:
message main {
  string id = 1;
  bytes data = 2;
}
In addition to not corrupting the data (string is strictly UTF-8), as long as the payload is a standard message, this is also compatible with changing it later (at either end, or both) to the known type:
message main {
  string id = 1;
  TheOtherMessageType data = 2;
}
message TheOtherMessageType {...}
(or even using both versions at different times depending on which is most convenient)
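For the bytes approach, here is a minimal sketch with the Python protobuf API, assuming the .proto above has been compiled and that TheOtherMessageType is the nested payload; the generated module name my_messages_pb2 is illustrative:
import my_messages_pb2 as pb  # hypothetical module generated by protoc

# Sender: serialize the nested message and store the raw bytes in `data`.
inner = pb.TheOtherMessageType()
outer = pb.main(id="msg-1", data=inner.SerializeToString())
wire_bytes = outer.SerializeToString()

# Receiver: parse the envelope, then parse the nested payload out of `data`.
received = pb.main()
received.ParseFromString(wire_bytes)
payload = pb.TheOtherMessageType()
payload.ParseFromString(received.data)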

Differentiating between binary encoded Avro and JSON messages

I'm using python to read messages coming from various topics. Some topics have got their messages encoded in plain JSON, while others are using Avro binary serialization, with confluent schema registry.
When I receive a message I need to know whether it has to be decoded. At the moment I'm relying only on the fact that the binary-encoded messages start with a MAGIC_BYTE whose value is zero:
from confluent_kafka.cimpl import Consumer

consumer = Consumer(config)
consumer.subscribe(...)
msg = consumer.poll()
# check that the msg is not null or an error, etc.
if msg.value()[0] == 0:
    pass  # it is binary (Avro) encoded
else:
    pass  # it is JSON
I was wondering if there's a better way to do that?
You could take the first five bytes of your message, then
magic_byte = message_bytes[0]
schema_id = message_bytes[1:5]
Then perform a lookup against your registry with GET /schemas/ids/{schema_id}, and cache the id + schema (if needed) when you get a 200 response code.
Otherwise, the message is either JSON, or the producer sent its data to a different registry (if there is more than one in your environment). Note: this means the data could still be Avro.
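A minimal sketch of that check, assuming the standard Confluent wire format (magic byte 0x00 followed by a 4-byte big-endian schema id) and a placeholder registry URL:
import struct

import requests

SCHEMA_REGISTRY_URL = 'http://localhost:8081'  # placeholder registry location
_schema_cache = {}

def try_avro_schema_id(message_bytes):
    """Return the schema id if the message follows the Confluent wire format, else None."""
    if len(message_bytes) > 5 and message_bytes[0] == 0:
        return struct.unpack('>I', message_bytes[1:5])[0]
    return None

def fetch_schema(schema_id):
    """Look the id up in the registry and cache the schema on a 200 response."""
    if schema_id not in _schema_cache:
        resp = requests.get('%s/schemas/ids/%d' % (SCHEMA_REGISTRY_URL, schema_id))
        if resp.status_code != 200:
            return None  # unknown id: possibly JSON, or Avro from another registry
        _schema_cache[schema_id] = resp.json()['schema']
    return _schema_cache[schema_id]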
You can simply query the schema registry through REST first and build a local cache of the topics that are registered there. Then, when you're trying to decode a message from a particular topic, simply check whether the topic is in that list. If it's there, you know it has to be decoded.
Of course, this only works if all the Avro-encoded topics are using the Schema Registry. If you ever receive an Avro-encoded message that is not registered with the Schema Registry, this won't work.
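A minimal sketch of that topic cache, assuming the default subject naming of '<topic>-value' and a placeholder registry URL:
import requests

SCHEMA_REGISTRY_URL = 'http://localhost:8081'  # placeholder registry location

# Subjects registered under the default TopicNameStrategy look like '<topic>-value'.
subjects = requests.get(SCHEMA_REGISTRY_URL + '/subjects').json()
avro_topics = {s[:-len('-value')] for s in subjects if s.endswith('-value')}

# Later, for each consumed message:
# if msg.topic() in avro_topics:
#     ...decode as Avro...
# else:
#     ...treat as JSON...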

How can I parse length-prefixed messages from a TCP stream in twisted python?

I'm writing a TCP client/server using Python Twisted as a replacement for Java or C# implementations.
I have to parse length-prefixed string messages based on ANS (alphanumeric string) over a permanent connected session,
like this:
message format : [alpha numeric string:4byte][message data]
example-1 : 0004ABCD ==> ABCD
example-2 : 0002AB0005HELLO ==> AB, HELLO
It can't be solved by IntNProtocol or NetStringProtocol.
Also, if a client sends a 2 KB message from the application layer, the kernel splits the data by MSS (maximum segment size), so the packets arrive fragmented.
In a TCP send/receive environment this often happens:
example : 1000HELLO {not yet arrived 995 byte data}
So the receiver has to buffer the data and wait for the rest, using an array, queue, etc.
In Twisted, I don't know how to parse multiple large messages.
Can anybody give me some information or a URL?
from twisted.internet import protocol

class ClientProtocol(protocol.Protocol):
    def dataReceived(self, data):
        # How can I parse multiple large messages here?
        # Is there a way to read a specific number of bytes of data?
        pass
It looks like you can implement this protocol using StatefulProtocol as a base. Your protocol basically has two states. In the first state, you're waiting for 4 bytes which you will interpret as a zero-padded base 10 integer. In the second state, you're waiting for a number of bytes equal to the integer read in the first state.
from twisted.protocols.stateful import StatefulProtocol

class ANSProtocol(StatefulProtocol):
    def getInitialState(self):
        return (self._state_length, 4)

    def _state_length(self, length_bytes):
        length = int(length_bytes)
        return self._state_content, length

    def _state_content(self, content):
        self.application_logic(content)
        return self.getInitialState()

    def application_logic(self, content):
        # Application logic operating on `content`
        # ...
        pass
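A minimal sketch of wiring ANSProtocol to a client connection (the host and port are placeholders, not from the question):
from twisted.internet import reactor
from twisted.internet.endpoints import TCP4ClientEndpoint, connectProtocol

endpoint = TCP4ClientEndpoint(reactor, 'example.com', 12345)
connectProtocol(endpoint, ANSProtocol())  # dataReceived drives the state callbacks as bytes arrive
reactor.run()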

Python sending multiple strings using socket.send() / socket.recv()

I am trying to send multiple strings using the socket.send() and socket.recv() functions.
I am building a client/server, and "in a perfect world" would like my server to call socket.send() a maximum of 3 times (for a certain server command) while having my client call socket.recv() 3 times. This doesn't seem to work; the client gets hung up waiting for another response.
server:
clientsocket.send(dir)
clientsocket.send(dirS)
clientsocket.send(fileS)
client:
response1 = s.recv(1024)
if response1:
    print "\nReceived response: \n", response1
response2 = s.recv(1024)
if response2:
    print "\nReceived response: \n", response2
response3 = s.recv(1024)
if response3:
    print "\nReceived response: \n", response3
I was going through the tedious task of joining all my strings together then reparsing them in the client, and was wondering if there was a more efficient way of doing it?
edit:
My output of response1 gives me unusual results. The first time I print response1, it prints all 3 of the responses in 1 string (all mashed together). The second time I call it, it gives me the first string by itself. The following calls to recv are now glitched/bugged and display the 2nd string, then the third string. It then starts to display the other commands but as if it was behind and in a queue.
Very unusual, but I will likely stick to joining the strings together in the server then parsing them in the client
You wouldn't send bytes/strings over a socket like that in a real-world app.
You would create a messaging protocol on top of the socket, then you would put your bytes/strings in messages and send the messages over the socket.
You probably wouldn't create the messaging protocol from scratch either. You'd use a library like nanomsg or zeromq.
server
from nanomsg import Socket, PAIR
sock = Socket(PAIR)
sock.bind('inproc://bob')
sock.send(dir)
sock.send(dirS)
sock.send(fileS)
client
from nanomsg import Socket, PAIR
sock = Socket(PAIR)
sock.connect('inproc://bob')  # the client connects to the address the server bound (use e.g. 'tcp://127.0.0.1:5555' between separate processes)
response1 = sock.recv()
response2 = sock.recv()
response3 = sock.recv()
In nanomsg, recv() will return exactly what was sent by send() so there is a one-to-one mapping between send() and recv(). This is not the case when using lower-level Python sockets where you may need to call recv() multiple times to get everything that was sent with send().
TCP is a streaming protocol and there are no message boundaries. Whether a blob of data was sent with one or a hundred send calls is unknown to the receiver. You certainly can't assume that 3 sends can be matched with 3 recvs. So, you are left with the tedious job of reassembling fragments at the receiver.
One option is to layer a messaging protocol on top of the pure TCP stream. This is what zeromq does, and it may be an option for reducing the tedium.
The answer to this has been covered elsewhere.
There are two solutions to your problem.
Solution 1:
Mark the end of your strings: send(escape(dir) + MARKER). Your client then keeps calling recv() until it gets the end-of-message marker. If recv() returns multiple strings, you can use the marker to know where they start and end. You need to escape the marker if your strings can contain it; remember to handle the escaping on the client too. A sketch of the receive loop follows.
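A minimal sketch of the marker-based receive loop, assuming Python 3 sockets and an illustrative delimiter byte:
MARKER = b'\n'  # illustrative delimiter; pick one you can escape in your data

def recv_until_marker(sock, buffer=b''):
    """Keep calling recv() until the delimiter arrives; return (message, leftover)."""
    while MARKER not in buffer:
        chunk = sock.recv(1024)
        if not chunk:
            raise ConnectionError('socket closed before the marker arrived')
        buffer += chunk
    message, _, leftover = buffer.partition(MARKER)
    return message, leftover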
Solution 2:
Send the length of your string before you send the actual string. Your client then keeps calling recv() until it has read all the bytes. If recv() returns multiple strings, you know where they start and end since you know how long they are. When sending the length of your string, make sure you use a fixed number of bytes so you can distinguish the string length from the string itself in the byte stream. You will find the struct module useful; see the sketch below.
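A minimal sketch of the length-prefixed framing, assuming Python 3 sockets and a 4-byte big-endian length prefix (the helper names are illustrative):
import struct

def send_msg(sock, payload):
    """Prefix the payload with its length as a 4-byte big-endian integer."""
    sock.sendall(struct.pack('>I', len(payload)) + payload)

def recv_exact(sock, n):
    """Keep calling recv() until exactly n bytes have been read."""
    data = b''
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError('socket closed mid-message')
        data += chunk
    return data

def recv_msg(sock):
    """Read one length-prefixed message."""
    (length,) = struct.unpack('>I', recv_exact(sock, 4))
    return recv_exact(sock, length)

# Server side: send_msg(clientsocket, dir); send_msg(clientsocket, dirS); send_msg(clientsocket, fileS)
# Client side: response1 = recv_msg(s); response2 = recv_msg(s); response3 = recv_msg(s)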
