I am currently working on making a CAN tracer with Python. The connection as well as the received data from CAN work. My question now is: how can I change the PDO mapping and stop and start the transmission via CAN like it works with CANopen?
import can
import canopen

# CAN Setting
can_interface = '0'
can_filters = [{"can_id": 0x018A, "can_mask": 0xFFFF, "extended": True}]
bus = can.interface.Bus(can_interface, bustype='ixxat', can_filters=can_filters)

while True:
    message = bus.recv()
    print(message)
Put the node into pre-operational mode and send SDO requests to change the PDO mapping.
You can find this information in the device documentation or in the EDS file.
If you make modifications, do not forget to send the 'save' SDO request (object 0x1010) at the end; otherwise the node will restart with its default values.
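A minimal sketch of how this can look with the canopen package (the node ID 0x0A, the EDS filename, and the mapped object 0x6041 are assumptions for illustration; take the real values from your EDS):

import canopen

network = canopen.Network()
network.connect(bustype='ixxat', channel='0', bitrate=250000)  # match your bus settings

node = network.add_node(0x0A, 'device.eds')  # node ID and EDS file are assumptions

# Stop PDO transmission while reconfiguring
node.nmt.state = 'PRE-OPERATIONAL'

# Read the current PDO configuration, then remap TPDO1 via SDO
node.tpdo.read()
node.tpdo[1].clear()
node.tpdo[1].add_variable(0x6041, 0)  # example object; pick yours from the EDS
node.tpdo[1].trans_type = 255
node.tpdo[1].enabled = True
node.tpdo[1].save()

# Write 'save' to object 0x1010 so the mapping survives a restart
node.store()

# Start PDO transmission again
node.nmt.state = 'OPERATIONAL'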
I'm using https://github.com/FreeOpcUa/opcua-asyncio and wondering how to receive the ValueRank and ArrayDimensions (at runtime) if the server sets ValueRank to Any (-2) and does not set ArrayDimensions.
As I understood from a Rockwell document (https://literature.rockwellautomation.com/idc/groups/literature/documents/wp/opcua-wp001_-en-e.pdf) the data I receive in read_value() should be self-describing.
I wonder how I can receive that information in opcua-asyncio.
It's possible:
var: Variant = await n.read_data_type_as_variant_type()
if var.is_array:
    dimensions = var.Dimensions  # Optional[List[Int32]]
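A sketch of inspecting a received value at runtime instead, assuming `n` is an asyncua Node for the variable (the helper name is illustrative):

from asyncua import ua

async def describe_value(n):
    # Read the full DataValue; its Value field is a self-describing ua.Variant
    dv = await n.read_data_value()
    variant: ua.Variant = dv.Value
    if variant.is_array:
        # Dimensions is Optional[List[Int32]] and may be None even for arrays
        print("array of", variant.VariantType, "Dimensions:", variant.Dimensions)
    else:
        print("scalar of", variant.VariantType)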
I'm trying to understand whether a gRPC server using streams is able to wait for all client messages to be read before sending its responses.
I have a trivial application where I send in several numbers I'd like to add and return.
I've set up a basic proto file to test this:
syntax = "proto3";

message CalculateRequest {
  int64 x = 1;
  int64 y = 2;
}

message CalculateReply {
  int64 result = 1;
}

service Svc {
  rpc CalculateStream (stream CalculateRequest) returns (stream CalculateReply);
}
On the server side I have implemented the following code, which returns the answer message as each request is received:
import logging
from concurrent import futures

import grpc

import contracts_pb2
import contracts_pb2_grpc

class CalculatorServicer(contracts_pb2_grpc.SvcServicer):
    def CalculateStream(self, request_iterator, context):
        for request in request_iterator:
            resultToOutput = request.x + request.y
            yield contracts_pb2.CalculateReply(result=resultToOutput)

def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    contracts_pb2_grpc.add_SvcServicer_to_server(
        CalculatorServicer(), server)
    server.add_insecure_port('localhost:9000')
    server.start()
    server.wait_for_termination()

if __name__ == '__main__':
    print("We're up")
    logging.basicConfig()
    serve()
I'd like to tweak this to first read in all the numbers and then send these out at a later stage - something like the following:
class CalculatorServicer(contracts_pb2_grpc.SvcServicer):
    def CalculateStream(self, request_iterator, context):
        listToReturn = []
        for request in request_iterator:
            listToReturn.append(request.x + request.y)
        # ...
        # do some other stuff first before returning
        for item in listToReturn:
            yield contracts_pb2.CalculateReply(result=item)
Currently, my attempt to write out later doesn't work: the code at the bottom is never reached. Is it by design that the connection seems to "close" before reaching that point?
The grpc.io website suggests that this should be possible with BiDirectional streaming:
for example, the server could wait to receive all the client messages before writing its responses, or it could alternately read a message then write a message, or some other combination of reads and writes.
Thanks in advance for any help :)
The issue here is the definition of "all client messages." At the transport level, the server has no way of knowing whether the client has finished, independent of the client closing its connection.
You need to add some indication that the client has finished sending requests to the protocol. Either add a bool field to the existing CalculateRequest (sketched below), or add a top-level oneof with one of the options being something like a StopSendingRequests message.
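For example, assuming a hypothetical bool field done = 3; were added to CalculateRequest, the server could break out of its read loop when it sees it (a sketch, not the questioner's exact code):

class CalculatorServicer(contracts_pb2_grpc.SvcServicer):
    def CalculateStream(self, request_iterator, context):
        results = []
        for request in request_iterator:
            if request.done:  # hypothetical "no more inputs" marker
                break
            results.append(request.x + request.y)
        # ... do the deferred work here ...
        for value in results:
            yield contracts_pb2.CalculateReply(result=value)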
I have created a proxy between QGC (Ground Control Station) and a vehicle in Python. Here is the code:
from pymavlink import mavutil

gcs_conn = mavutil.mavlink_connection('tcpin:localhost:15795')
gcs_conn.wait_heartbeat()
print("Heartbeat from system (system %u component %u)" % (gcs_conn.target_system, gcs_conn.target_component))

vehicle = mavutil.mavlink_connection('tcp:localhost:5760')
vehicle.wait_heartbeat()  # receiving heartbeat from the vehicle
print("Heartbeat from system (system %u component %u)" % (vehicle.target_system, vehicle.target_component))

while True:
    gcs_msg = gcs_conn.recv_match()
    if gcs_msg is None:
        pass
    else:
        vehicle.mav.send(gcs_msg)
        print(gcs_msg)

    vcl_msg = vehicle.recv_match()
    if vcl_msg is None:
        pass
    else:
        gcs_conn.mav.send(vcl_msg)
        print(vcl_msg)
I need to receive the messages from the QGC and then forward them to the vehicle and also receive the messages from the vehicle and forward them to the QGC.
When I run the code I get this error.
Is there anyone who can help me?
If you print your message before sending you'll notice it always fails when you try to send a BAD_DATA message type.
So this should fix it (same for vcl_msg):
if gcs_msg and gcs_msg.get_type() != 'BAD_DATA':
    vehicle.mav.send(gcs_msg)
PS: I noticed that you don't specify the tcp connection as input or output; it defaults to input. That means both connections are inputs. I recommend setting up the GCS connection as output:
gcs_conn = mavutil.mavlink_connection('tcp:localhost:15795', input=False)
https://mavlink.io/en/mavgen_python/#connection_string
For forwarding MAVLink successfully, a few things need to happen. I'm assuming you need a usable connection to a GCS, like QGroundControl or MissionPlanner. I use QGC, and this design has had basic testing with it.
Note that this is written for Python 3. This exact snippet is not tested, but I have a (much more complex) version tested and working.
from pymavlink import mavutil
import time

# PyMAVLink has an issue that received messages which contain strings
# cannot be resent, because they become Python strings (not bytestrings).
# This converts those messages so your code doesn't crash when
# you try to send the message again.
def fixMAVLinkMessageForForward(msg):
    msg_type = msg.get_type()
    if msg_type in ('PARAM_VALUE', 'PARAM_REQUEST_READ', 'PARAM_SET'):
        if type(msg.param_id) == str:
            msg.param_id = msg.param_id.encode()
    elif msg_type == 'STATUSTEXT':
        if type(msg.text) == str:
            msg.text = msg.text.encode()
    return msg

# Modified from the snippet in your question
# UDP will work just as well or better
gcs_conn = mavutil.mavlink_connection('tcp:localhost:15795', input=False)
gcs_conn.wait_heartbeat()
print(f'Heartbeat from system (system {gcs_conn.target_system} component {gcs_conn.target_component})')

vehicle = mavutil.mavlink_connection('tcp:localhost:5760')
vehicle.wait_heartbeat()
print(f'Heartbeat from system (system {vehicle.target_system} component {vehicle.target_component})')

while True:
    # Don't block for a GCS message - we have messages
    # from the vehicle to get to as well
    gcs_msg = gcs_conn.recv_match(blocking=False)
    if gcs_msg is None:
        pass
    elif gcs_msg.get_type() != 'BAD_DATA':
        # We now have a message we want to forward. Now we need to
        # make it safe to send
        gcs_msg = fixMAVLinkMessageForForward(gcs_msg)
        # Finally, in order to forward this, we actually need to
        # hack PyMAVLink so the message has the right source
        # information attached.
        vehicle.mav.srcSystem = gcs_msg.get_srcSystem()
        vehicle.mav.srcComponent = gcs_msg.get_srcComponent()
        # Only now is it safe to send the message
        vehicle.mav.send(gcs_msg)
        print(gcs_msg)

    vcl_msg = vehicle.recv_match(blocking=False)
    if vcl_msg is None:
        pass
    elif vcl_msg.get_type() != 'BAD_DATA':
        # We now have a message we want to forward. Now we need to
        # make it safe to send
        vcl_msg = fixMAVLinkMessageForForward(vcl_msg)
        # Finally, in order to forward this, we actually need to
        # hack PyMAVLink so the message has the right source
        # information attached.
        gcs_conn.mav.srcSystem = vcl_msg.get_srcSystem()
        gcs_conn.mav.srcComponent = vcl_msg.get_srcComponent()
        gcs_conn.mav.send(vcl_msg)
        print(vcl_msg)

    # Don't abuse the CPU by running the loop at maximum speed
    time.sleep(0.001)
Notes
Make sure your loop isn't being blocked
The loop must quickly check if a message is available from one connection or the other, instead of waiting for a message to be available from a single connection. Otherwise a message on the other connection will not go through until the blocking connection has a message.
Check message validity
Check that you actually got a valid message, as opposed to a BAD_DATA message. Attempting to send BAD_DATA will crash.
Make sure the recipient gets the correct information about the sender
By default PyMAVLink, when sending a message, will encode YOUR system and component IDs (usually left at zero), instead of the IDs from the message being forwarded. A GCS receiving this may be confused (i.e., QGC) and not properly connect to the vehicle (despite showing the messages in the MAVLink inspector).
This is fixed by hacking PyMAVLink so that your system and component IDs match the forwarded message. This can be reverted after the message is sent if necessary. See the example to see how I did it.
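A minimal sketch of that save-and-restore pattern (the forwarding example above only sets the IDs; restoring them afterwards is optional):

# Remember our own IDs, impersonate the original sender, then restore
saved_sys, saved_comp = vehicle.mav.srcSystem, vehicle.mav.srcComponent
vehicle.mav.srcSystem = gcs_msg.get_srcSystem()
vehicle.mav.srcComponent = gcs_msg.get_srcComponent()
vehicle.mav.send(gcs_msg)
vehicle.mav.srcSystem, vehicle.mav.srcComponent = saved_sys, saved_comp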
Loop update rate
It's important that the update rate is fast enough to handle high-traffic conditions (especially, say, for downloading params), but it shouldn't peg the CPU either. I find that a 1000 Hz update rate works well enough.
I use Python and paho-mqtt to send messages to the cloud.
I set up an endpoint and a route. When I set the query string to true, everything works fine:
messageDict = {}
systemPropertiesDict = {"contentType": "application/json", "contentEncoding": "utf-8", "iothub-message-source": "deviceMessages", "iothub-enqueuedtime": "2017-05-08T18:55:31.8514657Z"}
messageDict = {"systemProperties": systemPropertiesDict}
messageDict["appProperties"] = {}
body = '{id:1}'
messageDict["body"] = body
root = {"message":messageDict}
msg = json.dumps(root, indent=2).encode('utf-8')
print("Message to send", msg)
self.client.publish(topicName, msg)
But if I set the query string to $body.id = 1, then I don't receive any messages.
Any ideas, guys?
The route is not working because the content encoding type is not set. All the "systemProperties" in your code are actually part of the message body, not system properties, so setting the content encoding type this way doesn't take effect.
Add "$.ct=application%2Fjson&$.ce=utf-8" to the topic. Then it will look like this:
devices/{yourDeviceId}/messages/events/$.ct=application%2Fjson&$.ce=utf-8
But to make the route query work on your message, you need to use this query string: $body.message.body.id = 1
Two edits to make:
First, change body = '{id:1}' to body = {"id": 1} so that the body is serialized as a JSON object (with id as a queryable property) rather than as a string.
Second, change topicName value to this one:
devices/{yourDeviceId}/messages/events/$.ct=application%2Fjson&$.ce=utf-8
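A sketch of the question's publish code with both edits applied (the device ID and the already-connected paho client are placeholders; the systemProperties/appProperties parts are omitted for brevity):

import json

deviceId = "myDevice"  # placeholder
# Edit 2: declare content type and encoding in the topic so IoT Hub can
# evaluate body-based route queries
topicName = "devices/" + deviceId + "/messages/events/$.ct=application%2Fjson&$.ce=utf-8"

# Edit 1: send the body as a JSON object, not a pre-serialized string
body = {"id": 1}
root = {"message": {"body": body}}
msg = json.dumps(root, indent=2).encode('utf-8')

client.publish(topicName, msg)  # `client` is the connected paho-mqtt client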
If possible, it is suggested to use the Azure IoT SDK for Python to communicate with Azure IoT Hub.
If you are using a third-party library (like paho-mqtt) you have to specify the content type and the encoding of the message.
IoT Hub routes messages with content type "application/json" and content encoding "utf-8", "utf-16" or "utf-32".
Using MQTT protocol you can set this info with $.ct and $.ce.
topic example:
devices/{MY_DEVICE_ID}/messages/events/%24.ct=application%2fjson&%24.ce=utf-8
URL encoding of
devices/{MY_DEVICE_ID}/messages/events/$.ct=application/json&$.ce=utf-8
Here you can find more info:
https://azure.microsoft.com/it-it/blog/iot-hub-message-routing-now-with-routing-on-message-body/
I recently have acquired an ACS Linear Actuator (Tolomatic Stepper) that I am attempting to send data to from a Python application. The device itself communicates using Ethernet/IP protocol.
I have installed the library cpppo via pip. When I issue a command in an attempt to read the status of the device, I get None back. Examining the communication with Wireshark, it appears to be proceeding correctly; however, I notice a response from the device indicating:
Service not supported.
Example of the code I am using to test reading an "Input Assembly":
from cpppo.server.enip import client

HOST = "192.168.1.100"
TAGS = ["#4/100/3"]

with client.connector(host=HOST) as conn:
    for index, descr, op, reply, status, value in conn.synchronous(
            operations=client.parse_operations(TAGS)):
        print(": %20s: %s" % (descr, value))
I am expecting to get an "Input Assembly" read, but it does not appear to be working that way. I imagine that I am missing something, as this is the first time I have attempted EtherNet/IP communication.
I am not sure how to proceed or what I am missing about EtherNet/IP that may make this work correctly.
clutton -- I'm the author of the cpppo module.
Sorry for the delayed response. We only recently implemented the ability to communicate with simple (non-routing) CIP devices. The ControlLogix/CompactLogix controllers implement an expanded set of EtherNet/IP CIP capability, something that most simple CIP devices do not. Furthermore, they typically also do not implement the *Logix "Read Tag" request; you have to struggle by with the basic "Get Attribute Single/All" requests -- which just return raw, 8-bit data. It is up to you to turn that back into a CIP REAL, INT, DINT, etc.
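For example, if a Get Attribute Single returns four raw 8-bit values that you know encode a CIP REAL, converting them back is a plain little-endian unpack (a sketch; the actual attribute layout is device specific, so check your actuator's manual):

import struct

raw = [0, 0, 128, 63]                     # example raw bytes as returned in the reply
value, = struct.unpack('<f', bytes(raw))  # CIP REAL = 32-bit little-endian float
print(value)                              # 1.0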
In order to communicate with your linear actuator, you will need to disable these enhanced encapsulations, and use "Get Attribute Single" requests. This is done by specifying an empty route_path=[] and send_path='', when you parse your operations, and to use cpppo.server.enip.getattr's attribute_operations (instead of cpppo.server.enip.client's parse_operations):
from cpppo.server.enip import client
from cpppo.server.enip.getattr import attribute_operations

HOST = "192.168.1.100"
TAGS = ["#4/100/3"]

with client.connector(host=HOST) as conn:
    for index, descr, op, reply, status, value in conn.synchronous(
            operations=attribute_operations(
                TAGS, route_path=[], send_path='')):
        print(": %20s: %s" % (descr, value))
That should do the trick!
We are in the process of rolling out a major update to the cpppo module, so clone the https://github.com/pjkundert/cpppo.git Git repo, and checkout the feature-list-identity branch, to get early access to much better APIs for accessing raw data from these simple devices, for testing. You'll be able to use cpppo to convert the raw data into CIP REALs, instead of having to do it yourself...
...
With Cpppo >= 3.9.0, you can now use the much more powerful cpppo.server.enip.get_attribute 'proxy' and 'proxy_simple' interfaces to routing CIP devices (e.g. ControlLogix, CompactLogix), and non-routing "simple" CIP devices (e.g. MicroLogix, PowerFlex, etc.):
$ python
>>> from cpppo.server.enip.get_attribute import proxy_simple
>>> product_name, = proxy_simple( '10.0.1.2' ).read( [('#1/1/7','SSTRING')] )
>>> product_name
[u'1756-L61/C LOGIX5561']
If you want regular updates, use cpppo.server.enip.poll:
import logging
import sys
import time
import threading

from cpppo.server.enip import poll
from cpppo.server.enip.get_attribute import proxy_simple as device

params = [('#1/1/1', 'INT'), ('#1/1/7', 'SSTRING')]
# If you have an A-B PowerFlex, try:
# from cpppo.server.enip.ab import powerflex_750_series as device
# params = [ "Motor Velocity", "Output Current" ]
hostname = '10.0.1.2'
values = {}  # { <parameter>: <value>, ... }

poller = threading.Thread(
    target=poll.poll, args=(device,), kwargs={
        'address': (hostname, 44818),
        'cycle': 1.0,
        'timeout': 0.5,
        'process': lambda par, val: values.update({par: val}),
        'params': params,
    })
poller.daemon = True
poller.start()

# Monitor the values dict (updated in another Thread)
while True:
    while values:
        logging.warning("%16s == %r", *values.popitem())
    time.sleep(.1)
And, Voila! You now have regularly updating parameter names and values in your 'values' dict. See the examples in cpppo/server/enip/poll_example*.py for further details, such as how to report failures, control exponential back-off of connection retries, etc.
Version 3.9.5 has recently been released, which has support for writing to CIP Tags and Attributes, using the cpppo.server.enip.get_attribute proxy and proxy_simple APIs. See cpppo/server/enip/poll_example_many_with_write.py
Hope this is obvious, but accessing HOST = "192.168.1.100" will only be possible from a system located on the subnet 192.168.1.*.