I am trying to read register information from a Modbus device using MinimalModbus. However, every time I attempt to read register 40003, which has a value of 220, I receive this error:
raise ValueError('The slave is indicating an error. The response is: {!r}'.format(response))
ValueError: The slave is indicating an error. The response is: '\x01\x83\x02Àñ'
I know there is a value in 40003 and I am following the communication documents for the device. Here is my code:
import minimalmodbus
import serial
gas = minimalmodbus.Instrument('COM5', 1)
gas.serial.baudrate = 9600
gas.serial.bytesize = 8
gas.serial.parity = serial.PARITY_NONE
gas.serial.stopbits = 1
gas.serial.timeout = 0.05
gas.mode = minimalmodbus.MODE_RTU
temp = gas.read_register(40003, 1)
print (float(temp))
I have this problem for every register and I cannot find information regarding Àñ.
The problem was the register number 40003. The Modbus protocol doesn't use the full 4xxxx register number, only the register address (offset), so I changed it to temp = gas.read_register(3, 1). For what it's worth, the response \x01\x83\x02 is a Modbus exception frame: function code 0x83 (0x03 with the error bit set) and exception code 02, "illegal data address"; the trailing Àñ is just the two CRC bytes.
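For reference, a minimal sketch of the working read (same serial settings as above; whether the documented register 40003 maps to address 3 or to 40003 - 40001 = 2 depends on the vendor's numbering, so treat the exact offset as device-specific):

import minimalmodbus
import serial

gas = minimalmodbus.Instrument('COM5', 1)   # port name, slave address
gas.serial.baudrate = 9600
gas.serial.bytesize = 8
gas.serial.parity = serial.PARITY_NONE
gas.serial.stopbits = 1
gas.serial.timeout = 0.05
gas.mode = minimalmodbus.MODE_RTU

# read_register() expects the register address (offset), not the full 4xxxx
# register number; the second argument is the number of decimals.
temp = gas.read_register(3, 1)
print(temp)

Note that number_of_decimals=1 makes MinimalModbus divide the raw register value by 10, so a raw 220 comes back as 22.0.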
I am trying to read memory from a process (a Game Boy Advance emulator) in Python using ReadProcessMemory. There is a memory viewer in the emulator and I am supposed to get 81 at 0xD273 (see picture). I am new to this; I tried to do everything correctly by passing the arguments to ReadProcessMemory by reference, but there might be some things that are wrong. I am pretty sure I have the right process id since it matches the one in the task manager.
When I run my code, I get random byte values that are different every time: 15, 255, 11, 195, but I think I should be getting 81.
I need to use 32-bit Python to run the script, otherwise I get error 299 (ERROR_PARTIAL_COPY).
Is there something that I'm doing wrong? I don't specify the base address, but I assumed it's handled by the processHandle.
Here is my code and an example of the output:
result: 1, err code: 0, bytesRead: 1
data: 0000000000000015h
21
import ctypes as c
from ctypes import wintypes as w
import psutil
# Must use py -3-32 vba_script.py
vba_process_id = [p.pid for p in psutil.process_iter() if "visualboyadvance" in p.name()][0]
pid = vba_process_id # I assume you have this from somewhere.
k32 = c.WinDLL('kernel32', use_last_error=True)
OpenProcess = k32.OpenProcess
ReadProcessMemory = k32.ReadProcessMemory
CloseHandle = k32.CloseHandle
processHandle = OpenProcess(0x10, False, pid)
addr = c.c_void_p(0xD273)
dataLen = 8
data = c.c_byte()
bytesRead = c.c_byte()
result = ReadProcessMemory(processHandle, c.byref(addr), c.byref(data), c.sizeof(data), c.byref(bytesRead))
e = c.get_last_error()
print('result: {}, err code: {}, bytesRead: {}'.format(result,e,bytesRead.value))
print('data: {:016X}h'.format(data.value))
print(data.value)
CloseHandle(processHandle)
After reading @jasonharper's answer, I found a way to get the actual address in memory.
To find it, I used Cheat Engine. Here was my procedure:
In Cheat Engine, search for 87949181 (the hex for BRUH, which is the trainer's name). Any data that you know will not change is also fine.
Find the address that really corresponds to it. It should be followed by 0x00000050, 0x01000000, 0x0000FF99, 0x99000000, 0x00001600, 0x212D0316, DC00002D (see picture 1). This address is a pointer; in my case, it's 0x07D2F370 (picture 2).
Double-click on the address and do a pointer scan. In my case, the pointer address is 0x07D2F218. This is a dynamic address that will change every time, so you need to find the static address.
You can find that "visualboyadvance-m.exe"+02224064 -> 07D2F218, so the base address is the dynamic address minus the static offset 0x02224064 (it came out as 0x9a0000 in the run used below; the base changes every time the game is restarted, see the bonus section). The static address offset for the data I'm searching for is 0x02224064. The offset for the trainer's name data is 0x158.
After opening the process, to search the memory, you can have code like this:
import ctypes as c

base_addr = 0x9a0000             # module base address of "visualboyadvance-m.exe"
static_addr_offset = 0x02224064
address = base_addr + static_addr_offset + 0x158

k32 = c.WinDLL('kernel32', use_last_error=True)
buffer_size = 32                 # must be defined before the buffer is created
buffer = c.create_string_buffer(buffer_size)
bytes_read = c.c_ulong(0)

if k32.ReadProcessMemory(processHandle, c.c_void_p(address), buffer, buffer_size, c.byref(bytes_read)):
    data = c.c_uint32.from_buffer(buffer)
    print(f"data: {data.value:X}")
This returns the right data that I'm looking for: data: 87949181.
Here are the pictures (Cheat Engine screenshots; not reproduced here).
Bonus:
The base address will change if you close the game, and you will need to find it again every time. There is a way of doing this automatically by getting the module whose name matches the process name pname, which you can get easily with psutil.
import ctypes as c
import psutil
import win32process

vba_process = [p for p in psutil.process_iter() if "visualboyadvance" in p.name()][0]
pid = vba_process.pid
pname = vba_process.name()

k32 = c.WinDLL('kernel32', use_last_error=True)
processHandle = k32.OpenProcess(0x10, False, pid)  # 0x10 = PROCESS_VM_READ

modules = win32process.EnumProcessModules(processHandle)
for module in modules:
    moduleFileName = win32process.GetModuleFileNameEx(processHandle, module)
    if pname in moduleFileName:
        base_address = module
        print("Success: Got Base Address:", hex(base_address))
Success: Got Base Address: 0x9a0000
Edit: Found how to get the base address of the process automatically.
I am working on a Python script to load data from Pub/Sub into BigQuery using the Storage Write API's streaming method with the default stream. I am trying to adapt https://github.com/googleapis/python-bigquery-storage/blob/main/samples/snippets/append_rows_proto2.py to my needs, but I am running into an error.
As per the Google documentation, I have converted my data to protobuf format for the Python client.
However, I am getting this error continuously while trying to run my program.
(venv) {{MY_COMPUTER}} {{FOLDER_NAME}} % python3 default_Stream.py
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): metadata.google.internal.:80
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): metadata.google.internal.:80
DEBUG:google.cloud.logging_v2.handlers.transports.background_thread:Background thread started.
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): metadata.google.internal.:80
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): metadata.google.internal.:80
Traceback (most recent call last):
  File "default_Stream.py", line 116, in <module>
    append_rows_default("{{GCLOUD_PROJECT_NAME}}", "{{BIGQUERY_DATASET_NAME}}", "{{BIGQUERY_TABLE}}")
  File "default_Stream.py", line 95, in append_rows_default
    response_future_1 = append_rows_stream.send(request)
  File "{{VIRTUAL_ENVIRONMENT_PATH}}/venv/lib/python3.7/site-packages/google/cloud/bigquery_storage_v1/writer.py", line 234, in send
    return self._open(request)
  File "{{VIRTUAL_ENVIRONMENT_PATH}}/venv/lib/python3.7/site-packages/google/cloud/bigquery_storage_v1/writer.py", line 207, in _open
    raise request_exception
google.api_core.exceptions.Unknown: None There was a problem opening the stream. Try turning on DEBUG level logs to see the error.
Waiting up to 5 seconds.
Sent all pending logs.
Here is my script:
# [START bigquerystorage_append_rows_default]
"""
This code sample demonstrates how to write records
using the low-level generated client for Python.
"""
from google.cloud import bigquery_storage_v1
from google.cloud.bigquery_storage_v1 import types
from google.cloud.bigquery_storage_v1 import writer
from google.protobuf import descriptor_pb2
import logging
import google.cloud.logging
#from google.cloud import logging
# If you update the debezium_record.proto protocol buffer definition, run:
#
#   protoc --python_out=. debezium_record.proto
#
# from the samples/snippets directory to generate the debezium_record_pb2.py module.
import debezium_record_pb2
def create_row_data(id: int, name: str, role: int, joining_date: int, last_updated: int, is_deleted: bool):
    row = debezium_record_pb2.DebeziumRecord()
    row.column1 = id
    row.column2 = name
    row.column3 = role
    row.column4 = joining_date
    row.column5 = last_updated
    row.column6 = is_deleted
    return row.SerializeToString()
def append_rows_default(project_id: str, dataset_id: str, table_id: str):
    """Create a write stream, write some sample data, and commit the stream."""
    client = google.cloud.logging.Client()
    logging.basicConfig(level=logging.DEBUG)
    client.setup_logging()
    #logging.getLogger().setLevel(logging.INFO)

    write_client = bigquery_storage_v1.BigQueryWriteClient()
    parent = write_client.table_path(project_id, dataset_id, table_id)
    stream_name = f'{parent}/_default'
    write_stream = types.WriteStream()
    #write_stream.type_ = types.WriteStream.Type.PENDING
    # write_stream = write_client.create_write_stream(
    #     parent=parent, write_stream=write_stream
    # )
    #stream_name = write_stream.name

    # Create a template with fields needed for the first request.
    request_template = types.AppendRowsRequest()

    # The initial request must contain the stream name.
    request_template.write_stream = stream_name

    # So that BigQuery knows how to parse the serialized_rows, generate a
    # protocol buffer representation of your message descriptor.
    proto_schema = types.ProtoSchema()
    proto_descriptor = descriptor_pb2.DescriptorProto()
    debezium_record_pb2.DebeziumRecord.DESCRIPTOR.CopyToProto(proto_descriptor)
    proto_schema.proto_descriptor = proto_descriptor
    proto_data = types.AppendRowsRequest.ProtoData()
    proto_data.writer_schema = proto_schema
    request_template.proto_rows = proto_data

    # Some stream types support an unbounded number of requests. Construct an
    # AppendRowsStream to send an arbitrary number of requests to a stream.
    append_rows_stream = writer.AppendRowsStream(write_client, request_template)

    # Create a batch of row data by appending proto2 serialized bytes to the
    # serialized_rows repeated field.
    proto_rows = types.ProtoRows()
    proto_rows.serialized_rows.append(create_row_data(8, "E", 13, 1643673600000, 1654556118813, False))
    #proto_rows.serialized_rows.append(create_row_data(2, "Bob"))

    # Set an offset to allow resuming this stream if the connection breaks.
    # Keep track of which requests the server has acknowledged and resume the
    # stream at the first non-acknowledged message. If the server has already
    # processed a message with that offset, it will return an ALREADY_EXISTS
    # error, which can be safely ignored.
    #
    # The first request must always have an offset of 0.
    request = types.AppendRowsRequest()
    request.offset = 0
    proto_data = types.AppendRowsRequest.ProtoData()
    proto_data.rows = proto_rows
    request.proto_rows = proto_data

    logging.basicConfig(level=logging.DEBUG)
    response_future_1 = append_rows_stream.send(request)
    logging.basicConfig(level=logging.DEBUG)

    print(response_future_1.result())
    #print(response_future_2.result())

    # Shutdown background threads and close the streaming connection.
    append_rows_stream.close()

    # No new records can be written to the stream after this method has been called.
    write_client.finalize_write_stream(name=write_stream.name)

    # Commit the stream you created earlier.
    batch_commit_write_streams_request = types.BatchCommitWriteStreamsRequest()
    batch_commit_write_streams_request.parent = parent
    batch_commit_write_streams_request.write_streams = [write_stream.name]
    write_client.batch_commit_write_streams(batch_commit_write_streams_request)

    print(f"Writes to stream: '{write_stream.name}' have been committed.")
if __name__ == "__main__":
    append_rows_default("{{GCLOUD_PROJECT_NAME}}", "{{BIGQUERY_DATASET_NAME}}", "{{BIGQUERY_TABLE}}")
# [END bigquerystorage_append_rows_default]
This is my proto file (debezium_record.proto), from which debezium_record_pb2.py is generated:
syntax = "proto3";
// cannot contain fields which are not present in the table.
message DebeziumRecord {
uint32 column1 = 1;
string column2 = 2;
uint32 column3 = 3;
uint64 column4 = 4;
uint64 column5 = 5;
bool column6 = 6;
}
This is the definition of my BigQuery table
CREATE TABLE `{{GCLOUD_PROJECT_NAME}}.{{BIGQUERY_DATASET_NAME}}.{{BIGQUERY_TABLE}}`
(
  column1 INT64 NOT NULL,
  column2 STRING,
  column3 INT64,
  column4 INT64 NOT NULL,
  column5 INT64,
  column6 BOOL
);
I have been stuck on this error and cannot proceed further. Any pointers would be really appreciated.
Thanks
From another team member of the posting user:
We had to fix our logging output in order to see what the error actually was.
We changed this portion of default_Stream.py
if __name__ == "__main__":
    append_rows_default("{{GCLOUD_PROJECT_NAME}}", "{{BIGQUERY_DATASET_NAME}}", "{{BIGQUERY_TABLE}}")
# [END bigquerystorage_append_rows_default]
to
if __name__ == "__main__":
    logging.basicConfig(
        level=logging.DEBUG,
        format="%(asctime)s [%(levelname)s] %(message)s",
        handlers=[
            #logging.FileHandler("debug.log"),
            logging.StreamHandler()
        ]
    )
    append_rows_default("{{GCLOUD_PROJECT_NAME}}", "{{BIGQUERY_DATASET_NAME}}", "{{BIGQUERY_TABLE}}")
# [END bigquerystorage_append_rows_default]
Then we ran python3 default_Stream.py --log=DEBUG
Once we were actually getting the error message logged to the standard output, we saw that the error was
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "The proto field mismatched with BigQuery field at DebeziumRecord.column4, the proto field type uint64, BigQuery field type INTEGER Entity: projects/{{GCLOUD_PROJECT_NAME}}/datasets/{{BIGQUERY_DATASET_NAME}}/tables/{{BIGQUERY_TABLE}}/_default"
debug_error_string = "{"created":"@1656037879.048726680","description":"Error received from peer ipv4:142.251.6.95:443","file":"src/core/lib/surface/call.cc","file_line":966,"grpc_message":"The proto field mismatched with BigQuery field at DebeziumRecord.column4, the proto field type uint64, BigQuery field type INTEGER Entity: projects/{{GCLOUD_PROJECT_NAME}}/datasets/{{BIGQUERY_DATASET_NAME}}/tables/{{BIGQUERY_TABLE}}/_default","grpc_status":3}"
>
To fix that error we corrected the data types of column4 and column5 to be int64 instead of uint64, per https://cloud.google.com/bigquery/docs/write-api#data_type_conversions
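For reference, the corrected proto definition would look roughly like this (only column4 and column5 change; the error above only flagged column4, and column5 has the same uint64 type):

syntax = "proto3";

// cannot contain fields which are not present in the table.
message DebeziumRecord {
  uint32 column1 = 1;
  string column2 = 2;
  uint32 column3 = 3;
  int64 column4 = 4;   // was uint64; the INT64 column does not accept uint64
  int64 column5 = 5;   // was uint64
  bool column6 = 6;
}

After editing debezium_record.proto, regenerate the module with protoc --python_out=. debezium_record.proto so that debezium_record_pb2.py picks up the new types.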
There are still additional errors/issues with default_Stream.py that we are working through, but this was the answer to this question
I am trying to clone a volume using:
import libvirt

libvirt_conn = libvirt.openAuth(<qemu_addr>, <auth>, 0)
pool = libvirt_conn.storagePoolLookupByName(<pool_name>)
original_volume = pool.storageVolLookupByName(<original_volume_name>)
new_volume_xml = <xml_string>
new_volume = pool.createXMLFrom(new_volume_xml, original_volume)
When I run this I get the following error:
End of file while reading data: Input/output error
When I try:
libvirt_conn = libvirt.openAuth(<qemu_addr>, <auth>, 0)
pool = libvirt_conn.storagePoolLookupByName(<pool_name>)
original_volume = pool.storageVolLookupByName(<original_volume_name>)
new_volume_xml = <xml_string>
try:
    new_volume = pool.createXMLFrom(new_volume_xml, original_volume)
except:
    <next libvirt command>
I get a client socket is closed error. I have tried editing /etc/libvirt/libvirtd.conf with:
min_workers = 5
max_workers = 20
log_level = 1
log_filters="1:libvirt 1:util 1:qemu"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
keepalive_interval = -1
When I restart libvirtd and tail /var/log/libvirt/libvirtd.log I don't see anything useful. My feeling is that the socket is closing because cloning the volume takes a long time, but I am not sure how to keep the libvirt/qemu socket open longer. Is this possible?
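One avenue that might be worth exploring (a sketch under the assumption that the client-side keepalive is what closes the socket during the long clone, not a confirmed fix): libvirt-python exposes the client keepalive through virConnect.setKeepAlive(), which needs a running libvirt event loop to service the keepalive messages.

import threading
import libvirt

# The default event loop implementation must be registered before opening the
# connection, and it has to be driven from some thread so the client can keep
# exchanging keepalive messages while createXMLFrom() blocks.
libvirt.virEventRegisterDefaultImpl()

def _run_event_loop():
    while True:
        libvirt.virEventRunDefaultImpl()

threading.Thread(target=_run_event_loop, daemon=True).start()

libvirt_conn = libvirt.openAuth(<qemu_addr>, <auth>, 0)

# Send a ping every 10 seconds and tolerate 30 unanswered pings before the
# client gives up; these particular numbers are only an example.
libvirt_conn.setKeepAlive(10, 30)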
Hello, I am using Python with the CAN analyzer hardware VN1610.
import time
import can

count = 0
a = 0
for i in range(1, 1000):  # zero to max range (0 - 2048)
    a = a + 1
    print(a)  # code stops running at a = 64
    bus = can.interface.Bus(bustype='vector', app_name=None, channel=0, bitrate=500000)
    msg = can.Message(arbitration_id=i, data=[0x02, 0x11, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00], dlc=3, extended_id=False)
    bus.send(msg)
    print("Request msg:", msg)
    response = bus.recv(0.02)
    print("Response msg:", response)
I am getting can.interfaces.vector.exceptions.VectorError: xlGetChannelIndex failed (XL_ERR_HW_NOT_PRESENT) as an error. What is causing this error?
It is stopping because you are creating a new interface every time.
Probably CANalyzer supports a maximum of 64 interfaces [citation needed], and that is why it stops after a = 64.
You don't have to create the interface every time.
Move
bus = can.interface.Bus(bustype='vector', app_name=None, channel=0,bitrate=500000)
out of the for loop and your code should work, as you don't have to create the interface again and again.
Create the bus once in your code; you can also create a single bus for several channels, as in:
can.interface.Bus(interface='vector', channel='0,1,2,3', receive_own_messages=True, bitrate=500000)
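Putting that together, a minimal sketch of the corrected loop from the question (same parameters; the single-channel bus is created once, outside the loop):

import can

# Create the interface once; re-creating it on every iteration exhausts the
# available Vector channel/interface handles.
bus = can.interface.Bus(bustype='vector', app_name=None, channel=0, bitrate=500000)

for i in range(1, 1000):
    msg = can.Message(arbitration_id=i,
                      data=[0x02, 0x11, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00],
                      dlc=3, extended_id=False)
    bus.send(msg)
    print("Request msg:", msg)
    response = bus.recv(0.02)   # wait up to 20 ms for a reply
    print("Response msg:", response)

bus.shutdown()  # release the channel when done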
I need help using the serial port. I have a 3DR radio telemetry module connected to my Raspberry Pi and the other side to my Windows PC. I have a small Python program which continuously writes data to the serial port and reads it back; reading might not be an issue, or it might be later, anyway...
The issue is I am afraid that too many writes might cause a buffer overflow. Every time I search, the suggested solution is to enable RTS/CTS flow control, but I don't know how to use it. What will happen if I set these options, what will pyserial do, and how can I control my writes? It's really confusing.
As for hardware flow control, I am not sure it would work, because I have only connected RX, TX, ground, and power to my Raspberry Pi; even if I try to connect the other flow control pins to the Pi, I am not sure that is supported by the 3DR radio telemetry. I believe software flow control will be a good and simple solution for now.
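For illustration, here is a minimal sketch of how the flow control options are enabled in pyserial (the port name, baud rate, and timeout are placeholders, and which mode actually helps depends on what the 3DR radio supports):

import serial

ser = serial.Serial(
    port='/dev/ttyUSB0',   # placeholder; use your actual port
    baudrate=57600,        # placeholder baud rate
    rtscts=False,          # set True for hardware (RTS/CTS) flow control
    xonxoff=True,          # set True for software (XON/XOFF) flow control
    write_timeout=1.0,     # write() raises SerialTimeoutException instead of blocking forever
)

# With flow control enabled, the driver pauses transmission when the other
# side signals it cannot accept more data; write() then blocks (or times out)
# instead of silently overflowing a buffer.
try:
    ser.write(b'some data')
except serial.SerialTimeoutException:
    pass  # the link is congested; retry later or drop the message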
Here is my code:
for channel in list(self.__channelDict.values()):
    # Addition for channel priority later
    # We check if the channels in the list is active
    if channel.getChannelActive() is True:
        # Check if we have reached the max count
        if (messageCount >= (self.__NoOfMessagesInUARTStream - 1)) or UARTForceSend:
            self.sendUARTStream(UARTCacheBuffer, messageCount, UARTStreamCRC)
            # Reset
            messageCount = 0
            UARTStreamCRC = 0
            UARTCacheBuffer.emptyBuffer()

        message = channel.RetriveMessage(queueType = 1, raw = True)

        # there is no TX message in this channel
        if message is None:
            continue  # continue with next channel
        else:
            UARTStreamCRC = binascii.crc32(message, UARTStreamCRC)
            UARTCacheBuffer.append(message, raw = True)
            messageCount += 1
And the function that writes to the serial port:
def sendUARTStream(self, UARTCacheBuffer, messageCount, UARTStreamCRC):
    # retrieve all the data from the buffer and create a stream packet
    UARTFrame = None  # Used to forward the data (note: must be a mutable buffer such as a bytearray before the slice assignments below)
    UARTStreamHeader = None

    # Create the message header
    if messageCount == 0:
        # looks like all channels are empty
        return 0
    else:
        messageArray = UARTCacheBuffer.getBuffer()
        print(messageArray)
        print('messageCount = ' + str(messageCount) + ' crc = ' + str(UARTStreamCRC))
        UARTFrame[:self.__UARTStreamHeaderFormat.size] = self.createHeader(messageCount, UARTStreamCRC)
        UARTFrame[self.__UARTStreamHeaderFormat.size : self.__UARTStreamHeaderFormat.size + self.__messageFormat * messageCount] = messageArray

        # Its time to finally send the data
        print('UARTFrame = ##' + str(UARTFrame))
        self.__txPort.write(UARTFrame)
        return messageCount
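Since the worry above is about overflowing the transmit side, one possible way to throttle writes from code like sendUARTStream, independent of flow control, is to check how much is still queued before writing. This is only a sketch: safe_write is a hypothetical helper, and it assumes self.__txPort is a pyserial serial.Serial object opened with a write_timeout.

import time
import serial

def safe_write(port: serial.Serial, frame: bytes, max_queued: int = 256) -> None:
    # Wait until the OS transmit queue has drained below a threshold before
    # queueing more data; this is a crude form of back-pressure on our side.
    while port.out_waiting > max_queued:
        time.sleep(0.01)
    try:
        port.write(frame)
    except serial.SerialTimeoutException:
        # only raised if the port was opened with write_timeout set
        pass  # drop or retry the frame as appropriate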