The pymodbus connection sometimes doesn't respond? - python

I'm use "pymodbus" lib to connect PLC devices. The device is used Modbus RTU over TCP that devices will return the temperature and humidity of the environment.
map address list
0001: temperature
0002: humidity
I performed the read once to get the values and it succeeded.
But when I read in a while loop, I sometimes get errors.
I don't know why.
code:
from time import sleep
from pymodbus.client.sync import ModbusTcpClient
from pymodbus.framer.rtu_framer import ModbusRtuFramer
from pymodbus.register_read_message import ReadHoldingRegistersResponse
client = ModbusTcpClient(host='192.168.1.1', port=5000, framer=ModbusRtuFramer)
client.connect()
while True:
    rr = client.read_holding_registers(0, 2, unit=1)
    if isinstance(rr, ReadHoldingRegistersResponse):
        temp = rr.registers
        print(temp)
    else:
        print('error')
    sleep(1)
client.close()
output:
> ...
> [189, 444]
> [189, 443]
> [189]
> error
> error
> ...
We can see that sometimes the result is obtained normally, sometimes the result is incomplete, and sometimes no result is available at all.
What should I do to solve this problem? I want to monitor this device. Thank you.

Yes, I see this all the time in my pymodbus code. I suspect there's something wrong with the implementation when doing successive reads. What I do, quite simply, is retry the failed read after a slight delay, and that usually gets it working again. Alternatively, try closing and re-connecting the client and re-attempting the read. Also try increasing the sleep time. Let me know how it goes!
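For what it's worth, a minimal sketch of that retry approach, assuming the same client setup as in the question (the helper name, retry count and delay are arbitrary):

from time import sleep
from pymodbus.register_read_message import ReadHoldingRegistersResponse

def read_with_retry(client, address=0, count=2, unit=1, retries=3, delay=0.5):
    """Retry a holding-register read a few times before giving up."""
    for _ in range(retries):
        rr = client.read_holding_registers(address, count, unit=unit)
        # pymodbus returns an error object instead of raising on a failed read,
        # so check the type and that the frame was complete
        if isinstance(rr, ReadHoldingRegistersResponse) and len(rr.registers) == count:
            return rr.registers
        sleep(delay)  # give the RTU device a moment before retrying
    return None

If this still returns None after the retries, closing and re-connecting the client before the next attempt would be the next thing to try.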

You can try printing what you have in your temp variable when it's not an instance of ReadHoldingRegistersResponse - it may help.
What I sometimes used to get when the device hadn't sent the response yet is:
Modbus Error: [Input/Output] No Response received from the remote unit
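A small sketch of that kind of logging, reusing the loop body from the question (printing the raw response object shows whether it was a short frame or the I/O error quoted above):

rr = client.read_holding_registers(0, 2, unit=1)
if isinstance(rr, ReadHoldingRegistersResponse):
    print(rr.registers)
else:
    # print the raw object instead of just 'error'; pymodbus error objects
    # carry messages like the one quoted above
    print('error:', rr)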

Related

How do you read strings from a data register using pymodbus?

I'm trying to use my Raspberry Pi as a client for Modbus communication between it and a Siemens S7-1200 PLC. I got it to read integers from holding registers, but not strings. When I search online for a way to read strings from the PLC holding registers, nothing useful comes up. I tried a program online that is supposed to be able to do it, but I just keep getting an error every time I run it. Can anyone help? I have the code for the program and the error message below.
from pymodbus.client.sync import ModbusTcpClient as ModbusClient
from pymodbus.payload import BinaryPayloadDecoder
from pymodbus.constants import Endian
from pymodbus.compat import iteritems

if __name__ == '__main__':
    client = ModbusClient("192.168.0.1", port=502, auto_open=True)
    client.connect()
    result = client.read_holding_registers(1, 1, unit=1)
    print("Result : ", result)
    decoder = BinaryPayloadDecoder.fromRegisters(result.registers, byteorder=Endian.Big, wordorder=Endian.Big)
    decoded = {
        'name': decoder.decode_string(10).decode(),
    }
    for name, value in iteritems(decoded):
        print("%s\t" % name, value)
    client.close()
[Error message screenshot]
[Data register screenshot]
[PLC MB server block setup screenshot]
For example, say you want to read the speed of an asynchronous motor and you use a drive. You have a Raspberry Pi, and you use Python and the Modbus TCP protocol.
The speed of the motor is registered at 40001.
result = client.read_holding_registers(1, 1, unit=1) reads just the 40001 address.
result = client.read_input_registers(0x01, 1, unit=0x01) reads just the 30001 address.
Your Decoding is fine.
It's your setup & request that's the issue.
PLC Setup looks wrong.
You are also requesting only 1 register to read.
Your request is not getting any response (not even an error).
Coke = b'436f6b65' = [17263, 27493]
Pizza = b'50697a7a6100' = [20585, 31354, 24832]
Cookies = b'436f6f6b69657300' = [17263, 28523, 26981, 29440]
Pad out with 0s to keep the register lengths the same.
Store them in PLC Modbus registers that can be read.
Coke = b'436f6b6500000000' = [17263, 27493, 0, 0]
Pizza = b'50697a7a61000000' = [20585, 31354, 24832, 0]
Cookies = b'436f6f6b69657300' = [17263, 28523, 26981, 29440]
Request 4 registers per value and decode.
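A rough sketch of that request-and-decode step, assuming the strings are stored padded as in the table above and start at register 0 on unit 1 (both the offset and the unit id are assumptions):

from pymodbus.payload import BinaryPayloadDecoder
from pymodbus.constants import Endian

# read 4 registers = 8 bytes, enough for one padded string from the table above
result = client.read_holding_registers(0, 4, unit=1)
decoder = BinaryPayloadDecoder.fromRegisters(result.registers,
                                             byteorder=Endian.Big,
                                             wordorder=Endian.Big)
# decode 8 bytes and strip the zero padding
name = decoder.decode_string(8).decode().rstrip('\x00')
print(name)  # e.g. 'Coke'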
You can't take string values from Modbus communication.

Checking FTP connection is valid using NOOP command

I'm having trouble with one of my scripts seemingly disconnecting from my FTP during long batches of jobs. To counter this, I've attempted to make a module as shown below:
def connect_ftp(ftp):
    print "ftp1"
    starttime = time.time()
    retry = False
    try:
        ftp.voidcmd("NOOP")
        print "ftp2"
    except:
        retry = True
        print "ftp3"
    print "ftp4"
    while (retry):
        try:
            print "ftp5"
            ftp.connect()
            ftp.login('LOGIN', 'CENSORED')
            print "ftp6"
            retry = False
            print "ftp7"
        except IOError as e:
            print "ftp8"
            retry = True
            sys.stdout.write("\rTime disconnected - "+str(time.time()-starttime))
            sys.stdout.flush()
            print "ftp9"
I call the function using only:
ftp = ftplib.FTP('CENSORED')
connect_ftp(ftp)
However, I've traced how the code runs using print lines, and on the first use of the module (before the FTP is even connected to) my script runs ftp.voidcmd("NOOP") and no exception is raised, so no attempt is made to connect to the FTP initially.
The output is:
ftp1
ftp2
ftp4
ftp success  # this is printed after the function is called
I admit my code isn't the best or prettiest, and I haven't implemented anything yet to make sure I'm not reconnecting constantly if I keep failing to reconnect, but I can't work out why this isn't working for the life of me so I don't see a point in expanding the module yet. Is this even the best approach for connecting/reconnecting to an FTP?
Thank you in advance
This connects to the server:
ftp = ftplib.FTP('CENSORED')
So, naturally the NOOP command succeeds, as it does not need an authenticated connection.
Your connect_ftp is correct, except that you need to specify a hostname in your connect call.
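In other words, something like this minimal sketch (hostname censored as in the question):

while (retry):
    try:
        ftp.connect('CENSORED')  # pass the hostname explicitly when reconnecting
        ftp.login('LOGIN', 'CENSORED')
        retry = False
    except IOError as e:
        retry = True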

OPC UA server implementation

Referencing the following video: an OPC UA sample server written in Python using its opcua module.
I am trying to implement this server on my own, so that I can later make modifications to it. I have PyDev for Eclipse installed as used in the video.
Here is the code used in the video:
from opcua import Server
from random import randint
import datetime
import time
server = Server()
url = "opc.tcp://192.168.0.8:4840"
server.set_endpoint(url)
name = "OPC_SIMULATION_SERVER"
addspace = server.register_namespace(name)
node = server.get_objects_node()
Param = node.add_object(addspace, "Parameters")
Temp = Param.add_variable(addspace, "Temperature", 0)
Press = Param.add_variable(addspace, "Pressure", 0)
Time = Param.add_variable(addspace, "Time", 0)
Temp.set_writable()
Press.set_writable()
Time.set_writable()
server.start()
print("Server started at {}".format(url))
while True:
    Temperature = randint(10,50)
    Pressure = randint(200, 999)
    TIME = datetime.datetime.now()
    print(Temperature, Pressure, TIME)
    Temp.set_value(Temperature)
    Press.set_value(Pressure)
    Time.set_value(TIME)
    time.sleep(2)
When I try to run this code, I get the following errors, which appear to be related to one another:
[Error screenshot]
If you could help me identify the source of errors, and possible resolutions to them I would appreciate it.
Edit: I was able to configure the server and it ran as expected. However, I now wish to connect it to a client to read in the data. When I try to do this, the connection is established but no data is read in. What are some potential fixes to this issue?
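For the client side, a minimal sketch with the same opcua package might look like this (the namespace index 2 is an assumption; it depends on what register_namespace returned on the server):

from opcua import Client

client = Client("opc.tcp://192.168.0.8:4840")
client.connect()
try:
    objects = client.get_objects_node()
    # browse to the "Parameters" object created by the server and read one variable
    params = objects.get_child(["2:Parameters"])
    temp = params.get_child(["2:Temperature"]).get_value()
    print(temp)
finally:
    client.disconnect()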

Python 3 Read data from URL [duplicate]

I have this simple minimal 'working' example below that opens a connection to google every two seconds. When I run this script when I have a working internet connection, I get the Success message, and when I then disconnect, I get the Fail message and when I reconnect again I get the Success again. So far, so good.
However, when I start the script when the internet is disconnected, I get the Fail messages, and when I connect later, I never get the Success message. I keep getting the error:
urlopen error [Errno -2] Name or service not known
What is going on?
import urllib2, time
while True:
    try:
        print('Trying')
        response = urllib2.urlopen('http://www.google.com')
        print('Success')
        time.sleep(2)
    except Exception, e:
        print('Fail ' + str(e))
        time.sleep(2)
This happens because the DNS name "www.google.com" cannot be resolved. If there is no internet connection the DNS server is probably not reachable to resolve this entry.
It seems I misread your question the first time. The behaviour you describe is, on Linux, a peculiarity of glibc. It only reads "/etc/resolv.conf" once, when loading. glibc can be forced to re-read "/etc/resolv.conf" via the res_init() function.
One solution would be to wrap the res_init() function and call it before calling getaddrinfo() (which is indirectly used by urllib2.urlopen()).
You might try the following (still assuming you're using Linux):
import ctypes
libc = ctypes.cdll.LoadLibrary('libc.so.6')
res_init = libc.__res_init
# ...
res_init()
response = urllib2.urlopen('http://www.google.com')
This might of course be optimized by waiting until "/etc/resolv.conf" is modified before calling res_init().
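A small sketch of that optimization, assuming Linux and the ctypes res_init wrapper from the snippet above (the helper function here is hypothetical):

import os

_last_mtime = None

def maybe_res_init():
    # call res_init() only if /etc/resolv.conf changed since the last check
    global _last_mtime
    try:
        mtime = os.stat('/etc/resolv.conf').st_mtime
    except OSError:
        return  # file missing; nothing to re-read
    if mtime != _last_mtime:
        _last_mtime = mtime
        res_init()  # the ctypes-wrapped function defined above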
Another solution would be to install e.g. nscd (name service cache daemon).
For me, it was a proxy problem.
Running the following before import urllib.request helped
import os
os.environ['http_proxy']=''
response = urllib.request.urlopen('http://www.google.com')

urlopen error 10045, 'address already in use' while downloading in Python 2.5 on Windows

I'm writing code that will run on Linux, OS X, and Windows. It downloads a list of approximately 55,000 files from the server, then steps through the list of files, checking if the files are present locally. (With SHA hash verification and a few other goodies.) If the files aren't present locally or the hash doesn't match, it downloads them.
The server-side is plain-vanilla Apache 2 on Ubuntu over port 80.
The client side works perfectly on Mac and Linux, but gives me this error on Windows (XP and Vista) after downloading a number of files:
urllib2.URLError: <urlopen error <10048, 'Address already in use'>>
This link: http://bytes.com/topic/python/answers/530949-client-side-tcp-socket-receiving-address-already-use-upon-connect points me to TCP port exhaustion, but "netstat -n" never showed me more than six connections in "TIME_WAIT" status, even just before it errored out.
The code (called once for each of the 55,000 files it downloads) is this:
request = urllib2.Request(file_remote_path)
opener = urllib2.build_opener()
datastream = opener.open(request)
outfileobj = open(temp_file_path, 'wb')
try:
    while True:
        chunk = datastream.read(CHUNK_SIZE)
        if chunk == '':
            break
        else:
            outfileobj.write(chunk)
finally:
    outfileobj = outfileobj.close()
    datastream.close()
UPDATE: I find by grepping the log that it enters the download routine exactly 3998 times. I've run this multiple times and it fails at 3998 each time. Given that the linked article states that available ports are 5000-1025=3975 (and some are probably expiring and being reused), it's starting to look a lot more like the linked article describes the real issue. However, I'm still not sure how to fix this. Making registry edits is not an option.
If it is really a resource problem (freeing OS socket resources), try this:
request = urllib2.Request(file_remote_path)
opener = urllib2.build_opener()
datastream = None  # initialised so the check below works even if every open fails
retry = 3  # 3 tries
while retry:
    try:
        datastream = opener.open(request)
    except urllib2.URLError, ue:
        if ue.reason.find('10048') > -1:
            if retry:
                retry -= 1
            else:
                raise urllib2.URLError("Address already in use / retries exhausted")
        else:
            retry = 0
    if datastream:
        retry = 0
outfileobj = open(temp_file_path, 'wb')
try:
    while True:
        chunk = datastream.read(CHUNK_SIZE)
        if chunk == '':
            break
        else:
            outfileobj.write(chunk)
finally:
    outfileobj = outfileobj.close()
    datastream.close()
If you want, you can insert a sleep, or make it OS dependent.
On my Win XP the problem doesn't show up (I reached 5000 downloads).
I watch my processes and network with Process Hacker.
Thinking outside the box, the problem you seem to be trying to solve has already been solved by a program called rsync. You might look for a Windows implementation and see if it meets your needs.
You should seriously consider copying and modifying this pyCurl example for efficient downloading of a large collection of files.
Instead of opening a new TCP connection for each request you should really use persistent HTTP connections - have a look at urlgrabber (or alternatively, just at keepalive.py for how to add keep-alive connection support to urllib2).
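As a rough illustration of the idea, a persistent connection with the standard library's httplib could look like the sketch below (the host name and file list are placeholders, and the whole body is read into memory for brevity):

import httplib

conn = httplib.HTTPConnection('example.com')  # one TCP connection reused for every request
for remote_path, temp_file_path in files_to_fetch:  # hypothetical list of (path, local file) pairs
    conn.request('GET', remote_path)
    response = conn.getresponse()
    outfileobj = open(temp_file_path, 'wb')
    try:
        # read the whole body so the connection can be reused for the next request
        outfileobj.write(response.read())
    finally:
        outfileobj.close()
conn.close()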
All indications point to a lack of available sockets. Are you sure that only 6 are in TIME_WAIT status? If you're running so many download operations, it's very likely that netstat overruns your terminal buffer. I find that netstat overruns my terminal during normal usage periods.
The solution is to either modify the code to reuse sockets or to introduce a timeout. It also wouldn't hurt to keep track of how many open sockets you have, to optimize the waiting. The default timeout on Windows XP is 120 seconds, so you want to sleep for at least that long if you run out of sockets. Unfortunately it doesn't look like there's an easy way to check from Python when a socket has closed and left the TIME_WAIT status.
Given the asynchronous nature of the requests and timeouts, the best way to do this might be in a thread. Make each thread sleep for 2 minutes before it finishes. You can either use a Semaphore or limit the number of active threads to ensure that you don't run out of sockets.
Here's how I'd handle it. You might want to add an exception clause to the inner try block of the fetch section, to warn you about failed fetches.
import time
import threading
import Queue
import urllib2

# assumes url_queue is a Queue object populated with tuples in the form of (url_to_fetch, temp_file)
# also assumes that TotalUrls is the size of the queue before any threads are started.

class urlfetcher(threading.Thread):
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self.queue = queue

    def run(self):
        try:  # needed to handle the Empty exception raised by an empty queue.
            file_remote_path, temp_file_path = self.queue.get(False)
        except Queue.Empty:
            return
        request = urllib2.Request(file_remote_path)
        opener = urllib2.build_opener()
        datastream = opener.open(request)
        outfileobj = open(temp_file_path, 'wb')
        try:
            while True:
                chunk = datastream.read(CHUNK_SIZE)
                if chunk == '':
                    break
                else:
                    outfileobj.write(chunk)
        finally:
            outfileobj = outfileobj.close()
            datastream.close()
        time.sleep(120)  # sleep 2 minutes before finishing, matching the TIME_WAIT period above
        self.queue.task_done()

# elsewhere (in the main code):
started = 0
while started < TotalUrls:
    if threading.activeCount() < 3975:  # hard limit of available ports
        t = urlfetcher(url_queue)
        t.start()
        started += 1
    else:
        time.sleep(2)
url_queue.join()
Sorry, my python is a little rusty, so I wouldn't be surprised if I missed something.
