I want to generate some Modbus traffic, but I can't find any examples. In other words, I want to create a Modbus Simulator.
A good start would be to look at the examples folder.
An easy way to have something up and running is to follow these steps, assuming you have pymodbus installed:
Download and run the synchronous_server.py example from the command line.
Download and run the synchronous_client.py example in a different command window.
You are done; from the output of both command windows you will be able to see the Modbus transactions that took place.
If you want to have a continuous stream of Modbus exchanges you can just modify the client to loop somewhere, for instance:
while True:
    rr = client.read_holding_registers(1, 1, unit=UNIT)
    time.sleep(1)
will keep reading a holding register about once every second.
There is no need to change anything on the server; it will keep listening until you kill it with Ctrl+C.
Nothing prevents you from having a different computer for the server and the client, as long as both are connected to the same network and you modify the client to point to the server's address. In particular (line 70 in the example):
client = ModbusClient('localhost', port=5020)
Change localhost to your server's IP address, maybe something like 192.168.x.y.
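Putting it all together, a minimal polling client could look something like the sketch below. This assumes the pymodbus 2.x API used by those examples (the import path changed in later versions); the IP address, port 5020 and unit id are just the values from the example and may well differ in your setup.
from pymodbus.client.sync import ModbusTcpClient as ModbusClient
import time

UNIT = 0x1
client = ModbusClient('192.168.1.10', port=5020)  # your server's IP, or 'localhost'
client.connect()
while True:
    rr = client.read_holding_registers(1, 1, unit=UNIT)  # read one holding register
    if not rr.isError():
        print(rr.registers)
    time.sleep(1)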
In case you are not aware, there are many alternatives to pymodbus for generating Modbus traffic. Modpoll is a classic, but you can also look at qModMaster.
I have an ESP8266 NodeMCU device running a local HTTP server. I followed the quick-start instructions here.
My goal is to have a large number of these devices running in sync. To do that, I wrote this script:
#!/usr/bin/env python
import time
import sys
import socket
import requests

def myFunction():
    # This is what I have right now...
    ipAddresses = ["192.168.1.43", "192.168.1.44"]
    #
    # Instead, I need to search the local network for all the arduinos [ESP8266s]
    # and append their address to the array.
    #
    for address in ipAddresses:
        TurnOnLED(address)

def TurnOnLED(address):
    try:
        r = requests.post('http://' + address + '/LedON')
        r.close()
    except requests.exceptions.ConnectionError:
        pass

# Main
def Main():
    try:
        print("Press CTRL-C to stop.")
        while True:
            myFunction()
            time.sleep(60)
    except KeyboardInterrupt:
        sys.exit(0)

if __name__ == "__main__":
    Main()
This works, and allows me to control all of my devices from my desktop PC. My difficulty is with finding each IP address dynamically. I considered assigning a static IP address, but I would like them to be retail-optimized, so I cannot guarantee any particular IP address being unused out-of-the-box.
For the same reason, I want to be able to install the same arduino code on all of them. This is what I mean by naive. I want my python script to find the IP address of each device over the local network without any [unique] 'help' from the devices themselves. I want to install the same code on each device, and have my script find them all without any additional set up.
My first attempt was using the socket python module and looking for every hostname that began with "ESP-" since those are the first four characters of the factory hostname on all the boards. My query consistently turned up nothing, though.
This answer here has some good information, but my solution runs on a Macintosh, and I will not have the full host name, only "ESP-".
So, now that I know what the real constraints are and what is just what happens to be in the tutorial, here are my 2 cents (keep in mind that I, too, am just a hobbyist about everything that has "voltage", and not even a good one).
1st strategy : PC is server
So, if I assume that your devices are, for example, temperature sensors, and you want to grab all the temperatures frequently, then one strategy could be that those devices all connect to the server every minute, report their temperature, and disconnect; for example, using an HTTPS request to do so.
So host the server on your PC, and have your devices behave as clients.
# Example request from micropython
import urequests
rep = urequests.get(URL)
print(rep.text)
rep.close()
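On the PC side, the matching server can be any web backend that accepts those requests. Just as an illustration (Flask is an arbitrary choice here, and the /report route and JSON payload are my assumptions, not something your devices already send):
# Hypothetical PC-side endpoint the devices would report to
from flask import Flask, request

app = Flask(__name__)
readings = {}  # latest reading per device, keyed by source IP

@app.route("/report", methods=["POST"])
def report():
    readings[request.remote_addr] = request.get_json(silent=True)
    return "ok"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)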
2nd strategy : a third party is server
A variant of that strategy is to have a sort of central data repository that acts as a server. So, I mean, your PC is not the server, nor any ESP, but another machine (which could be one ESP, a Raspberry Pi, or even a droplet or EC2 instance in the cloud, it doesn't matter, as long as you can host a backend on it).
So the ESP8266s are clients of that server. And so is your PC. That server has to be able to answer requests like "set a value" (from a device) and "get the values" (for the PC).
The drawback of those first 2 strategies is that they are fine when the devices are the ones sending data. They fit less well if the devices are (as in your LED example, though I surmise that was just an example) mostly the ones receiving data, because then you would have to wait for an ESP to connect before it can get the message "you need to switch on the LED".
3rd strategy : central registry
Or you can have it both ways. That is, keep your current architecture: when your PC wants something from the ESP8266s, it connects to them (they are the servers) and sends a request.
But, in parallel to that, the ESP8266s also behave as clients to register themselves in a central directory, which could be on the PC or on a third party. The objective of that central directory is not to gather the data from the ESP8266s, just to keep an up-to-date list of them. Each minute, in parallel with their server activity, the ESP8266s send an "I am alive" message to this central directory. The PC (which may or may not be hosting that directory) then just needs to fetch all IPs associated with a not-too-old "I am alive" message to get the list of IPs.
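As a rough sketch of that directory (the /alive and /devices routes, the Flask choice and the two-minute freshness window are all my assumptions):
# Hypothetical central directory: devices POST a heartbeat, the PC asks for recent IPs
import time
from flask import Flask, request, jsonify

app = Flask(__name__)
last_seen = {}  # device IP -> time of last "I am alive" message

@app.route("/alive", methods=["POST"])
def alive():
    last_seen[request.remote_addr] = time.time()
    return "ok"

@app.route("/devices")
def devices():
    cutoff = time.time() - 120  # anything older than 2 minutes is considered gone
    return jsonify([ip for ip, t in last_seen.items() if t > cutoff])

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)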
4th strategy : ARP
Once your PC is on, it can scan the network using ARP requests, with scapy. Search for "scanning local network with scapy"; there are several tutorials showing exactly this.
From there, you get a list of IPs, associated with MAC addresses. And now you need to know which ones are the ESP8266s. Here too you can apply several ideas. For example, you may use the MAC address (its vendor part) to guess which ones are the ESP8266s. Or you may simply try a dummy request on every IP found, to check which ones are ESP8266s (using a specific API of the server code you wrote for them).
Or you may decide to host the server on each ESP8266 on a specific port, like 8123, so that you can quickly rule out devices whose port 8123 is not listening.
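A minimal sketch of such a scan with scapy (the subnet below is an assumption, and ARP scanning usually needs to run with root/administrator privileges):
# Broadcast an ARP "who-has" for every address in the subnet and print the replies
from scapy.all import ARP, Ether, srp

answered, _ = srp(
    Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst="192.168.1.0/24"),
    timeout=2, verbose=False)

for _, reply in answered:
    print(reply.psrc, reply.hwsrc)  # IP and MAC of each host that answered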
5th strategy : don't reinvent the wheel
The best strategy is clearly a mix of my second and my third: having a directory, and a third party handling messages. But that is reinventing message brokers.
There is one well known middleware fitted for the ESP8266 (I mean, for low-profile IoT devices): MQTT.
That needs some more tutorial reading and trial and error on your behalf. You can start here for example; that is just the first example I found by googling "MQTT ESP8266 micropython". There are zillions of resources on that.
It may seem not to be the easiest way (compared to just copying and pasting some code that lists all the alive IPs on a network). But in the long run, if you intend to have many ESP8266s, so many of them that you can't afford to assign them static IPs and simply list their IPs, you probably really need a message broker like that, and preferably not one that you reinvent yourself.
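Just to give an idea of the PC side with the paho-mqtt client (the broker address and the topic names are made up for this example; on the ESP8266 side you would use a micropython MQTT client in the same spirit):
# Hypothetical PC side: subscribe to a status topic and publish a command
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()                 # paho-mqtt 1.x style; 2.x also wants a CallbackAPIVersion argument
client.on_message = on_message
client.connect("192.168.1.10", 1883)   # address of your broker (e.g. mosquitto)
client.subscribe("esp8266/+/status")   # each device publishes under its own id
client.publish("esp8266/all/cmd", "LedON")
client.loop_forever()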
I'm making a command-line IRC client in Python. I want to be able to receive data at the same time as I write messages. With my previous code I could only write two messages, and then it would hang and I couldn't write anything until it received some kind of data.
The question is: can I have one cmd window showing the received data and another one with a constant input prompt waiting for me to write something to send? Maybe with threads?
I've looked through the subprocess library but I don't really know how to code it.
CMD1:
while Connected:
    print socket.recv(1024)
CMD2:
while Connected:
    text = raw_input("Text to send>> ")
    socket.send(text)
(This is pseudocode, not real code.)
The approach you are proposing could be done by making a server-like application plus two client applications that connect to it via localhost to send and receive events. That way you could have two terminals open, connected to the same session of the server.
On the other hand, you should consider a different design approach that uses ncurses, which allows you to build a terminal UI with input and output in the same terminal (two sections, up and down). You can reference: http://gnosis.cx/publish/programming/charming_python_6.html
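If you go the ncurses route, here is a very rough sketch of that two-section layout. The socket handling is deliberately left out; this only shows the split between an output window and an input line, and the /quit command is just an invention for the example.
import curses

def main(stdscr):
    curses.echo()
    height, width = stdscr.getmaxyx()
    out_win = stdscr.subwin(height - 1, width, 0, 0)   # received messages go here
    in_win = stdscr.subwin(1, width, height - 1, 0)    # bottom line used for input
    out_win.scrollok(True)
    while True:
        in_win.erase()
        in_win.addstr(0, 0, "Text to send>> ")
        text = in_win.getstr().decode()
        if text == "/quit":
            break
        out_win.addstr("me: " + text + "\n")   # here you would also socket.send(text)
        out_win.refresh()

curses.wrapper(main)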
Short version of my question:
How do I design a single Python script that can listen and respond to inputs received via HTTP or a serial port, and also initiate communications via these channels on its own? My problem is that I don't understand how to design a single script that both (i) uses a web framework to listen on some port for HTTP inputs, and (ii) also does other work that's independent of incoming HTTP requests.
Long version:
I want to use Python to design a system that does the following:
Listens to a serial port for occasional reports. Specifically, I have a network of JeeNode sensors (wireless Arduino-compatible modules) that talk to a central JeeLink, which connects to my computer via USB and talks to my Python script via pySerial.
Listens to a web URL for occasional inputs. Specifically, users send commands to the system via SMS to a Twilio number. Twilio intercepts the SMS messages and posts them to a URL I designate, and I use the Bottle micro web-framework to listen for new HTTP requests.
Responds to both types (serial and HTTP) of inputs. For example, if a user texts the command "Sleep", I want to (i) tell the sensors to go to sleep via the serial port -> JeeLink (which will then forward the command onto the remotes); and (ii) reply to the sender -- and maybe other users -- that the command has been received and is being executed.
Occasionally initiates its own communications to users (via HTTP -> Twilio -> SMS) or remote sensors (via serial -> JeeLink) without any precipitating input event. Two examples: (1) I want to report out to users or remote sensors every N minutes even if I haven't received any new inputs. (2) I want to tell users when remotes have actually entered Sleep mode. Because the remotes are battery-powered, they spend most of the time in an inaccessible low-power mode. They can only receive new commands from the JeeLink when they initiate a wireless "check-in" every 5 min. So while technically remotes go to sleep (or wake up, etc.) in response to a user command, commands and responses are effectively independent.
My problem is that all of the usage examples of web frameworks I've seen seem to assume that all precipitating events occur via HTTP requests. I can create a Bottle object and use decorators to bind code to that object that gets executed whenever it sees an HTTP request matching some specified URL path. But I don't know how to do that while simultaneously doing other work that's independent of HTTP events, for example listening to the serial port.
After struggling a lot, the potential solutions I'm considering now are:
Splitting the functionality into separate scripts. A.py listens for text messages via HTTP and writes the relevant information to some database; B.py continuously reads the database for new records and reacts accordingly, as well as listening to the serial monitor and doing other work. This seems like it would work fine, but it feels inelegant, and I suspect there's a simpler solution I'm unaware of.
Maybe the answer is related to Python decorators? I use various decorators to specify the URL paths that, when a matching HTTP request comes in, execute the code bound to the decorator. So I'm guessing that maybe there's a way to specify some other kind of decorator that, rather than listening for HTTP requests, gets executed when my "main" Python code tells it to? But I don't know enough about decorators to know if this is true.
It seems like you are trying to write an asynchronous application to manage your network of nodes via HTTP. You want to respond to incoming communications on multiple channels as they occur, you want to initiate communications on a schedule, on multiple channels, and you want those two forms of communication to interact. All of these communications are with an outside world that is slow, so it behooves you not to block if you don't need to.
It will probably be easiest to maintain your system if you organize your code into several Python modules, split by their area of concern - serial interface code, HTTP interface code, common processing code-paths, etc. Weave those components together in a central control module, which imports your libraries, and knows how to start and stop cleanly. Then you can test the serial interface independent of the web interface, and potentially reuse some of those Python modules in other projects.
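To make that concrete, one way (among several) to keep both channels responsive is to run the Bottle app in a background thread and keep the serial polling in the main loop, handing work between them through a queue. This is only a sketch: the serial port name, baud rate, URL path and form field are placeholders for whatever your JeeLink and Twilio setup actually use.
import threading, queue
import serial          # pySerial
import bottle

commands = queue.Queue()   # HTTP handlers push work here; the main loop consumes it

@bottle.post('/sms')
def incoming_sms():
    commands.put(bottle.request.forms.get('Body', ''))   # the text of the SMS
    return 'OK'

def run_web():
    bottle.run(host='0.0.0.0', port=8080, quiet=True)

threading.Thread(target=run_web, daemon=True).start()

jeelink = serial.Serial('/dev/ttyUSB0', 57600, timeout=1)
while True:
    line = jeelink.readline()              # sensor report, if any arrived
    if line:
        print('serial:', line)
    while not commands.empty():            # SMS commands queued by the web thread
        jeelink.write(commands.get().encode() + b'\n')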
I want to write a Python script that will check the users local network for other instances of the script currently running.
For the purposes of this question, let's say that I'm writing an application that runs solely via the command line, and will just update the screen when another instance of the application is "found" on the local network. Sample output below:
$ python question.py
Thanks for running ThisApp! You are 192.168.1.101.
Found 192.168.1.102 running this application.
Found 192.168.1.104 running this application.
What libraries/projects exist to help facilitate something like this?
One way to do this would be for the application in question to broadcast UDP packets, while your application receives them from the different nodes and displays them. The Twisted networking framework provides facilities for doing such a job, and its documentation includes some simple examples too.
Well, you could write something using the socket module. You would have to have two programs though: a server on the user's local computer, and a client program that interfaces with the server. The server would also use the select module to listen for multiple connections. The client program would then send something to the server when it is run, or whenever you want it to. The server could then print out which connections it is maintaining, including details such as the IP address.
This is documented extremely well at this link, more so than you need but it will explain it to you as it did to me. http://ilab.cs.byu.edu/python/
You can try broadcast UDP; I found an example here: http://vizible.wordpress.com/2009/01/31/python-broadcast-udp/
You can have a server-based solution: a central server where clients register themselves, and query for other clients being registered. A server framework like Twisted can help here.
In a peer-to-peer setting, push technologies like UDP broadcasts can be used, where each client puts out a heartbeat packet on the network every so often, for the others to receive. Basic modules like socket would help with that.
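A bare-bones sketch of that heartbeat using nothing but the socket module is shown below; the port number and the message content are arbitrary, and you will also hear your own broadcasts, so filter out your own address if needed.
import socket, threading, time

PORT = 50000

def announce():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    while True:
        s.sendto(b"ThisApp-heartbeat", ("255.255.255.255", PORT))
        time.sleep(10)

def listen():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    while True:
        data, (ip, _) = s.recvfrom(1024)
        if data == b"ThisApp-heartbeat":
            print("Found", ip, "running this application.")

threading.Thread(target=announce, daemon=True).start()
listen()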
Alternatively, you could go for a pull approach, where the interested peer needs to discover the others actively. This is probably the least straightforward. For one, you need to scan the network, i.e. find out which IPs belong to the local network, and go through them. Then you would need to contact each IP in turn. If your program opens a TCP port, you could try to connect to it and find out whether your program is running there. If you want your program to be completely ignorant of these queries, you might need to open an ssh connection to the remote IP and scan the process list for your program. All this might involve various modules and libraries; one you might want to look at is execnet.
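For the pull variant, a crude sketch of probing a known TCP port across the subnet (the subnet, the port number and the timeout are assumptions for the example):
import socket

PORT = 8123   # the port your program would be listening on
for host in range(1, 255):
    ip = "192.168.1.%d" % host
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(0.2)
    if s.connect_ex((ip, PORT)) == 0:   # 0 means the connection succeeded
        print("Found", ip, "running this application.")
    s.close()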
I have a GSM modem that disconnects after a while, maybe because of low signal. I am just wondering whether there is an AT command that can detect the disconnection and re-establish the connection.
Is there a way in code (preferably Python) that I can detect the disconnection and re-establish the connection?
Gath
Depending on what type of connection, circuit switched (CS) or packet switched (PS), the monitoring will be a little bit different. To detect a disconnect you can enable UR (unsolicited result) code AT+CPSB=1 to monitor PDP context activity (aka packet switched connections). For circuit switched calls you can monitor with the +CIEV: UR code enabled with AT+CMER=3,0,0,2.
To re-establish the connection you have to set up the connection again. For CS you will either have to know the phone number dialed, or you can use the special form of ATD, ATDL [1] which will dial the last dialed number. You can use ATDL for PS as well if the call was started with ATD (i.e. "ATD*99*....") which is quite common, but I do not think there is any way if started with AT+CGDATA for instance.
However, none of the above related to ATD matters, because it is not what you want. For CS you might set up a call from your python script, but then so what? After receiving CONNECT, all the data traffic would be coming in on the serial connection that your python script is using. And for PS the connection will not even finish successfully unless the phone receives PPP traffic from the PC as part of connection establishment. Do you intend your python script to supply that?
What you really want is to trigger your PC to try to connect again, whether this is standard operating system dial up networking or some special application launching it. So monitor the modem with a python script and then take appropriate action on the PC side to re-establish the connection.
[1]
Side note to ATDL: notice that if you want to repeat the last voice call you should still terminate with a semicolon, i.e. ATDL;, otherwise you would start a data call.
Here is how I do it with Telit devices:
I use AT+CGREG=1 to subscribe to unsolicited messages. Extract from documentation:
+CGREG - GPRS Network Registration Status
AT+CGREG=[<n>]
Set command controls the presentation of an unsolicited result code
+CGREG: (see format below).
Parameter:
<n> - result code presentation mode
0 - disable network registration unsolicited result code
1 - enable network registration unsolicited result code; if there is a change in the terminal GPRS network registration status, it is issued the unsolicited result code:
+CGREG: <stat>
And I wait on the modem's serial line for +CGREG messages. When something comes, I check whether <stat> is 1 (connected to the home network) or 5 (connected in roaming).
NOTE: A different +CGREG response comes when issuing AT+CGREG?, but it is not hard to tell the two apart.
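To give an idea with pySerial (the port name and baud rate are guesses, adjust them for your modem):
import serial

modem = serial.Serial("/dev/ttyUSB0", 115200, timeout=5)
modem.write(b"AT+CGREG=1\r")          # enable the unsolicited +CGREG result code

while True:
    line = modem.readline().decode(errors="ignore").strip()
    if line.startswith("+CGREG:") and "," not in line:
        # the unsolicited form is "+CGREG: <stat>"; the AT+CGREG? reply has two fields
        stat = int(line.split(":")[1])
        if stat in (1, 5):            # 1 = home network, 5 = roaming
            print("registered")
        else:
            print("lost registration, trigger a reconnect here")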
You can try to check the signal strength on a regular basis with AT+CSQ. If the signal goes below a given threshold, consider that you are disconnected and force a new connection.
You can use the very nice pySerial Python library (http://pyserial.sourceforge.net/) to send the AT commands to the modem.
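For example, a rough polling loop along those lines (the port name, the 30-second interval and the threshold of 10 are arbitrary choices):
import re, time, serial

modem = serial.Serial("/dev/ttyUSB0", 115200, timeout=2)

def signal_strength():
    modem.write(b"AT+CSQ\r")
    reply = modem.read(100).decode(errors="ignore")   # expect "+CSQ: <rssi>,<ber>"
    match = re.search(r"\+CSQ: (\d+),", reply)
    return int(match.group(1)) if match else 99       # 99 means not known/detectable

while True:
    rssi = signal_strength()
    if rssi == 99 or rssi < 10:
        print("weak or no signal, force a new connection here")
    time.sleep(30)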
I hope it helps