I'm making a command-line IRC client in Python. I want to be able to receive data at the same time as I write messages; in my previous attempt I could only write two messages before it locked up, and I couldn't type anything else until some data was received.
The question is: can I have one terminal window showing the received data and another one with a constant input prompt waiting for me to write something to send? Maybe with threads?
I've looked through the subprocess library but I don't really know how to code it.
CMD1:
while Connected:
    print socket.recv(1024)
CMD2:
while Connected:
    text = raw_input("Text to send>> ")
    socket.send(text)
(This is pseudocode, not real code.)
The approach you are proposing could be done by making a server-like application and two client applications that connect over localhost to send and receive events. That way you could have two terminals open, both connected to the same server session.
On the other hand, you could consider a different design that uses ncurses, which lets you build a terminal UI with input and output in the same terminal (two sections, top and bottom). You can reference: http://gnosis.cx/publish/programming/charming_python_6.html
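The "maybe with threads?" idea from the question can also work in a single terminal, at the cost of received lines interleaving with the prompt (which is exactly what an ncurses layout would avoid). A rough sketch, assuming a plain TCP connection to a placeholder server address:

import socket
import threading

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(('irc.example.net', 6667))  # placeholder server address/port

def receiver():
    # Print whatever the server sends until the connection closes.
    while True:
        data = sock.recv(1024)
        if not data:
            break
        print(data.decode(errors='replace'))

threading.Thread(target=receiver, daemon=True).start()

while True:
    text = input("Text to send>> ")
    sock.send((text + "\r\n").encode())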
I want to generate some Modbus traffic, but I can't find any examples. In other words, I want to create a Modbus Simulator.
A good start would be to look at the examples folder.
An easy way to have something up and running is to follow these steps, assuming you have pymodbus installed:
Download and run the synchronous_server.py example from the command line.
Download and run the synchronous_client.py example in a different command window.
You are done; from the output of both command windows you will be able to see the Modbus transactions that took place.
If you want to have a continuous stream of Modbus exchanges you can just modify the client to loop somewhere, for instance:
while True:
    rr = client.read_holding_registers(1, 1, unit=UNIT)
    time.sleep(1)
will keep reading a holding register about once every second.
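For reference, a more complete sketch of such a looping client, assuming pymodbus 2.x (where the synchronous TCP client lives in pymodbus.client.sync); the unit id and port mirror the bundled examples:

import time
from pymodbus.client.sync import ModbusTcpClient as ModbusClient

UNIT = 0x1  # slave/unit id used by the example server

client = ModbusClient('localhost', port=5020)
client.connect()
try:
    while True:
        rr = client.read_holding_registers(1, 1, unit=UNIT)
        if not rr.isError():
            print(rr.registers)  # current value(s) of the register
        time.sleep(1)
finally:
    client.close()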
There is no need to change anything on the server; it will always be listening until you kill it with Ctrl+C.
Nothing will prevent you from having the server and client on different computers, as long as both are connected to the same network and you modify the client to point to the server's address. In particular (line 70 of the example):
client = ModbusClient('localhost', port=5020)
Change localhost to your server's IP address, maybe something like 192.168.x.y.
In case you are not aware, there are many alternatives to pymodbus for generating Modbus traffic. Modpoll is a classic, but you can also look at qModMaster.
I have set up a Raspberry Pi connected to an LED strip which is controllable from my phone via a Node server I have running on the RasPi. It triggers a simple python script that sets a colour.
I'm looking to expand the functionality so that I have a Python script continuously running, and I can send colours to it; it will consume the new colour and display both the old and new colour side by side. I.e. the Python script can receive commands and manage state.
I've looked into whether to use a simple loop or a daemon for this, but I don't understand how to both run a script continuously and receive new commands.
Is it better to keep state in the Node server and keep sending a lot of simple commands to a basic python script or to write a more involved python script that can receive few simpler commands and continuously update the lights?
IIUC, you don't necessarily need to have the Python script running continuously. It just needs to store state, and you can do this by writing the state to a file. The script can then read the last state file at startup, decide what to do from there, perform its action, then update the state file.
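A minimal sketch of that file-based approach, assuming a JSON state file and that the new colour arrives as a command-line argument (both are assumptions, not part of the question):

import json
import os
import sys

STATE_FILE = "/home/pi/led_state.json"  # assumed location

def load_state():
    # Read the last saved state, or fall back to a default on first run.
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {"colour": "off"}

def save_state(state):
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

state = load_state()
old_colour, new_colour = state["colour"], sys.argv[1]
# ... drive the LED strip with old_colour and new_colour side by side here ...
state["colour"] = new_colour
save_state(state)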
In case you do want to actually run the script continuously, you need a way to accept commands. The simplest way for a daemon to accept commands is probably through signals; you can use custom signals, e.g. SIGUSR1 and SIGUSR2, to send and receive these notifications. This may be sufficient if your daemon only needs to accept very simple requests.
For more complex requests where you need to actually accept messages, you can listen on a Unix socket or a TCP socket. The socket module in the standard library can help you with that. If you want to build a more complex command server, you could even consider running a full HTTP server, though that looks like overkill for the current situation.
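A minimal sketch of the signal approach; the mapping of SIGUSR1/SIGUSR2 to particular colours is just an illustrative assumption:

import signal
import time

current_colour = "off"

def to_red(signum, frame):
    global current_colour
    current_colour = "red"

def to_blue(signum, frame):
    global current_colour
    current_colour = "blue"

signal.signal(signal.SIGUSR1, to_red)   # trigger with: kill -USR1 <pid>
signal.signal(signal.SIGUSR2, to_blue)  # trigger with: kill -USR2 <pid>

while True:
    # ... drive the LED strip with current_colour here ...
    time.sleep(1)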
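And a minimal sketch of the Unix-socket variant; the socket path and the one-line text protocol are assumptions:

import os
import socket

SOCK_PATH = "/tmp/led.sock"
if os.path.exists(SOCK_PATH):
    os.remove(SOCK_PATH)

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(SOCK_PATH)
server.listen(1)

while True:
    conn, _ = server.accept()
    with conn:
        colour = conn.recv(1024).decode().strip()  # e.g. "red" sent by the Node app
        print("new colour:", colour)
        # ... update the LED strip here ...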
Is it better to keep state in the Node server and keep sending a lot of simple commands to a basic python script or to write a more involved python script that can receive few simpler commands and continuously update the lights?
There's no straightforward answer to that. It depends on the case: how complex the state is, how frequently you need to change colour, how familiar you are with the languages, etc.
Another option is to have the Node app call the Python script as a child process, pass it any needed variables, and read Python's output as well, like so:
var exec = require('child_process').exec;
var child = exec('python file.py var1 var2', function (error, stdout, stderr) {
    console.log(stdout);  // whatever the Python script printed
});
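The Python side would just read the variables from sys.argv and print whatever the Node callback should capture; file.py and the variable names are placeholders carried over from the snippet above:

# file.py: reads the two variables passed by the Node exec() call above
# and prints a result that ends up in the callback's stdout argument.
import sys

var1, var2 = sys.argv[1], sys.argv[2]
print("got {0} and {1}".format(var1, var2))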
Collective internet,
I am a very new programmer who has given myself a specific project to teach myself coding. I work with a lot of equipment that can take TCP commands, so I set out to build a system of buttons that will send a different command per button. I got myself a Raspberry Pi 3B and took online classes on Python. I've gotten reasonably far on my own (I've got the buttons working how I want!), but where I've been stuck is sending TCP commands.
To be more specific: I am sending data and it is being received but the string command is not being encoded properly. The commands are functional when I execute them in a telnet session, but obviously I want them executed as part of my script. The commands don't specify that they need to be received over a telnet session and, by other means, I've had these commands work as TCP commands exterior to a telnet session. I read about a telnet module for Python but I don't think I should need it.
I verified packet delivery with wireshark. I captured the packets sent by my script and the packets sent by the telnet session and they're similar but not the same. Horseshoes and hand grenades, right? My current method has been to just preface the string (within ') with a lower case b. I also tried putting .encode() after the string (omitting the b in that situation).
The string command has the format:
setInput "InputName" Value
So for my use case, I'm setting the input named "One" to a value of 1:
setInput "One" 1
So as you can see in my script (inserted below) I ended up using:
s.sendall(b'setInput "One" 1')
But it's not quite sending the right information because it is not working and it doesn't look the same in wireshark.
TL;DR: I'm trying to send packets via TCP but they're not being encoded properly.
Ultimately, my question is if I am even headed in the right direction using these commands and just need a different means to encode the string or if I need to explore another direction entirely (perhaps the telnet module?)
Here is the script I've been using to test and the wireshark output of my script:
import socket
import time
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('192.168.100.40', 3040))
print('connected')
time.sleep(2)
s.sendall(b'setInput "One" 1')
print('sent increase')
time.sleep(2)
s.sendall(b'setInput "One" 0')
print('sent decrease')
Wireshark log of my script
Here is the wireshark output of the telnet session that was successful:
Wireshark log of the telnet session
Any and all help is appreciated. I looked far and wide and can't seem to find any cases similar to mine.
EDITS: Sorry for the poor formatting. I appreciate the advice on how better to present posts. This is my first post here and I'm just getting the hang of it. My photos are still links due to my lack of privileges here. Sorry if I was too wordy, I just wanted to supply as much information as possible so as to help people understand my problem and, if a solution is found, to help people with a similar issue find this.
The telnet TCP data includes a carriage return and a linefeed at the end of the data. Apparently the receiving side needs these to be included to make things work. So change your Python string to
b'setInput "One" 1\r\n'
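Applied to the script from the question, both send calls would become (assuming the second command needs the same terminator, which the telnet capture suggests):

s.sendall(b'setInput "One" 1\r\n')
time.sleep(2)
s.sendall(b'setInput "One" 0\r\n')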
I'll try to be as clear as possible with what I'm trying to aim for.
I have a running Python script on my Raspberry Pi and I'd like multiple users to send inputs to the script remotely (through SSH or anything else that might work better).
So for example if I have this script running:
import time

Name = input("Please type in your name. \n")
type(Name)
print("Hello there", Name)
time.sleep(3)  # Pause for 3 seconds.
I want users to send names to this script remotely from devices that are connected to the same network as the Raspberry Pi.
If possible, I also want to implement the following functionalities:
Sending the output (aka the printed text) back to the specific device the input came from.
A queuing system: If multiple users send names at the same time, the script will take the names in order, one by one.
I know it's a lot to ask for, but I'd really appreciate if someone could help me get started with this by pointing me in the right direction. I've searched around quite a bit for the past few days but I haven't really come across anything that fits my needs.
Edit: I'm running this on PYTHON 3
Your comment that you would like to communicate (via the network) with the script directly opens up a world of possibilities. You have to modify your Python script a little though, because it won't communicate via stdin/stdout any longer.
I'm still not entirely sure how you want things to work, but it does sound to me like a solution based around RPC could work for you. May I suggest you have a look at Pyro4? Basically, what it does is let you make normal Python method calls, but over the network, to code running on another computer.
So you can set up a server on your Pi (that needs to run continuously) which accepts remote calls from other computers, and can then call into your python code on the pi. It can process calls in parallel or in sequence. You didn't say if you need any form of security, but some basic security features are provided (no built-in encryption or communication over TLS yet, sorry).
A simple example is here and lots more are on github so you can have a look to see if this fits your requirements?
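A minimal sketch of what that could look like with Pyro4; the class name, method, and host binding here are assumptions built around the original greeting script.

Server (runs continuously on the Pi):

import Pyro4

@Pyro4.expose
class Greeter(object):
    def greet(self, name):
        return "Hello there " + name

daemon = Pyro4.Daemon(host="0.0.0.0")  # listen on the Pi's network interfaces
uri = daemon.register(Greeter)
print("Server ready, uri:", uri)
daemon.requestLoop()

Client (on any device on the same network):

import Pyro4

uri = input("Paste the uri printed by the server: ")
greeter = Pyro4.Proxy(uri)
print(greeter.greet("Alice"))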
Another solution that doesn't require third-party libraries is to write a WSGI HTTP server that calls your script, run this on the Pi, and access it via HTTP from your other computers.
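A minimal sketch of that standard-library option using wsgiref; the port and the query-string parameter name are assumptions, and the greeting mirrors the original script:

from urllib.parse import parse_qs
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # e.g. http://<pi-address>:8000/?name=Alice
    params = parse_qs(environ.get('QUERY_STRING', ''))
    name = params.get('name', ['stranger'])[0]
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [("Hello there " + name).encode()]

make_server('0.0.0.0', 8000, app).serve_forever()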
Short version of my question:
How do I design a single Python script that can listen and respond to inputs received via HTTP or a serial port, and also initiate communications via these channels on its own? My problem is that I don't understand how to design a single script that both (i) uses a web framework to listen on some port for HTTP inputs, and (ii) also does other work that's independent of incoming HTTP requests.
Long version:
I want to use Python to design a system that does the following:
Listens to a serial port for occasional reports. Specifically, I have a network of JeeNode sensors (wireless Arduino-compatible modules) that talk to a central JeeLink, which connects to my computer via USB and talks to my Python script via pySerial.
Listens to a web URL for occasional inputs. Specifically, users send commands to the system via SMS to a Twilio number. Twilio intercepts the SMS messages and posts them to a URL I designate, and I use the Bottle micro web-framework to listen for new HTTP requests.
Responds to both types (serial and HTTP) of inputs. For example, if a user texts the command "Sleep", I want to (i) tell the sensors to go to sleep via the serial port -> JeeLink (which will then forward the command onto the remotes); and (ii) reply to the sender -- and maybe other users -- that the command has been received and is being executed.
Occasionally initiates its own communications to users (via HTTP -> Twilio -> SMS) or remote sensors (via serial -> JeeLink) without any precipitating input event. Two examples: (1) I want to report out to users or remote sensors every N minutes even if I haven't received any new inputs. (2) I want to tell users remotes have actually entered Sleep mode. Because the remotes are battery-powered, they spend most of the time in an inaccessible low-power mode. They can only receive new commands from the JeeLink when they initiate a wireless "check-in" every 5 min. So while technically remotes go to sleep (or wake up, etc.) in response to a user command, commands and responses are effectively independent.
My problem is that all of the usage examples of web frameworks I've seen seem to assume that all precipitating events occur via HTTP requests. I can create a Bottle object and use decorators to bind code to that object that gets executed whenever it sees an HTTP request matching some specified URL path. But I don't know how to do that while simultaneously doing other work that's independent of HTTP events, for example listening to the serial port.
After struggling a lot, the potential solutions I'm considering now are:
Splitting the functionality into separate scripts. A.py listens for text messages via HTTP and writes the relevant information to some database; B.py continuously reads the database for new records and reacts accordingly, as well as listening to the serial monitor and doing other work. This seems like it would work fine, but it feels inelegant, and I suspect there's a simpler solution I'm unaware of.
Maybe the answer is related to Python decorators? I use various decorators to specify the URL paths that, when a matching HTTP request comes in, execute the code bound to the decorator. So I'm guessing that maybe there's a way to specify some other kind of decorator that, rather than listening for HTTP requests, gets executed when my "main" Python code tells it to? But I don't know enough about decorators to know if this is true.
It seems like you are trying to write an asynchronous application to manage your network of nodes via HTTP. You want to respond to incoming communications on multiple channels as they occur, you want to initiate communications on a schedule, on multiple channels, and you want those two forms of communication to interact. All of these communications are with an outside world that is slow, so it behooves you not to block if you don't need to.
It will probably be easiest to maintain your system if you organize your code into several Python modules, split by their area of concern - serial interface code, HTTP interface code, common processing code-paths, etc. Weave those components together in a central control module, which imports your libraries, and knows how to start and stop cleanly. Then you can test the serial interface independent of the web interface, and potentially reuse some of those Python modules in other projects.
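As one concrete way to weave those pieces together, here is a rough sketch of a single control script: Bottle serves the Twilio webhook in a background thread and pushes incoming commands onto a queue, while the main loop polls the serial port and the queue. The URL path, serial device, baud rate, and the "Sleep" handling are all assumptions, not details from the question:

import queue
import threading

import serial                      # pySerial
from bottle import Bottle, request, run

commands = queue.Queue()
app = Bottle()

@app.route('/sms', method='POST')
def incoming_sms():
    commands.put(request.forms.get('Body', ''))  # Twilio posts the SMS text as "Body"
    return "OK"

def serve_http():
    run(app, host='0.0.0.0', port=8080, quiet=True)

threading.Thread(target=serve_http, daemon=True).start()

link = serial.Serial('/dev/ttyUSB0', 57600, timeout=1)  # JeeLink port/baud are assumptions

while True:
    report = link.readline()         # serial report, empty bytes on timeout
    if report:
        print("serial:", report)
    try:
        cmd = commands.get_nowait()  # next SMS command, if any
    except queue.Empty:
        continue
    if cmd.strip().lower() == "sleep":
        link.write(b"sleep\r\n")     # forward the command to the JeeLink/remotes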