So I have this script that connects over serial to a Linux machine while it's booting, in order to change BIOS settings. It's a messy bash script that drives minicom. I thought I'd clean things up and rewrite the script with pyserial.
So far the basic VT100 commands work: '\x1B[A' for the up arrow key, '\r' for enter, '\x1B' for escape. Good, that covers most of the navigating and changing of BIOS settings.
But the VT220 commands (which work in the bash script that uses minicom) don't work at all. Debug output shows they get sent, but they do nothing. Examples are '\x1B[3~' for delete and '\x1B[18~' for F7. I need these to actually enter the BIOS screen.
For the delete button I tried multiple variations:
b'\x1B[3~'
b'\x1B\x5B\x33\x7E'
'\[3~'
b'\[3~' (the last two use the actual ESC character, which isn't showing up right here)
None of them work. I thought maybe I needed a different code in the Python environment, so I fired up miniterm and sent the delete key manually with the debug filter. It works, and the miniterm debug output says it sent '\x1B[3~', so I'm definitely using the right code.
For some extra information: I'm setting up the serial port with defaults and using the basic serial.write() function to send the commands.
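For reference, a stripped-down sketch of roughly what the script does (port name and baud rate are placeholders for my actual settings):

import serial  # pyserial

ser = serial.Serial('/dev/ttyUSB0', 115200)  # everything else left at defaults

ser.write(b'\x1b[A')   # VT100 up arrow -- works
ser.write(b'\x1b[3~')  # VT220 delete -- sent according to debug, but ignored
ser.flush()            # block until all written bytes are transmitted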
It doesn't really make sense to me that VT220 itself would be the issue, since terminal-type support should depend entirely on the remote end. I have a feeling the longer commands are maybe being sent in multiple chunks, so the remote side receives '\x1B[3' + '~' rather than '\x1B[3~' as one sequence, but that's just a baseless hunch and I don't know how to verify it.
Any ideas?
Good morning :)
I have some questions regarding an icecast setup.
We have my own Nextcloud server sitting in our church. It works perfectly fine, and since community services here in Germany were already banned once during all this Covid19 stuff, we want to have a streaming setup. I managed to set up an icecast instance on our server, and we use my old laptop with Rocket Broadcaster and a Steinberg CI2 to provide the source for the stream. That all works as intended. We have already used it once, because we stopped public services for two weeks after one of our members tested positive after being abroad for a week.
Our operator at the PA doesn't want to have another display there, which would distract him from listening to the sermon.
My project regarding this: I have an RPi4 and a Behringer U-Phoria UMC202HD. The input at the moment is a standard mic connected to the interface.
The Pi is configured with darkice and uses its own icecast installation while I am testing everything at home. Since I started the streaming project, we have switched Rocket Broadcaster after the service to use VLC and a folder of old service recordings (mastered, in MP3 format) as a source to listen to while we are at work or travelling. This option gets used pretty regularly and I want to keep it going.
My plan is to have a little box with an LED level meter that shows the input level. That should be done with a little Python script. I also want to add two buttons that load presets for the two setups. Button 1: kill the current source and start the darkice livestream. Button 2: kill the current source and start the playback of old recordings. Both options should give visual feedback to the operator. The Raspberry Pi has to work headless, without needing a VNC or SSH connection for normal usage.
My problem: I tried:
sudo killall darkice && /home/pi/darkice.sh
This code will get changed, because I probably have to use ices for the MP3 playback. So basically it will kill darkice and start the ices playback, and vice versa (for now it only restarts the stream in a blink).
The bash file exists and gets executed at reboot via a cronjob; that works well. But when I execute the above-mentioned killall command, icecast continues the broadcast almost instantly, yet the stream stops on the clients. Everyone needs to restart it.
Do I have an option to change the setup so that I can switch between the two sources without everyone needing to restart?
My plan was to create a bash script that does all of this and to trigger it via GPIO input and Python code, roughly like the sketch below.
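For illustration, the rough shape of the button handling I have in mind, using the gpiozero library (pin numbers are arbitrary, and /home/pi/ices.sh is a hypothetical playback script next to the existing darkice.sh):

import subprocess
from signal import pause
from gpiozero import Button, LED

live_button = Button(17)       # Button 1: switch to the darkice livestream
playback_button = Button(27)   # Button 2: switch to playback of old recordings
live_led = LED(5)              # visual feedback for the operator
playback_led = LED(6)

def start_live():
    subprocess.run(['sudo', 'killall', 'ices'])      # stop the playback source
    subprocess.Popen(['/home/pi/darkice.sh'])        # start the livestream
    live_led.on()
    playback_led.off()

def start_playback():
    subprocess.run(['sudo', 'killall', 'darkice'])   # stop the livestream
    subprocess.Popen(['/home/pi/ices.sh'])           # hypothetical playback script
    playback_led.on()
    live_led.off()

live_button.when_pressed = start_live
playback_button.when_pressed = start_playback
pause()  # keep the script running headless, waiting for button presses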
Thanks in advance!
Preconditions:
I want to execute multiple dynamic commands via ssh from Python, on one remote machine at a time
I couldn't find any existing modules matching my "flavour" (If you care why, see below (*) ;))
The Python scripts run locally on an Ubuntu machine
In general, for single "one action" calls I simply do a native ssh call using subprocess.Popen, and it works fine.
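For illustration, what I mean by that (host and command are placeholders):

import subprocess

# One-shot call: a brand-new ssh connection for a single remote command.
proc = subprocess.Popen(['ssh', 'user@remotehost', 'uptime'],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
print(out.decode())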
But for multiple subsequent dynamic calls, I don't want to create a new ssh connection for every command, even if the remote host might allow it. I thought of the following solution:
1) Configure my local ssh on Ubuntu to use multiplexing, so that as long as a connection is open it is reused instead of creating a new one (https://www.admin-magazin.de/News/Tipps/Mit-SSH-Multiplexing-schneller-einloggen (sorry, in German))
2) Create an ssh connection by opening it in a background thread that itself does nothing, besides maybe a "keepalive" if necessary, and keep the connection open until it's closed (i.e. by stopping the thread). (http://sebastiandahlgren.se/2014/06/27/running-a-method-as-a-background-thread-in-python/)
3) Still execute ssh calls simply via subprocess.Popen, but now automatically reusing the open connection thanks to the ssh multiplexing config.
Should this work, or is there a fallacy alert?
(*)What I don't want:
Most solutions/examples I found used paramiko. On my first "happy path" it worked like a charm, but the first failure test resulted in an internal AttributeError (https://github.com/paramiko/paramiko/issues/1617) and I don't want to build anything on that.
Other libs I found, e.g. http://robotframework.org/SSHLibrary/SSHLibrary.html, don't seem to have a real community using them.
pexpect... the whole "expect" concept gives me the creeps and should, in my opinion, only be used if there's absolutely no other reasonable option ;)
What you've proposed is fine, but you don't even need to keep an ssh connection running in a background thread. If you configure ControlMaster (for reusing an existing connection) and ControlPersist (for keeping the master connection open even after all other connections have closed), then new ssh connections will keep using the shared connection (as long as they happen before the ControlPersist timeout).
This means that if you set up the ControlMaster configuration outside your code (e.g., in ~/.ssh/config), your code doesn't even need to be aware of it: it can just continue to call ssh normally, and ssh will take care of reusing the connection.
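For example, a minimal ~/.ssh/config entry could look like this (host name, ControlPath location and the 10-minute timeout are just illustrative values):

Host myserver
    HostName myserver.example.com
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m

With that in place, repeated subprocess calls like the sketch below all ride over the shared master connection instead of performing a fresh handshake each time:

import subprocess

# Every ssh invocation reuses the multiplexed master connection as long
# as it happens within the ControlPersist window (commands are examples).
for cmd in ['uptime', 'df -h']:
    result = subprocess.run(['ssh', 'myserver', cmd],
                            capture_output=True, text=True)
    print(result.stdout)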
I'll try to be as clear as possible about what I'm aiming for.
I have a running Python script on my Raspberry Pi and I'd like multiple users to send inputs to the script remotely (through SSH or anything else that might work better).
So for example if I have this script running:
import time

name = input("Please type in your name.\n")
print("Hello there,", name)
time.sleep(3)  # Pause for 3 seconds.
I want users to send names to this script remotely from devices that are connected to the same network as the Raspberry Pi.
If possible, I also want to implement the following functionalities:
Sending the output (aka the printed text) back to the specific device the input came from.
A queuing system: If multiple users send names at the same time, the script will take the names in order, one by one.
I know it's a lot to ask for, but I'd really appreciate it if someone could help me get started with this by pointing me in the right direction. I've searched around quite a bit for the past few days, but I haven't really come across anything that fits my needs.
Edit: I'm running this on Python 3.
Your comment that you would like to communicate with the script directly over the network opens up a world of possibilities. You will have to modify your Python script a little though, because it won't communicate via stdin/stdout any longer.
I'm still not entirely sure how you want things to work, but it sounds to me like a solution based around RPC could work for you. May I suggest you have a look at Pyro4? Basically, what it does is let you make normal Python method calls, but over the network, to code running on another computer.
So you can set up a server on your Pi (which needs to run continuously) that accepts remote calls from other computers and can then call into your Python code on the Pi. It can process calls in parallel or in sequence. You didn't say whether you need any form of security; some basic security features are provided (no built-in encryption or communication over TLS yet, sorry).
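To give an idea, here is a minimal sketch of such a setup (object, method and greeting names are just illustrative, not taken from your code):

# server.py -- runs continuously on the Pi
import Pyro4

@Pyro4.expose
class Greeter(object):
    def greet(self, name):
        return "Hello there, " + name

daemon = Pyro4.Daemon(host="0.0.0.0")  # listen on the Pi's network interface
uri = daemon.register(Greeter)         # register the object and get its URI
print("Server ready, URI:", uri)
daemon.requestLoop()                   # serve remote calls until stopped

# client.py -- runs on any device on the same network
import Pyro4

greeter = Pyro4.Proxy("PYRO:...")  # paste the URI printed by the server
print(greeter.greet("Alice"))      # prints: Hello there, Alice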
A simple example is here and lots more are on GitHub, so you can have a look to see whether this fits your requirements.
Another solution that doesn't require third-party libraries is to write a WSGI HTTP server that calls your script, run it on the Pi, and access it via HTTP from your other computers.
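A rough sketch of that idea using only the standard library (the 'name' query parameter is just an example):

from wsgiref.simple_server import make_server
from urllib.parse import parse_qs

def app(environ, start_response):
    # Read the 'name' parameter from a request like http://pi:8000/?name=Alice
    params = parse_qs(environ.get('QUERY_STRING', ''))
    name = params.get('name', ['stranger'])[0]
    body = ("Hello there, " + name).encode('utf-8')
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [body]

# Serve on all interfaces so other devices on the network can reach it.
make_server('', 8000, app).serve_forever()

As a side effect, this simple single-threaded server handles requests one at a time, which gives you the basic queuing behaviour you asked about.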
So I've decided to learn Python, and after getting a handle on the basic syntax of the language I decided to write a "practice" program that uses various modules.
I already have a basic curses interface made, but before I get too far I want to make sure that I can redirect standard input and output over a network connection. In effect, I want to be able to "serve" this curses application over a TCP/IP connection.
Is this possible and if so, how can I redirect the input and output of curses over a network socket?
This probably won't work well. curses has to know what sort of terminal (or terminal emulator, these days) it's talking to, in order to choose the appropriate control characters for working with it. If you simply redirect stdin/stdout, it's going to have no way of knowing what's at the other end of the connection.
The normal way of doing something like this is to leave the program's stdin/stdout alone, and just run it over a remote login. The remote access software (telnet, ssh, or whatever) will take care of identifying the remote terminal type, and letting the program know about it via environment variables.
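As a small illustration, a curses program run over an ssh login picks the right control sequences automatically, because ssh has already set the TERM variable for the session:

import curses
import os

def main(stdscr):
    # curses chose its control sequences based on $TERM, which the remote
    # login software (ssh, telnet, ...) set when the session started.
    stdscr.addstr(0, 0, "TERM is: " + os.environ.get("TERM", "unset"))
    stdscr.refresh()
    stdscr.getkey()  # wait for a keypress before restoring the terminal

curses.wrapper(main)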
I've written a Python script that can log into multiple devices sequentially by reading a csv file listing the different IP addresses. From there it outputs a file for each device with the content from a few commands that are passed to the devices via the script. So I've come pretty far. A problem I'm running into is that sometimes the script hangs, because some devices have different software revisions and don't support certain commands that are being passed to them. The difference I'm focusing on is the prompt after login. For example, logging into device type A gives a command prompt of xyz#. Device type B has a command prompt of abc:. It's the same manufacturer, just a different model and/or software rev. Depending on the command prompt, I know which commands I can run on that device without the script hanging. So what I need to be able to do is, after a successful login, run a specific set of commands depending on the command prompt I get.
I can post some of my code if that would help, but what I'm really looking to find out is whether this is even possible, and if so, to get some pointers and a few suggestions on what I might try. After working with Python for a few months I know there has to be a way to do this. I usually don't post, because I can work through others' posts and develop a working solution, but I've been working on this for a while and haven't been able to piece it together, so I'm looking for an assist.
-Shane
EDIT
At this point I'm still unable to write code that would determine the command prompt, at least while the telnet session is up. I can telnet in, run some commands and close the session. I can then write the results to a file, and from there read the file to determine the prompt. But ideally I'd like to open a telnet session, run a command to determine the prompt while the session is still open, read the prompt while the session is up, and then run specific commands based on it.
The issue seems to be that I can't read any command output while the telnet session is still up. The session has to close, and then all output is written to a file. Then I read the file to determine the command prompt, decide which commands to run based on the prompt, open a new telnet session and run those commands.
Should I accept the fact that I have to close the telnet session, write the data to a file, read it to determine the prompt, and then loop back through the login part of the script again? Or am I missing something? I'm not sure if I'm being clear in my description.
I would implement the commands using a common interface and then use a dictionary to retrieve them when I know what system I am connected to:
# command set for system xyz#
def copy1(src, dest):
    pass

def list1():
    pass

# command set for system abc:
def copy2(src, dest):
    pass

def list2():
    pass

cmdDict = {
    # prompt: command set
    'xyz#': [copy1, list1],
    'abc:': [copy2, list2],
}

...

# guess the right commands from the prompt we have read
copyCommand = cmdDict[detected_prompt][0]
listCommand = cmdDict[detected_prompt][1]

...

# proceed normally
listCommand()
copyCommand(f1, g1)
copyCommand(f2, g2)
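To get detected_prompt while the telnet session is still open, telnetlib's expect() can watch the live output for the first match among several candidate patterns, so there is no need to close the session and parse a file afterwards. A rough sketch (host, credentials and the login prompts are placeholders for your environment):

import telnetlib

tn = telnetlib.Telnet('192.0.2.1', timeout=10)
tn.read_until(b'login: ')
tn.write(b'admin\n')
tn.read_until(b'Password: ')
tn.write(b'secret\n')

# expect() blocks until one of the candidate prompts appears in the live
# session output and reports which one matched (index is -1 on timeout).
index, match, text = tn.expect([b'xyz#', b'abc:'], timeout=10)
if index == -1:
    raise RuntimeError('no known prompt detected')
detected_prompt = 'xyz#' if index == 0 else 'abc:'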