I have a Python application ("App1") that uses the serial port /dev/ttyUSB0. The application runs as a Linux service and works very well: it is fully automated for the task I need it to perform. However, I have recently come to realize that I sometimes accidentally use the same serial port from another Python application that I am developing, which causes unwanted interference with "App1".
I did try to lock down "App1" as follows:
import fcntl
import serial

ser = serial.Serial(PORT, BAUDRATE)
fcntl.lockf(ser, fcntl.LOCK_EX | fcntl.LOCK_NB)  # exclusive, non-blocking advisory lock
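For what it's worth, here is a sketch of how a second process could notice that lock, assuming it cooperates by attempting the same fcntl lock (the lock is advisory, so it is only visible to processes that actually check for it):

import fcntl
import serial

ser = serial.Serial(PORT, BAUDRATE)
try:
    fcntl.lockf(ser, fcntl.LOCK_EX | fcntl.LOCK_NB)
except OSError:
    # Another process (e.g. "App1") already holds the lock on the port.
    ser.close()
    raise SystemExit(PORT + " is already in use by another process")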
However, in the other applications I sometimes unknowingly use
ser=serial.Serial(PORT, BAUDRATE)
without checking for ser.isOpen()
To prevent this, I was wondering: during the times I work on other applications, is there a way for ser = serial.Serial(PORT, BAUDRATE) to notify me that the serial port is already in use when I try to access it?
A solution I came up with is to create a cron job that runs forever and essentially does the following:
fuser /dev/ttyUSB0   # list the PIDs of the processes currently using /dev/ttyUSB0
kill <PID of the second application shown in the output above>   # kill the application belonging to the second PID
The above would ensure that whenever two applications use the same serial port, the second PID gets killed (I am aware there are some gaps in this logic). What do you guys think? If this is not a good solution, is there any way for ser = serial.Serial(PORT, BAUDRATE) to notify me that /dev/ttyUSB0 is already in use when I try to access it, or do I need to implement the logic at driver level? Any help would be highly appreciated!
I think the simpler thing to do there is to create a different user for the "stable process", and use plain file permissions to give that user exclusive access to /dev/ttyUSB0.
With Unix groups and still plain permissions, that process can also access any other resources it needs that live under your main user, so it should be quite simple.
If you are not aware of them, check the documentation for the commands chown and chmod.
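For illustration, a possible sequence of commands, assuming a dedicated app1 user for the stable service (the user and group names are placeholders):

sudo useradd --system app1
sudo chown app1:app1 /dev/ttyUSB0   # give that user ownership of the port
sudo chmod 600 /dev/ttyUSB0         # nobody else may read or write it

Note that udev typically recreates /dev/ttyUSB0 with default ownership when the adapter is re-plugged, so in practice you would put the chown/chmod into a udev rule or into the service's startup.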
In summary: I want to be able to interact with a non-logged-in Steam client and log it in, and I don't want to use some kind of Windows-only interaction. I don't mean SteamCLI, or logging into Steam using the steam library for Python. I mean directly interacting with the Steam client in some way and physically logging in.
When using SteamCLI and other modules, I've noticed they just log you into their own session instead of the client session that you get from physically logging into Steam.
For example:
from steam.client import SteamClient

x = SteamClient()
x.login(username="myuser", password="mypassword")  # placeholder credentials
Doesn't actually log you in, since it is its own client.
I need this because I have made a script that can connect me to servers and figure out who's on it, and it relies on you being logged in to said client.
Are there any modules/libraries that will allow me to do this? And if pywinauto is the one I should use, are there any guides you know of so I can interact with said application in a good way, even on Linux?
So here is what I ended up doing:
First you get a dummy display. I used this https://askubuntu.com/questions/453109/add-fake-display-when-no-monitor-is-plugged-in
After that you create the config (https://askubuntu.com/a/463000) and start up the display.
Next I created a systemd unit to launch Steam from the command line:
[Unit]
Description=example systemd service unit file.
[Service]
User=root
Environment=XAUTHORITY=/var/run/lightdm/root/:0
Environment=DISPLAY=:0
ExecStart=/bin/bash /root/start.sh
[Install]
WantedBy=multi-user.target
(you probably shouldn't be using root for this, but it's the easiest way I found to always have permissions for the XAUTHORITY file, so you can log in to the display)
Here is what /root/start.sh looks like:
#!/bin/sh
export DISPLAY=:0
export XAUTHORITY=/var/run/lightdm/root/:0
export PATH="$PATH:/usr/games"
steam -login username_goes_here password_goes_here steam_guard_code
Note that you may not need to do the export DISPLAY and export XAUTHORITY twice. I haven't tested it.
This should just work™: it will log in, but it will ask for a Steam Guard code, so what you should do is attempt to start the service, receive the code through email, then append it after the password.
As for doing this in Python, as I said originally, you can probably use subprocess to call start.sh, passing the Steam Guard code as a command-line argument. You need these arguments because I haven't found a way to make Steam trust the computer you use, so it might be beneficial to use Python's IMAP support to read the codes sent to the email address that owns the account and supply them via those arguments.
Theoretical Section
Please note this whole section is untested, but intended to answer my original question.
Here's what a Python file might look like (very roughly):
def start_steam(code=""):
# Assuming start.sh includes reading variables from cmd line (my example didnt)
proc = subprocess.Popen(["/bin/bash", "start.sh", code])
# This function would probably just interact iwth an IMAP server to retrieve an email sent by steam support with the code. If its not found within a certain range of time, it would theoretically just stop looking
steam_guard_code = do_something_to_find_steam_guard_code_email(timeout=30)
if code:
proc.terminate()
start_steam(code=code)
else:
# proc should continue to run in the background, until restart, that means this script should ALSO run on startup, or until there isnt a PID detected for steam
with open("/var/run/steam.pid", "w") as f:
f.write(process.pid)
exit()
This Python script would have to be run by systemd; it would be a forking process, and PIDFile= would have to be specified in the unit (I think) so that it can be monitored.
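For completeness, the do_something_to_find_steam_guard_code_email helper above could be sketched with imaplib roughly like this (equally untested; the mail host, credentials, sender address and the five-character code format are all assumptions):

import email
import imaplib
import re
import time

def do_something_to_find_steam_guard_code_email(timeout=30):
    deadline = time.time() + timeout
    while time.time() < deadline:
        mail = imaplib.IMAP4_SSL("imap.example.com")     # your mail provider
        mail.login("you@example.com", "app_password")
        mail.select("INBOX")
        # Assumed sender address; adjust to whatever Steam actually uses.
        _, data = mail.search(None, '(UNSEEN FROM "noreply@steampowered.com")')
        for num in data[0].split():
            _, msg_data = mail.fetch(num, "(RFC822)")
            body = email.message_from_bytes(msg_data[0][1]).get_payload(decode=True)
            match = re.search(rb"\b([A-Z0-9]{5})\b", body or b"")
            if match:
                return match.group(1).decode()
        mail.logout()
        time.sleep(5)
    return ""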
Edit: this solution has become even more theoretical, since it is not possible to pass the Steam Guard code to steam -login username password. You would need to disable Steam Guard.
Preconditions:
I want to execute multiple dynamic commands via ssh from Python, on one remote machine at a time
I couldn't find any existing modules matching my "flavour" (If you care why, see below (*) ;))
The Python scripts run locally on an Ubuntu machine
In general, for single "one-action" calls I simply do a native ssh call using subprocess.Popen, and it works fine.
But for multiple subsequent dynamic calls, I don't want to create a new ssh connection for every command, even if the remote host might allow it. I thought of the following solution:
1) Configure my local ssh on Ubuntu to use multiplexing, so that as long as a connection is open, it is used instead of creating a new one (https://www.admin-magazin.de/News/Tipps/Mit-SSH-Multiplexing-schneller-einloggen (sorry, in German))
2) Create an ssh connection by opening it in a running background thread, in which nothing is done besides maybe a "keepalive" if necessary, and keep the connection open until it is closed (i.e. by stopping the thread). (http://sebastiandahlgren.se/2014/06/27/running-a-method-as-a-background-thread-in-python/ )
3) Still execute ssh calls simply via subprocess.Popen, but now automatically reusing the open connection thanks to the ssh multiplexing config.
Should this work, or is there a fallacy alert?
(*) What I don't want:
Most solutions/examples I found used paramiko. On my first "happy path" it worked like a charm, but the first failure test resulted in an internal AttributeError (https://github.com/paramiko/paramiko/issues/1617) and I don't want to build anything on that.
Other libs I found, e.g. http://robotframework.org/SSHLibrary/SSHLibrary.html, don't seem to have a real community using them.
pexpect... the whole "expect" concept gives me the creeps and should, in my opinion, only be used if there's absolutely no reasonable alternative ;)
What you've proposed is fine, but you don't even need to keep an ssh connection running in a background thread. If you configure ControlMaster (for reusing an existing connection) and ControlPersist (for keeping the master connection open even after all other connections have closed), then new ssh connections will continue to use the shared connection (as long as they happen before the ControlPersist timeout expires).
This means that if you set up the ControlMaster configuration external to your code (e.g., in ~/.ssh/config), your code doesn't even need to be aware of the configuration: it can just continue to call ssh normally, and ssh will take care of reusing the connection.
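For example, a stanza like this in ~/.ssh/config enables the shared connection (host alias, address and timeout are placeholders):

Host myremote
    HostName 192.168.1.50
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m

With that in place, the Python side can stay a plain subprocess call; every invocation after the first one reuses the master connection:

import subprocess

def run_remote(command):
    # Looks like a fresh ssh call, but OpenSSH multiplexes it over the
    # existing master connection to "myremote".
    return subprocess.run(["ssh", "myremote", command],
                          capture_output=True, text=True, check=True)

print(run_remote("uname -a").stdout)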
I have set up a Raspberry Pi connected to an LED strip which is controllable from my phone via a Node server I have running on the RasPi. It triggers a simple python script that sets a colour.
I'm looking to expand the functionality so that I have a Python script running continuously, to which I can send colours; it will consume the new colour and display both the old and new colour side by side. I.e. the Python script can receive commands and manage state.
I've looked into whether to use a simple loop or a daemon for this, but I don't understand how to both run a script continuously and receive new commands.
Is it better to keep state in the Node server and keep sending a lot of simple commands to a basic python script or to write a more involved python script that can receive few simpler commands and continuously update the lights?
IIUC, you don't necessarily need to have the Python script running continuously. It just needs to store state, and you can do this by writing the state to a file. The script can then just read the last state file at startup, decide what to do from there, perform the action, then update the state file.
In case you do want to actually run the script continuously though, you need a way to accept commands. The simplest way for a daemon to accept commands is probably through signals; you can use custom signals, e.g. SIGUSR1 and SIGUSR2, to send and receive these notifications. This may be sufficient if your daemon only needs to accept very simple requests.
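A minimal sketch of the signal approach (the colour values are just placeholders):

import signal
import time

state = {"colour": "off"}

def set_red(signum, frame):
    state["colour"] = "red"    # update the LED strip here

def set_blue(signum, frame):
    state["colour"] = "blue"

signal.signal(signal.SIGUSR1, set_red)
signal.signal(signal.SIGUSR2, set_blue)

while True:
    time.sleep(1)              # the real loop keeps driving the strip

The Node server (or anything else) can then switch colours with kill -USR1 <pid> or kill -USR2 <pid>.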
For more complex requests where you need to actually accept messages, you can listen on a Unix socket or a TCP socket. The socket module in the standard library can help you with that. If you want to build a more complex command server, then you may even want to consider running a full HTTP server, though that looks like overkill for the current situation.
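And a sketch of the Unix socket variant, where the Node server writes a colour name to the socket (the path and the one-line protocol are made up for the example):

import os
import socket

SOCK_PATH = "/tmp/led.sock"
if os.path.exists(SOCK_PATH):
    os.remove(SOCK_PATH)

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(SOCK_PATH)
server.listen(1)

while True:
    conn, _ = server.accept()
    colour = conn.recv(64).decode().strip()
    print("new colour:", colour)   # show old/new colour on the strip here
    conn.close()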
Is it better to keep state in the Node server and keep sending a lot of simple commands to a basic python script or to write a more involved python script that can receive few simpler commands and continuously update the lights?
There's no straightforward answer to that. It depends on the case: how complex the state is, how frequently you need to change colour, how familiar you are with the languages, etc.
Another option is to have the Node app call the Python script as a child process, pass it any needed vars, and read Python's output as well, like so:
var exec = require('child_process').exec;
var child = exec('python file.py var1 var2', function (error, stdout, stderr) {
    console.log(stdout);  // whatever the Python script prints
});
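On the Python side, the hypothetical file.py only has to read its arguments and print something; whatever it prints shows up as stdout in the Node callback above:

import sys

old_colour, new_colour = sys.argv[1], sys.argv[2]
# Drive the LED strip here, then report back to the Node server.
print("old colour:", old_colour, "new colour:", new_colour)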
I have a Python application, to be more precise a network application, that can't go down. This means I can't kill the PID, since it actively talks with other servers and clients and so on... many € per minute of downtime, you know, the usual 24/7 system.
Anyway, in my hobby projects I also work a lot with WSGI frameworks, and I noticed that I have the same problem even during off-peak hours.
Anyway, imagine a normal server using TCP/UDP (put here your favourite WSGI/SIP/Classified Information Server/etc.).
Now you perform a git pull on the remote server and the new Python files land there (these files will of course ONLY affect the data processing and not the actual sockets, so there is no need to re-create the sockets or touch the network part in any way).
I don't usually use file monitors, since I prefer to use a signal to wake up the internal app updater.
Now imagine the following code:
from mysuper.app import handler

while True:
    data = socket.recv()
    if data:
        socket.send(handler(data))
Let's imagine that handler is an app with DB connections, cache connections, etc.
What is the best way to update the handler?
Is it safe to call reload(handler)?
Will this break DB connections?
Will DB connections survive this restart?
Will current transactions be lost?
Will this create anti-matter?
What are the best-practice patterns that you guys usually use, if there are any?
It's safe to call reload() here, with one caveat: reload() takes a module rather than a function, so you would reload mysuper.app (importlib.reload in Python 3) and keep calling handler through the module, instead of calling reload on handler itself.
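A minimal sketch of that, assuming Python 3 and the module layout from the question:

import importlib
import signal

import mysuper.app   # the module that defines handler()

def reload_app(signum, frame):
    # Re-import the module in place; the process, its sockets and anything
    # defined outside mysuper.app keep running untouched.
    importlib.reload(mysuper.app)

signal.signal(signal.SIGUSR1, reload_app)

The main loop then calls mysuper.app.handler(data) through the module attribute instead of a from mysuper.app import handler binding, so after the reload it automatically picks up the new function.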
Depends on where you initialize your connections. If you make the connections inside handler(), then yes, they'll be garbage collected when handler() returns and they fall out of scope. But you wouldn't be connecting inside your main loop, would you? I'd highly recommend something like:
dbconnection = connect(...)

while True:
    ...
    socket.send(handler(data, dbconnection))
if for no other reason than that you won't be making an expensive connection inside a tight loop.
That said, I'd recommend going with an entirely different architecture. Make a listener process that does basically nothing more than listen for UDP datagrams, send them to a message queue like RabbitMQ, then wait for the reply message to send the results back to the client. Then write your actual servers that get their requests from the message queue, process them, and send a reply message back.
If you want to upgrade the UDP server, launch the new instance listening on another port. Update your firewall rules to redirect incoming traffic to the new port. Reload the rules. Kill the old process. Voila: seamless cutover.
The real win is from uncoupling your backend. Since multiple processes can listen for the same messages from your frontend "proxy" service, you can run several in parallel, on different machines if you want to. To upgrade the backend, start a new instance and then kill the old one, so that at least one instance is always running.
To scale your proxy, have multiple instances running on different ports or different hosts, and configure your firewall to randomly redirect incoming datagrams to one of the proxies.
To scale your backend, run more instances.
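A very rough sketch of one of those backend workers, assuming RabbitMQ with the pika client (queue name, host and the processing are placeholders, and the reply routing is left out):

import pika

def handle(body):
    # The real data processing (DB work, etc.) goes here.
    return body.upper()

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="requests")

def on_message(ch, method, properties, body):
    result = handle(body)
    print("processed:", result)   # a real worker would publish this to a reply queue

channel.basic_consume(queue="requests", on_message_callback=on_message, auto_ack=True)
channel.start_consuming()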
I want to write a Python script that will check the user's local network for other instances of the script currently running.
For the purposes of this question, let's say that I'm writing an application that runs solely via the command line, and will just update the screen when another instance of the application is "found" on the local network. Sample output below:
$ python question.py
Thanks for running ThisApp! You are 192.168.1.101.
Found 192.168.1.102 running this application.
Found 192.168.1.104 running this application.
What libraries/projects exist to help facilitate something like this?
One way to do this would be for the application in question to broadcast UDP packets, with your application receiving them from the different nodes and then displaying them. The Twisted networking framework provides facilities for doing such a job, and its documentation includes some simple examples too.
Well, you could write something using the socket module. You would have to have two programs though: a server on the user's local computer, and a client program that interfaces with the server. The server would also use the select module to listen for multiple connections. The client program would send something to the server when it is run, or whenever you want it to. The server could then print out which connections it is maintaining, including details such as the IP address.
This is documented extremely well at this link, more so than you need, but it will explain it to you as it did to me: http://ilab.cs.byu.edu/python/
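A rough sketch of that server side with select (the port and the print-on-connect behaviour are arbitrary choices):

import select
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("", 9999))
server.listen(5)

connections = [server]
while True:
    readable, _, _ = select.select(connections, [], [])
    for sock in readable:
        if sock is server:
            conn, addr = server.accept()
            connections.append(conn)
            print("Found", addr[0], "running this application.")
        else:
            if not sock.recv(1024):       # client disconnected
                connections.remove(sock)
                sock.close()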
You can try broadcast UDP; I found an example here: http://vizible.wordpress.com/2009/01/31/python-broadcast-udp/
You can have a server-based solution: a central server where clients register themselves and query for other registered clients. A server framework like Twisted can help here.
In a peer-to-peer setting, push technologies like UDP broadcasts can be used, where each client puts out a heartbeat packet on the network every so often for the others to receive. Basic modules like socket would help with that.
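A bare-bones heartbeat sketch with the standard socket module (port number and payload are arbitrary); every peer calls the sender periodically and runs the listener in a loop:

import socket

PORT = 50000

def send_heartbeat():
    # Shout "I am here" to the whole local network.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(b"ThisApp", ("<broadcast>", PORT))
    s.close()

def listen_for_peers():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    while True:
        data, (addr, _) = s.recvfrom(1024)
        if data == b"ThisApp":
            print("Found", addr, "running this application.")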
Alternatively, you could go for a pull approach, where the interested peer needs to discover the others actively. This is probably the least straightforward. For one, you need to scan the network, i.e. find out which IPs belong to the local network, and go through them. Then you would need to contact each IP in turn. If your program opens a TCP port, you could try to connect to it and find out whether your program is running there. If you want your program to be completely ignorant of these queries, you might need to open an ssh connection to the remote IP and scan the process list for your program. All this might involve various modules and libraries. One you might want to look at is execnet.