Can I set the twistd pid filename inside the tac configuration file? - python

I can set the filename of the .pid file by supplying the --pidfile= option to twistd. Is there a way I can specify it inside a .tac file instead?
Context:
My twisted service is a bot that plays a game and accepts multiple parameters, such as its name, skill level, etc. I am creating a .tac file for each bot (multiple bots can run concurrently) so that each specific bot always has the same parameters, and I can launch it with twistd -y botname.tac.
I'd like the pid file to be of the form <bot_nick>.pid so that different bots don't use the same pid file, and also so that I can see which bots are running just by listing the pid files. Is there a way I could set this in the .tac file itself, or do I always have to specify it manually in the twistd command-line options, like twistd -y bot1.tac --pidfile=bot1.pid?

A .tac file is intended to be a description of a service that can be run; whereas the options to twistd are options about how to run a service. Therefore it doesn't make sense to put the pidfile filename, or the logging configuration, or anything like that, into a .tac file. In this case, the .pid file has already been written by the time your .tac file is being read, so there is no possible way to do it, even as a workaround.
If you want to write a specialized configuration system, it's better to write a tool that uses twistd like a library, like this example from the axiomatic tool that ships with the Axiom database. The interface could stand to be better, of course – currently, you actually have to synthesize a literal command line using strings, as well as subclass – but this lets you have very fine-grained control over how your service runs, without trying to hack up other bits of global state just because .tac files happen to be Python.
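For illustration, here is a minimal sketch of that "twistd as a library" approach, assuming one small launcher script per bot; the bot-name convention and the .tac/.pid naming are assumptions for this example, not anything twistd prescribes:
import sys
from twisted.scripts.twistd import run

botname = sys.argv[1] if len(sys.argv) > 1 else 'bot1'
# Synthesize the command line that twistd expects, then hand over control.
sys.argv[1:] = ['--pidfile', botname + '.pid', '-y', botname + '.tac']
run()
This keeps the pid-file naming policy in one place, without touching the .tac files themselves.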

Related

How to handle calling a Python notification script multiple times?

I am trying to write a script to automate my backups under Linux, and I would like to have some kind of system tray notification (KDE) that a backup is running.
After reading this other SE post and doing some research, I cannot seem to find a D-Bus library for bash, so instead I am thinking of tweaking the Python script from his answer and making it into an add-on for my main backup script, by having my bash backup script repeatedly call the Python notification script to create, update, and remove the notification when the backup is done.
However, I'm not quite sure how to implement this on the Python side, since if I were to just call python3 notify.py argument1 argument2 from bash, it would create a new instance of the Python script every time.
Essentially, here's what I'm trying to do in my bash script:
# awesome backup script
./notification.py startbackup  # creates a new instance of the Python script and sets up the KDE progress bar, possibly returning some kind of ID that is reused later?
# do backup things here...
# periodically:
./notification.py updateProgress 10%
./notification.py updateProgress 20%
# etc...
# finish the backup...
./notification.py endbackup  # set the progress bar to complete and do cleanup
Since I haven't done anything like this before and am not sure what to search for, I am wondering how I would go about implementing something like this in the Python/bash setup I have now.
That is, if I were to make a bash variable to store an instance ID returned from the first call to the Python script and pass it back on each subsequent call, how would I have to write my Python script so that it handles this and acts on the notification created originally, rather than creating new ones?
Either keep the process running and send commands to it through a pipe, or use a file to store the instance ID.
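As a minimal sketch of the first option (assuming a named pipe; the FIFO path is illustrative and the command names are taken from the question), notification.py could look like:
import os

FIFO = '/tmp/backup-notify.fifo'

def main():
    if not os.path.exists(FIFO):
        os.mkfifo(FIFO)
    while True:
        # Opening the FIFO for reading blocks until a writer connects;
        # each iteration handles one batch of commands.
        with open(FIFO) as pipe:
            for line in pipe:
                parts = line.split()
                if not parts:
                    continue
                if parts[0] == 'startbackup':
                    pass  # create the KDE notification here
                elif parts[0] == 'updateProgress':
                    pass  # update the progress bar to parts[1]
                elif parts[0] == 'endbackup':
                    return  # mark complete, clean up, and exit

if __name__ == '__main__':
    main()
The bash script then starts notification.py once in the background and writes commands to the pipe instead of re-running the script:
./notification.py &
echo 'updateProgress 20%' > /tmp/backup-notify.fifo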

In Python, how can I read a file stored in a database instead of the filesystem and use it as a parameter to Popen?

I am running a command from Python using subprocess.Popen. The command has a parameter that indicates the path to the file it will use, and the file can only be specified via that parameter.
The command would be something like:
command -file /path/to/file
I cannot do something like:
cat /path/to/file | command -file
The problem is that I have that file in a database, and I read it from my Python app. I would like to avoid temporarily saving the file to disk just to specify it in the command.
Is there any way to pipe that file in memory to Popen as argument for that parameter?
Thanks!
EDIT:
OK, I should have specified the command and the code, but I did not have them at that moment.
I am trying to do an OCSP request for a certificate using openssl. I know there is an OpenSSL module for Python, but I want to avoid extra Python modules.
This is the Python code:
from subprocess import Popen, PIPE

openssl = '/path/to/openssl'
ossl = Popen([openssl, 'ocsp', '-no_cert_verify', '-issuer', 'cacert.cer',
              '-serial', '0x1234', '-url', url], stdout=PIPE, stderr=PIPE)
I have cacert.cer stored in a database. I have not seen any way to pass a piped argument to -issuer, and I was hoping there was some way to pipe it using subprocess.PIPE.
If the command, as you say, expects a path to a file in the file system in order to operate on certain data, then there is only one way to provide it with such data: have a file in the file system. If that file does not exist yet, because the data is in the database rather than in an individual file, then logically you have to create that file before invoking the command. Note, however, that there are certain tricks, especially on POSIX-compliant systems, for creating such files efficiently. One of these tricks was mentioned by skyjur.
If it's Unix, you can use a named pipe. It works like a pipe but uses a path on the filesystem.
https://docs.python.org/2/library/os.html#os.mkfifo
Create the fifo using os.mkfifo()
Start Popen and pass the fifo path as an argument
Write the content from the database into the fifo
Don't forget to call .close() on the opened fifo file; otherwise the process started with Popen() will never see the end of the file.
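Putting that together for the openssl example above, a sketch (cert_bytes stands in for the certificate read from the database; the URL and temp directory are illustrative, and this approach assumes the command reads the file sequentially, exactly once):
import os
import tempfile
import threading
from subprocess import Popen, PIPE

cert_bytes = b'...PEM data from the database...'  # hypothetical
url = 'http://ocsp.example.com'                   # hypothetical

fifo_dir = tempfile.mkdtemp()
fifo_path = os.path.join(fifo_dir, 'cacert.cer')
os.mkfifo(fifo_path)

def feed():
    # open() for writing blocks until openssl opens the fifo for reading;
    # closing it is what signals end-of-file to openssl.
    with open(fifo_path, 'wb') as fifo:
        fifo.write(cert_bytes)

writer = threading.Thread(target=feed)
writer.start()
ossl = Popen(['openssl', 'ocsp', '-no_cert_verify', '-issuer', fifo_path,
              '-serial', '0x1234', '-url', url], stdout=PIPE, stderr=PIPE)
out, err = ossl.communicate()
writer.join()
os.unlink(fifo_path)
os.rmdir(fifo_dir)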

How to open a new log file for certain services in twisted

I'm running a Python Twisted application with lots of different services, and the log file for that application is pretty crowded with all kinds of output. So, to better see what is going on in one specific service, I would like to log messages for that service only to a different log file. But I can't figure out how to do this.
For my application, I am using a shell script run.sh that calls twistd as follows:
twistd --logfile /var/log/whatever/path/mylogfile.log -y myapplication.py
The file myapplication.py launches all the services in the application, one of which is the service I am interested in. That service has all its code in the file myservice.py.
So, is there any way to specify a new log file just for my service? Do I do this in myapplication.py when I launch the service, or do I do it with some Python code in myservice.py?
Having seen systems that use more than one log file, I would strongly urge you not to go in this direction.
Guy's answer sounds like it is more in the right direction. To go into even more detail, though, consider using a structured log format such as the one provided by structlog (which includes Twisted integration).
Once entries in your log file are structured, you will have a chance of building tools that work with them. The example Guy gave of using grep to find the events related to the service you're concerned with is a step in this direction. If you go further and say that each log event will be (for example) a JSON-encoded object, then you can parse each line and apply arbitrarily complex filtering logic to the resulting objects.
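For instance, if each log line is a JSON object, the per-service filter becomes a few lines (a sketch; the 'system' field name is an assumption about how you tag events):
import json

with open('mylogfile.log') as f:
    for line in f:
        try:
            event = json.loads(line)
        except ValueError:
            continue  # skip lines that aren't JSON
        if event.get('system') == 'myservice':
            print(event)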
Consider the following two options:
Depending on the log line format, when viewing or tailing the log, do something like:
tail -f mylogfile.log | grep <something unique like your service name?>
Configure Twisted to use the Python standard library logging module and tunnel log messages there; see "Using the standard library logging module" in the Twisted documentation.
It appears you could create a twisted.python.log.LogPublisher for each service and attach a FileLogObserver to it for the actual writing into a file.
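A sketch of that idea, using the legacy twisted.python.log API (the log file name and directory are illustrative):
from twisted.python import log, logfile

# A per-service publisher with its own file observer.
service_log = log.LogPublisher()
f = logfile.LogFile('myservice.log', '/var/log/whatever/path')
service_log.addObserver(log.FileLogObserver(f).emit)

# Inside myservice.py, log through the service's own publisher:
service_log.msg('this line goes only to myservice.log')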

Python: check whether a file is still being uploaded

Python 2.6
My script needs to monitor some 1 GB files on an FTP server; whenever one is changed or modified, the script will download it to another place. The file names remain unchanged: people delete the original file on the FTP server first, then upload a newer version. My script checks file metadata, like file size and date modified, to see if there is any difference.
The question is that when the script checks the metadata, the new file may still be being uploaded. How do I handle this situation? Is there a file attribute that indicates upload status (like the file being locked)? Thanks.
There is no such attribute. You may be unable to GET such a file, but that depends on the server software. Also, file access flags may be set one way while the file is being uploaded and then changed when the upload is complete; or the incomplete file may have a modified name (e.g. original_filename.ext.part) – it all depends on the server-side software used for the upload.
If you control the server, make your own metadata, e.g. create an empty flag file alongside the newly uploaded file when upload is finished.
In the general case, I'm afraid, the best you can do is monitor file size and consider the file completely uploaded if its size is not changing for a while. Make this interval sufficiently large (on the order of minutes).
Your question leaves out a few details, but I'll try to answer.
If you're running your status checker program on the same server that's running the FTP service:
1) Depending on your operating system: if you're using Linux and you've built inotify into your kernel, you could use pyinotify to watch your upload directory – inotify distinguishes open, modify, and close events and lets you watch filesystem events asynchronously, so you're not constantly polling (a sketch follows below). OS X and Windows both have similar but differently implemented facilities.
2) You could pythonically tail -f the server log to see when a new file is put on the server (if you're even logging that) and just update when you see related messages.
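A sketch of option 1 using pyinotify (the upload directory is an illustrative assumption):
import pyinotify

class UploadHandler(pyinotify.ProcessEvent):
    def process_IN_CLOSE_WRITE(self, event):
        # The writer closed the file, so the upload should be complete.
        print('upload finished: ' + event.pathname)

wm = pyinotify.WatchManager()
wm.add_watch('/srv/ftp/uploads', pyinotify.IN_CLOSE_WRITE)
notifier = pyinotify.Notifier(wm, UploadHandler())
notifier.loop()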
If you're running your program remotely
3) If your status-checking utility has to run on a host remote from the FTP server, you'd have to poll the file and build in some logic to detect size changes. You can use the FTP SIZE command for this; it returns an easily parsed string.
You'd have to put some logic into it such that if the file size gets smaller, you assume the file is being replaced, and then wait for it to grow until it stops growing and stays the same size for some duration. If the archive is compressed in a way that lets you verify a checksum, you could then download it, checksum it, and then re-upload it to the remote site.
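A sketch of that polling approach with ftplib (the host, credentials, file name, and stability window are illustrative):
import time
from ftplib import FTP

def wait_until_stable(ftp, path, interval=60, stable_checks=3):
    # Return once the size has stayed the same for several checks in a row.
    last_size = -1
    stable = 0
    while stable < stable_checks:
        time.sleep(interval)
        size = ftp.size(path)  # the FTP SIZE command
        if size == last_size:
            stable += 1
        else:
            stable = 0
            last_size = size

ftp = FTP('ftp.example.com')
ftp.login('user', 'password')
ftp.sendcmd('TYPE I')  # SIZE is only reliable in binary mode
wait_until_stable(ftp, 'big-file.bin')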

Is there something between a normal user account and root?

I'm developing an application that manages network interfaces on behalf of the user, and it calls out to several external programs (such as ifconfig) that require root to make changes (specifically, changing the IP address of a local interface, etc.). During development, I have been running the IDE as root (ugh) and the debugger as root (double-ugh). Is there a nice way for the end user to run these under a non-root account? I strongly dislike the size of the attack surface presented by GTK, wxPython, Python, and my application when it runs as root.
I have looked into capabilities, but they look half-baked and I'm not sure whether I'd be able to use them in Python, especially if they apply on a per-thread basis. The only option I haven't explored is a daemon that has the setuid bit set and does all the root-type stuff on behalf of the UI. I'm hesitant to introduce that complexity this early in the project, as running as root is not a dealbreaker for the users.
Your idea about the daemon has much merit, despite the complexity it introduces. As long as the actions don't require some user interface interaction as root, a daemon allows you to control what operations are allowed and disallowed.
However, you can use sudo to create a controlled compromise between root and normal users: simply grant sudo access to the users in question for the specific tools they need. That reduces the attack surface by allowing only "permitted" root launches.
What you want is a "Group"
You create a group, specify that the account wanting to perform the action belongs to the group, and then specify that the resource you want access to is a member of that group.
Sometimes group management can be kind of irritating, but it should allow you to do anything you want, and it's the user that is authorized, not your program.
(If you want your program authorized, you can create a specific user to run it as and give that user the proper group membership, then su to that group within your program to execute the operation without giving the running user the ability.)
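For example, the shell side of the group approach might look like this (the group, user, and resource names are illustrative, and this only helps for resources whose access is actually controlled by file permissions):
groupadd netadmin
usermod -a -G netadmin alice
chgrp netadmin /path/to/resource
chmod g+rw /path/to/resource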
You could create and distribute an SELinux policy for your application. SELinux allows the kind of fine-grained access control that you need. If you can't or won't use SELinux, then the daemon is the way to go.
I would not run the application as root full-time, but you might want to explore making your application setuid root, or setuid to some id that can become root using something like sudo for particular applications. You might be able to set up an account that cannot log in, use setuid to change your program's id (temporarily, when needed), and have sudo set up to not prompt for a password but always allow access to that account for specific tasks.
This way your program has no special privileges when running normally, only elevates its privileges when needed, and is restricted by sudo to running only certain programs.
It's been a while since I've done much Unix development, so I'm not really sure whether it's possible to set up sudo to not prompt for a password (or even whether there is an API for it), but as a fallback you could enable setuid to root only when needed.
[EDIT] It looks like sudo has a NOPASSWD mode, so I think it should work, since you're running the programs as external commands.
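For reference, a NOPASSWD rule in /etc/sudoers (edit it with visudo) looks like this; the user name and command path are illustrative:
alice ALL = (root) NOPASSWD: /sbin/ifconfig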
The traditional way would be to create and use a setuid helper to do whatever you need. Note that, however, properly writing a setuid helper is tricky (there are several attack vectors you have to protect against).
The modern way would be to use a daemon (running as root, started on boot) which listens to requests from the rest of the application. This way, your attack surface is mostly limited to whichever IPC you chose (I'd suggest d-bus, which seems to be the modern way).
Finally, if you are managing network interfaces, what you are doing is very similar to what network-manager does on a modern distribution. It would be a good idea either to try to integrate what you are doing with network-manager somehow (so the two do not conflict), or at least to look at how it works.
There's no single user that is halfway between a "normal" user and root. You have root, and then you have users; users can have differing levels of capabilities. If you want something that's more powerful than a "normal" user but not as powerful as root, you just create a new user with the capabilities you want, but don't give it the privileges you don't want it to have.
I'm not familiar enough with Python to tell you what the necessary commands would be in that language, but you should be able to accomplish this by forking and using a pipe to communicate between the parent and child processes. Something along the lines of:
Run the program as root via sudo or suid
On startup, the program immediately forks and establishes a pipe for communication between the parent and child processes
The child process retains root power, but just sits there waiting for input from the pipe
The parent process drops root (changes its uid back to that of the user running it), then displays the GUI, interacts with the user, and handles all operations which are available to a non-privileged user
When an operation is to be performed which requires root privileges, the (non-root) parent process sends a command down the pipe to the (root) child process which executes it and optionally reports back to the parent
This is likely to be a bit easier to write than an independent daemon, as well as more convenient to run (since you don't need to worry about whether the daemon is running or not), while also allowing the GUI and other things which don't need root powers to be run as non-root.
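A minimal sketch of that fork-and-drop pattern (run via sudo; the 'set-ip' command protocol and the use of ifconfig are illustrative assumptions):
import os
import subprocess

def privileged_loop(pipe_r):
    # Child: keeps root and executes whitelisted commands read from the pipe.
    with os.fdopen(pipe_r) as commands:
        for line in commands:
            request = line.split()
            if len(request) == 3 and request[0] == 'set-ip':
                subprocess.call(['ifconfig', request[1], request[2]])
            # anything not on the whitelist is silently ignored

pipe_r, pipe_w = os.pipe()
if os.fork() == 0:
    os.close(pipe_w)
    privileged_loop(pipe_r)  # child retains root
    os._exit(0)
os.close(pipe_r)
# Parent: drop root back to the invoking user before any GUI work.
os.setgid(int(os.environ.get('SUDO_GID', os.getgid())))
os.setuid(int(os.environ.get('SUDO_UID', os.getuid())))
# ... GUI runs here as the unprivileged user; when a privileged
# operation is needed, send a command down the pipe:
os.write(pipe_w, b'set-ip eth0 192.168.1.50\n')
os.close(pipe_w)
os.wait()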
