Modifying Apache ports.conf via Fabric script - Python

I am automating the deployment of a site that requires me to add a Listen port to ports.conf. Right now it is fine for me to just replace the existing file, but as new sites get added I would like to be able to modify it instead. I have seen examples of creating a backup of a file and writing out the modified file in Python. This seems to get me most of the way there and, Python-wise, I'm sure I can figure out the rest (making sure the change hasn't already been made, etc.). However, I'm not sure about doing this in Fabric. How would I go about executing the block of Python code remotely?

If you need to add a line to a configuration file (and do nothing if it's already there), you can use the append function in fabric.contrib.files.
Example:
from fabric.contrib.files import append
append('/etc/apache2/ports.conf', 'Listen 1234', use_sudo=True)
See http://docs.fabfile.org/en/1.7/api/contrib/files.html#fabric.contrib.files.append
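As a fuller illustration, here is a minimal fabfile sketch (Fabric 1.x; the task name and port are illustrative). append is already idempotent, but checking with contains first lets you reload Apache only when the line was actually missing:
from fabric.api import sudo, task
from fabric.contrib.files import append, contains

@task
def add_listen_port(port=1234):
    """Add a Listen directive to ports.conf, reloading Apache only if it changed."""
    conf = '/etc/apache2/ports.conf'
    line = 'Listen %s' % port
    if not contains(conf, line, exact=True, use_sudo=True):
        append(conf, line, use_sudo=True)
        sudo('service apache2 reload')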

Related

When using Watchman's watchman-make I want to access the names of the changed files

I am writing a watchman command with watchman-make and I'm at a loss when trying to access exactly what was changed in the directory. I want to run my upload.py script and inside the script I would like to access filenames of newly created files in /var/spool/cups-pdf/ANONYMOUS .
so far I have
$ watchman-make -p '/var/spool/cups-pdf/ANONYMOUS' --run 'python /home/pi/upload.py'
I'd like to pass another argument to python upload.py with the exact filepath of the newly created file, so that upload.py can send the new file over to my database.
I've been looking at the docs of watchman and the closest thing I can think to use is a trigger object. Please help!
Solution with watchman-wait:
Assuming project layout like this:
/posts/_SUBDIR_WITH_POST_NAME_/index.md
/Scripts/convert.sh
And the shell script like this:
#!/bin/bash
# File: convert.sh
SrcDirPath=$(cd "$(dirname "$0")/../"; pwd)
cd "$SrcDirPath"
echo "Converting: $SrcDirPath/$1"
Then we can launch watchman-wait like this:
watchman-wait . --max-events 0 -p 'posts/**/*.md' | while read -r line; do ./Scripts/convert.sh "$line"; done
When the file /posts/_SUBDIR_WITH_POST_NAME_/index.md changes, the output looks like this:
...
Converting: /Users/.../Angular/dartweb_quickstart/posts/swift-on-android-building-toolchain/index.md
Converting: /Users/.../Angular/dartweb_quickstart/posts/swift-on-android-building-toolchain/index.md
...
watchman-make is intended to be used together with tools that will perform a follow-up query of their own to discover what they want to do as a next step. For example, running the make tool will cause make to stat the various deps to bring things up to date.
That means that your upload.py script needs to know how to do this for itself if you want to use it with watchman.
You have a couple of options, depending on how sophisticated you want things to be:
Use pywatchman to issue an ad-hoc query
If you want to be able to run upload.py whenever you want and have it figure out the right thing (just like make would do), then you can have it ask watchman directly. You can have upload.py use pywatchman (the python watchman client) to do this. pywatchman will get installed if the watchman configure script thinks you have a working python installation. You can also pip install pywatchman. Once you have it available and in your PYTHONPATH:
import os
import pywatchman

client = pywatchman.client()
# Make sure watchman is watching this directory (a no-op if it already is).
client.query('watch-project', os.getcwd())
# Ask for everything that changed since the last query that used this cursor.
result = client.query('query', os.getcwd(), {
    "since": "n:pi_upload",
    "fields": ["name"]})
print(result["files"])
This snippet uses the since generator with a named cursor to discover the list of files that changed since the last query was issued using that same named cursor. Watchman will remember the associated clock value for you, so you don't need to complicate your script with state tracking. We're using the name pi_upload for the cursor; the name needs to be unique among the watchman clients that might use named cursors, so naming it after your tool is a good idea to avoid potential conflict.
This is probably the most direct way to extract the information you need without requiring that you make more invasive changes to your upload script.
Use pywatchman to initiate a long running subscription
This approach will transform your upload.py script so that it knows how to directly subscribe to watchman, so instead of using watchman-make you'd just directly run upload.py and it would keep running and performing the uploads. This is a bit more invasive and is a bit too much code to try and paste in here. If you're interested in this approach then I'd suggest that you take the code behind watchman-wait as a starting point. You can find it here:
https://github.com/facebook/watchman/blob/master/python/bin/watchman-wait
The key piece of this that you might want to modify is this line:
https://github.com/facebook/watchman/blob/master/python/bin/watchman-wait#L169
which is where it receives the list of files.
Why not triggers?
You could use triggers for this, but we're steering folks away from triggers because they are hard to manage. A trigger runs in the background and sends its output to the watchman log file; it can be difficult to tell whether it is running, or to stop it once it is.
Speaking of unix, what about watchman-wait?
We also have a command that emits the list of changed files as they change, and its interface is closer to the unix model: it lets you feed a list of files to your tool on stdin. You could potentially stream the output from watchman-wait in your upload.py. This would make it have some similarities with the subscription approach, but without directly using the pywatchman client.
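If you go that route, a minimal sketch (the watched path is borrowed from the question; the pattern is a guess at PDF output, and upload() stands in for your existing logic):
import subprocess

# Stream changed file names from watchman-wait as they arrive.
proc = subprocess.Popen(
    ['watchman-wait', '/var/spool/cups-pdf/ANONYMOUS',
     '--max-events', '0', '-p', '**/*.pdf'],
    stdout=subprocess.PIPE)
for line in iter(proc.stdout.readline, b''):
    filename = line.decode('utf-8').strip()
    # upload(filename)  # hypothetical: call your existing upload logic here
    print('changed:', filename)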

Perforce API: Get Latest Revision of a Subdirectory

I have downloaded and installed the Perforce API for Python.
I'm able to run the examples on this page:
http://www.perforce.com/perforce/doc.current/manuals/p4script/03_python.html#1127434
But unfortunately the documentation seems incomplete. For example, the P4 class has a method called run_sync, but it isn't documented anywhere (in fact, it doesn't even show up if you run dir(p4) in the Python interactive interpreter, even though you can use the method just fine there).
So I'm struggling with figuring out how to use the API for anything beyond the trivial examples on the page I linked to above.
I would like to write a script which simply downloads the latest revision of a subdirectory to the filesystem of the computer running it, and does nothing else. I don't want the server to change in any way, and I don't want any indication that the files came from Perforce (when you get files via the Perforce client, it marks them read-only in your filesystem until you check them out; that's silly). I just need to pull down a snapshot of what the subdirectory looked like at the moment the script was run.
The Python API follows the same basic structure as the command line client (both are very thin wrappers over the same underlying API), so you'll want to look at the command line client documentation; for example, look at "p4 sync" to understand how "run_sync" in P4Python works:
http://www.perforce.com/perforce/r14.2/manuals/cmdref/p4_sync.html
For the task you're describing I would do the following (I'll describe it in terms of Perforce commands since my Python is a little rusty; once you know what commands you're running it should be pretty simple to translate into Python, since the P4Python doc has examples of things like creating and modifying a client spec, which is the hardest part). A rough P4Python sketch follows the steps below.
1) Create a client that maps the desired depot directory to the desired local filesystem location, e.g. if you want the directory "//depot/foo/..." downloaded to "/usr/team/foo" you'd make a client that looks like:
Client: mytempclient123847
Root: /usr/team/foo
View:
//depot/foo/... //mytempclient123847/...
You should set the "allwrite" option on the client, since you said you don't want the synced files to be read-only:
Options: allwrite noclobber nocompress unlocked nomodtime rmdir
2) Sync, using the "-p" option to minimize server impact (the server will not record that you "have" the files).
3) Delete the client.
(I'm omitting some details like making sure that you're authenticated correctly -- that's a whole other potential challenge depending on your server's security and whether it's using external authentication, but it sounds like that's not the part you're having trouble with.)
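Here is the promised sketch using P4Python (untested; server address, user, paths, and client name are all placeholders):
from P4 import P4

p4 = P4()
p4.port = "perforce:1666"   # placeholder connection details
p4.user = "me"
p4.client = "mytempclient123847"
p4.connect()
try:
    # 1) Create the temporary client spec described above.
    client = p4.fetch_client("mytempclient123847")
    client["Root"] = "/usr/team/foo"
    client["Options"] = "allwrite noclobber nocompress unlocked nomodtime rmdir"
    client["View"] = ["//depot/foo/... //mytempclient123847/..."]
    p4.save_client(client)
    # 2) Sync with -p so the server doesn't record that you have the files.
    p4.run_sync("-p", "//depot/foo/...")
    # 3) Delete the temporary client again.
    p4.run("client", "-d", "mytempclient123847")
finally:
    p4.disconnect()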

How do I embed an IPython Notebook in an iframe (new)

I have successfully achieved this using the method documented at Run IPython Notebook in Iframe from another Domain . However, this required editing the user config file. I was really hoping to be able to set this up via the command-line instead (for reasons).
http://ipython.org/ipython-doc/1/config/overview.html indicates that configuration via the command line is possible. However, all the examples are for simple true/false value assignment. To set the server up to allow embedding, it is necessary to set a value inside a dictionary. I can't work out how to pass a dictionary in through the command-line.
Another acceptable option would be a configuration overrides file.
Some people will wonder -- why all this trouble!?!
First of all, this isn't for production. I'm trying to support non-developers by writing a web-based application which integrates Ipython notebooks within it using iframes. Despite being on the same machine, it appears that the different port number used is enough to mean that I can't do simple iframe embedding without setting the x-frame insecurity bit.
Being able to do this via the command line lets me set the behaviour in the launch script rather than having to bundle a special configuration file inside my app, and also write an installer.
I really hope I've made the question clear enough! Thanks for any and all suggestions and help!
Looking over the IPython source for the loaders, it seems like it will execute whatever Python code you put on the right-hand side. I've not tested it, but based on the link you provided, you can probably pass something like
--NotebookApp.webapp_settings="{'headers': {'X-Frame-Options': 'ALLOW-FROM https://example.com/'}}"
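So the full launch command (untested sketch; the origin URL is a placeholder) would be:
ipython notebook --NotebookApp.webapp_settings="{'headers': {'X-Frame-Options': 'ALLOW-FROM https://example.com/'}}"
The outer double quotes matter: the dictionary literal contains spaces and single quotes, so it has to reach IPython's loader as a single shell argument.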

Python fabric put statistics

When I put a file on a remote server (using put()), is there any way I can see the upload information or statistics printed to the stdout file descriptor?
There's no such way, according to the documentation. You could however try the project tools.
There's also the option to play with fabric's local function, but that of course breaks the whole host concept.
There's also no way to make fabric more verbose than the default (except for debugging). This makes sense, because fabric doesn't really work with terminal escape codes to delete lines again, and displaying statistics would print way too many lines. This would actually be a nice feature: detecting line deletions within fabric and applying them (just throwing the idea out for a potential pull request).
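As a workaround (my own sketch, not a fabric feature), you can compute rough statistics around the put() call yourself:
from fabric.api import put, task
import os
import time

@task
def put_with_stats(local_path, remote_path):
    """Upload a file and print rough transfer statistics."""
    size = os.path.getsize(local_path)
    start = time.time()
    put(local_path, remote_path)
    elapsed = time.time() - start
    print('uploaded %d bytes in %.2fs (%.1f KB/s)'
          % (size, elapsed, size / 1024.0 / max(elapsed, 1e-6)))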

Background process in Windows

I need to run a Python program in the background. The script takes an input file, processes it, and creates a new output file. If I then change the input file's content, I don't want to have to run the script again by hand: it should keep running continuously in the background and regenerate the output file. Please, if someone knows the answer to this, let me know.
Thank you
Basically, you have to set up a so-called FileWatcher, i.e. some mechanism which looks out for changes in a file.
There are several techniques for watching file/directory changes in Python. Have a look at this question: Monitoring contents of files/directories?. Another link is here; it is about directory changes, but file changes are handled in a similar way. You could also google for "watch file changes python" to get a lot of answers :)
Note: if you're programming on Windows, you should probably implement your program as a Windows service; look here for how to do that.
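The simplest variant is plain polling. A minimal sketch (file names and the processing step are illustrative; the links above cover event-based alternatives):
import os
import time

INPUT_FILE = "input.txt"     # illustrative paths
OUTPUT_FILE = "output.txt"

def process(src, dst):
    # Placeholder for your existing processing logic.
    with open(src) as f, open(dst, "w") as out:
        out.write(f.read().upper())

last_mtime = None
while True:
    mtime = os.path.getmtime(INPUT_FILE)
    if mtime != last_mtime:      # input changed (or first run)
        process(INPUT_FILE, OUTPUT_FILE)
        last_mtime = mtime
    time.sleep(1)                # poll once per second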
