Is there a way to have a python program react to an opened file? For example, can I get it to do something when I open a text file or another python file?
The short answer is No.
The long answer is: It depends on what you mean by "open"—but for most reasonable definitions, on any modern macOS, it will be doable, but difficult, and will likely break in 10.14 or 10.15.
For example, let's say you're looking to hook every POSIX-level open by any process on the system. The DTrace API provides a way to do that. But if you try to use it:
$ sudo dtruss -t open_nocancel -f -p 1
… if you're on 10.11 or later (when System Integrity Protection was introduced), you'll see a message like this:
dtrace: system integrity protection is on, some features will not be available
And then, when someone opens a file, you'll either see nothing at all, or, at best, a string of errors like this:
dtrace: error on enabled probe ID 123 (ID 456: syscall::thread_selfid:entry): invalid user access in action #2 at DIF offset 0
You can read about SIP (System Integrity Protection) runtime protection in Apple's documentation, or in various third-party blog posts, but in recent versions of macOS there's basically no way to disable it except from recovery mode, short of some major hackery.
Is there any way to get around it? For specific limited uses, yes. While that dtruss command above doesn't work, you can do this:
$ sudo /usr/bin/filebyproc.d
Or even this:
$ sudo dtrace -n 'syscall::open*:entry { printf("%s %s", execname, copyinstr(arg0)); }'
… and you could replace that printf with code that executes your Python script, instead of trying to run this in a subprocess and parse its output.
And you will get output… but not for all processes.
On 10.13, all processes that are specifically blacklisted by SIP won't show up at all. And sandboxed apps—which includes things like TextEdit, and everything you can install off the App Store—will only show files inside their own sandbox, not files you pass them explicitly. Which makes it a lot less useful.
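If you do decide to prototype the subprocess-and-parse approach anyway (which, as noted above, is the less preferred option), a rough sketch might look like this. It is macOS only, must be run as root, remains subject to all the SIP caveats above, and the parsing assumes dtrace's -q quiet-mode output format:

import subprocess

# The -q flag suppresses dtrace's CPU/ID/FUNCTION header so that only the
# printf output ("execname path") is emitted, one event per line.
DTRACE_CMD = [
    "dtrace", "-q", "-n",
    'syscall::open*:entry { printf("%s %s\\n", execname, copyinstr(arg0)); }',
]

def handle_open(process_name, path):
    # Placeholder: put whatever your script should do on a file open here.
    print(f"{process_name} opened {path}")

proc = subprocess.Popen(DTRACE_CMD, stdout=subprocess.PIPE, text=True)
for line in proc.stdout:
    line = line.strip()
    if line:
        name, _, path = line.partition(" ")
        handle_open(name, path)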
What about getting around it in general? Well, then you're basically asking how to write a rootkit. Find some exploit in SIP/Darwin/Mach, do a lot of complicated work to take advantage of it, and then when 10.14 comes out, start all over again because Apple closed the exploit.
You can get alerts on create, delete, modify, and move of directories/files using tools like inotify, fswatch (for macOS), or watchdog. However, I'm not aware of a way to get an alert on a file open in the general case. You'd probably need to use lsof, or do what lsof does for you: scan through /proc/*/fd. That's polling, though, not the event-driven approach it sounds like you want.
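For completeness, here is a minimal watchdog sketch (pip install watchdog; the watched path is a placeholder). It reacts to create/delete/modify/move events but, again, not to plain opens:

import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class ChangeHandler(FileSystemEventHandler):
    def on_any_event(self, event):
        # event.event_type is 'created', 'deleted', 'modified' or 'moved'
        print(event.event_type, event.src_path)

observer = Observer()
observer.schedule(ChangeHandler(), path="/some/directory", recursive=True)  # placeholder path
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()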
I am currently interning at a place where they've asked me to make a standalone python program to do something (say X).
That program is to be run by commands sent from their proprietary software, which is written in their proprietary language. The reason I'm saying "proprietary" so many times is that they aren't willing to let me anywhere near their code. I am just supposed to write Python code that does X based on the input given by their software.
So is there a way I can make an API and wrap it around my code so as to let the software control it? Also I need to make the whole thing standalone (maybe an installer of some kind) so that they don't have to install Python and the accompanying modules (like opencv) just to run my script?
All I could get out of them was "there are dll files that will be calling your app and we want an executable"
Any program can execute any other program (if it has the appropriate rights), so there is no real distinction between a "Python file" and a "Python executable". That is because Python is an interpreted language: the Python source files and the "final Python program" are identical (assuming CPython), in contrast to, e.g., a C program, where the source files and the executable are vastly different.
If you are on Windows, there is the additional problem that the user must have Python installed to execute .py files. There are some ways to mitigate that problem: there are Python libraries that "freeze" the Python interpreter and your code into a single .exe file (from the comment by Bakuriu, see e.g. freeze). You could bundle the Python interpreter with your code. Or you can just tell your users to install Python (if the number of users is low, that might be a good way).
"API" is just a fancy way of saying "this is how you communicate with my programm". This might be how you call a library (e.g. what functions a python module exports) or this might be an HTTP API or which command line arguments are passed or which protocoll over an TCP socket is spoken. Without knowing which API you are supposed to implement you cannot fulfill your job.
Without knowing further specifications (what input does the other program give to yours, and how does it call your program?), it's very hard to say anything more helpful.
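As a hedged sketch of what such a command-line "API" could look like (the argument names and the do_x body are made up, since the real inputs are unknown):

import argparse
import sys

def do_x(input_path, threshold):
    # Placeholder for whatever "X" actually is.
    return f"processed {input_path} with threshold {threshold}"

def main():
    parser = argparse.ArgumentParser(description="Standalone tool that does X")
    parser.add_argument("input_path", help="input handed over by the calling software")
    parser.add_argument("--threshold", type=float, default=0.5, help="example option")
    args = parser.parse_args()
    print(do_x(args.input_path, args.threshold))
    return 0

if __name__ == "__main__":
    sys.exit(main())

Once the interface is agreed on, a freezing tool such as PyInstaller can bundle a script like this, together with Python and modules like opencv, into a single executable that their DLLs can simply spawn with arguments.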
I recently found a fantastic Python library for compiling Sass really fast!
libsass-python seems to be very good and really fast.
How can I use it to watch for any change in a Sass folder or file and compile it to CSS?
I do not understand how to pass a file or how to use the --watch option.
Thanks!
You may try Boussole, which works on top of libsass-python with a per-project configuration and comes with a "watch" command (using watchdog).
From the parent directory of your scss sources, use:
boussole startproject
If needed, you can change settings options (from generated settings.json) then type:
boussole watch
The solution described here (the --watch option) was removed from libsass-python in version 0.13.0 (see the release notes), released in 2017.
Therefore this solution will no longer work.
As a replacement, you can use boussole, as advertised in another answer.
The rest of this post can be ignored unless you are using versions older than 0.13.0.
According to the help instructions (http://hongminhee.org/libsass-python/sassc.html), you can watch a single file for modifications simply with:
$ sassc --watch source.scss target.css
Now, I get that you want to watch all the files contained in a folder, and it doesn't seem that the command-line utility provides that.
From what I can tell, I see two possible workarounds.
1: launch several sassc instances, one for each of your files. It's pretty dirty, but it doesn't require any effort, and I guess it is okay if you don't have too many files. Don't forget to terminate all the processes (with killall, for instance).
$ sassc --watch a.scss a.css & sassc --watch b.scss b.css # etc.
This is really not a great way to handle things, but it can be considered a temporary solution if you're in a hurry.
2: use libsass inside a Python program that triggers compilation when a watched file is saved. To that end you can use another library like watchdog or pyinotify (see the sketch below).
This seems to be a much better way to handle things.
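A minimal sketch of that second option, assuming the libsass and watchdog packages are installed (the scss/css directory names are placeholders):

import time
import sass
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

SASS_DIR = "scss"   # placeholder: your Sass sources
CSS_DIR = "css"     # placeholder: where the compiled CSS should go

class SassCompiler(FileSystemEventHandler):
    def on_modified(self, event):
        if event.src_path.endswith((".scss", ".sass")):
            # Recompile the whole source directory whenever one file changes.
            sass.compile(dirname=(SASS_DIR, CSS_DIR), output_style="compressed")
            print("recompiled after change to", event.src_path)

observer = Observer()
observer.schedule(SassCompiler(), path=SASS_DIR, recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()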
Hope this was helpful, good luck !
I would like to find out how to write Python code which sets up a process to run on startup, in this case runlevel two.
I have done some reading, yet it has left me unclear as to which method is most reliable across different systems. I originally thought I would just edit /etc/inittab with Python's file I/O, but then I found out that my computer's inittab was empty.
What should I do? Which method of setting something to startup on boot is most reliable? Does anyone have any code snippets lying around?
I may as well answer my own question with my findings.
On Debian, Ubuntu, and CentOS systems there is a file named /etc/rc.local. If you use Python's file I/O to edit that file, you can add a command that will be run at the end of all the multi-user boot levels. This facility is still present on systems that use Upstart.
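For example, a rough sketch of that idea (run as root; the command being added is a placeholder, and since rc.local conventionally ends with exit 0, the new line has to go before it):

COMMAND = "/usr/local/bin/my_daemon &\n"  # placeholder for whatever should run at boot

with open("/etc/rc.local") as f:
    lines = f.readlines()

if COMMAND not in lines:
    for i in range(len(lines) - 1, -1, -1):
        if lines[i].strip() == "exit 0":
            lines.insert(i, COMMAND)  # put the command just before the final "exit 0"
            break
    else:
        lines.append(COMMAND)  # no "exit 0" found, just append
    with open("/etc/rc.local", "w") as f:
        f.writelines(lines)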
On BSD I have no idea. If you know how to make something go on startup please comment to improve this answer.
Arch Linux and Fedora use systemd to start daemons; see the Arch wiki page for systemd. Basically you need to create a systemd service file and symlink it (thanks, Emil Ivanov).
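If you go the systemd route, a sketch of automating the unit creation from Python might look like the following (the unit name, description, and script path are all invented for illustration; writing the file by hand and running systemctl yourself works just as well):

import subprocess
from pathlib import Path

UNIT_PATH = Path("/etc/systemd/system/my_python_job.service")  # hypothetical unit name
UNIT_TEXT = """\
[Unit]
Description=My Python startup job

[Service]
ExecStart=/usr/bin/python3 /usr/local/bin/my_python_job.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
"""

UNIT_PATH.write_text(UNIT_TEXT)  # needs root
subprocess.run(["systemctl", "daemon-reload"], check=True)
# "enable" creates the symlink that makes the unit start at boot; --now also starts it immediately.
subprocess.run(["systemctl", "enable", "--now", UNIT_PATH.name], check=True)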
The Story
After cleaning up my Dreamhost shared server's home folder from all the cruft accumulated over time, I decided to start afresh and compile/reinstall Python.
All tutorials and snippets I found seemed overly simplistic, assuming (or ignoring) a bunch of dependencies needed by Python to compile all modules correctly. So, starting from http://andrew.io/weblog/2010/02/installing-python-2-6-virtualenv-and-virtualenvwrapper-on-dreamhost/ (so far the best guide I found), I decided to write a set-and-forget Bash script to automate this painful process, including along the way a bunch of other things I am planning to use.
The Script
I am hosting the script on http://bitbucket.org/tmslnz/python-dreamhost-batch/src/
The TODOs
So far it runs fine, and does all it needs to do in about 900 seconds, giving me at the end of the process a fully functional Python / Mercurial / etc... setup without even needing to log out and back in.
I thought this might be of use to others too, but there are a few things I think it's missing, and I am not quite sure how to go about them, what the best way to do it is, or whether this just doesn't make sense at all.
Check for errors and break
Check for minor version bumps of the packages and give warnings
Check for known dependencies
Use arguments to install only some of the packages instead of commenting out lines
Organise the code in a manner that's easy to update
Optionally make the installers and compiling silent, with error logging to file
Fail-proof .bashrc modification, to prevent breaking SSH logins and having to log back in via FTP to fix it
EDIT: The implied question is: can anyone, more bashful than me, offer general advice on the worthiness of the above points or highlight any problems they see with this approach? (see my answer to Ry4an's comment below)
The Gist
I am no UNIX or Bash or compiler expert, and this has been built iteratively, by trial and error. It is somewhat heading in the direction of apt-get (well, 1% of it...), but since Dreamhost and others obviously cannot give root access on shared servers, this looks to me like a potentially very useful workaround, particularly with some community work involved.
One way to streamline this would be to make it work with one of: capistrano/fabric, puppet/chef, jhbuild, or buildout+minitage (and a lot of cmmi tasks). There are some opportunities for factoring in common code, especially with something more high-level than bash. You will run into bootstrapping issues, however, so maybe leave good enough alone.
If you want to look into userland package managers, there are autopackage (bootstraps well), nix (quickstart), and stow (simple, but helps with isolation).
Honestly, I would just build packages with a name prefix for all of the pieces and have them install under /opt so that they're out of the way. That way it only takes the download time and a bit of install time to do.
Is there anything equivalent or close in terms of functionality to Python's virtualenv, but for Perl?
I've done some development in Python and a possibility of having non-system versions of modules installed in a separate environment without creating any mess is a huge advantage. Now I have to work on a new project in Perl, and I'm looking for something like virtualenv, but for Perl. Can you suggest any Perl equivalent or replacement for python's virtualenv?
I'm trying to set up X different sets of non-system Perl packages for Y different applications to be deployed. Even worse, these applications may require different versions of the same package, so each of them may need to be installed in a separate module/library environment. You may want to do this manually for X < Y < 3. But you should not do this manually for 10 > Y > X.
Ideally what I'm looking should work like this:
perl virtualenv.pl my_environment
. my_environment/bin/activate
wget http://.../foo-0.1.tar.gz
tar -xzf foo-0.1.tar.gz ; cd foo-0.1
perl Makefile.PL
make install # <-- package foo-0.1 gets installed inside my_environment
perl -MCPAN -e 'install Bar' # <-- now package Bar with all its deps gets installed inside my_environment
There's a tool called local::lib that wraps up all of the work for you, much like virtualenv. It will:
Set up @INC in the process where it's used.
Set PERL5LIB and other such things for child processes.
Set the right variables to convince CPAN, MakeMaker, Module::Build, etc. to install libraries and store configuration in a local directory.
Set PATH so that installed binaries can be found.
Print environment variables to stdout when used from the command line, so that you can put eval $(perl -Mlocal::lib) in your .profile and then mostly forget about it.
I've used schroot for this purpose. It is a bit heavier than virtualenv but you can be sure that nothing will leak in that shouldn't.
Schroot manages a chroot environment for you, but mounts your home directory in the chroot so it appears like a normal shell session, just using the binaries and libraries in the chroot.
I think it may be debian/ubuntu only though.
After setting up the schroot, your script above would look like
schroot -c my_perl_dev
wget ...
See http://www.debian-administration.org/articles/566 for an interesting article about it
Also check out perl-virtualenv; this seems to be a wrapper around local::lib, as suggested by Hobbs, but it creates a bin/activate and bin/deactivate so you can use it just like the Python tool.
I've been using it quite successfully for a month or so without realising it wasn't as standard as perhaps it should be.
It makes it a lot easier to set up a working virtualenv for Perl: while local::lib will tell you what variables you need to set, perl-virtualenv creates an activate script which does it for you.
While investigating, I discovered this and some other pages (this one is too old and misses new technologies, this reddit post is a slight misdirect).
The problem with perlbrew and plenv is that they seem to be replacements for pyenv, not virtualenv. As noted here, pyenv is for managing Python versions; virtualenv is for managing per-project module versions. So, yes, in some ways similar to local::lib, but with better usability.
I've not seen a proper answer to this question yet, but from what I've read, it looks like the best solution is something along the lines of:
Perl version management: plenv/perlbrew (with most people favouring the more contemporary bash-based plenv over the Perl-based perlbrew, from what I can see)
Module version management: Carton
Module installation: cpan (well, cpanminus anyway, ymmv)
To be honest, this is not an ideal setup, although I'm still learning, so it may yet prove superior. It just doesn't feel right. It certainly isn't a like-for-like replacement for virtualenv.
There are a couple of posts I've found saying "it is possible" but neither has gone any further.
I am not sure whether this is the same as that virtualenv thing you are talking about, but have a look at the @INC special variable in the perlvar manpage.
Programs can modify which directories they check for libraries with use lib. This lib directory can be relative to the current directory. Libraries from these directories will be used before system libraries, as they are placed at the beginning of the @INC array.
I believe cpan can also install libraries to specific directories. Granted, cpan draws from the CPAN site in order to install things, so this may not be the best option.
It looks like you just need to use the INSTALL_BASE configuration for Makefile.PL (or the --install_base option for Build.PL). What exactly do you need the solution to do for you? It sounds like you just need to get the installed module into the right place. You've presented your problem as an XY Problem by specifying what you think the solution is rather than letting us help you with your task.
See How do I keep my own module/library directory? in perlfaq8, for instance.
If you are downloading modules from CPAN, the latest cpan command (in App::Cpan) has a -j switch to allow you to choose alternate CPAN.pm configuration files. In those configuration files you can set the CPAN.pm options to install wherever you like.
Based on your clarification, it sounds like local::lib might work for you in single, simple cases, but I do this for industrial-strength deployments where I set up custom, private CPANs per application, and install directly from those custom CPANs. See my MyCPAN::App::DPAN module, for instance. From that, I use custom CPAN.pm configs that analyze their environment and set the proper values so that each application can install everything in a directory just for that application.
You might also consider distributing your application as a Task::. You install it like any other Perl module, but dependencies share that same setup (i.e. INSTALL_BASE).
What I do is start the CPAN shell (cpan) and install my own Perl 5.10 from it (I believe the command is install perl-5.10). This will ask for various configuration settings; I make sure to make it point to paths under /usr/local (or some other installation location other than the default).
Then I put its resulting location in my executable $PATH before the standard perl, and use its CPAN shell to install the modules I need (usually, a lot).
My Perl scripts all start with the line
#!/usr/bin/env perl
Never had a problem with this approach.