I have two Python programs. The first is an IRC bot that uses sockets to connect to an IRC server.
This program has a loop that reads every PRIVMSG from a specific channel.
The second program should get whatever the first program outputs (the PRIVMSG in this case) and run functions with it.
So, it's basically:
while 1:
    data = irc.recv(2048)
    if data.find("PRIVMSG " + current_channel + " :") != -1:
        send_to_second_program(data)
The second program is:
while 1:
    data = get_from_first_program()
    do_stuff(data)
Is there a way to make this happen without using modules? The two programs should be separate.
You could use literally dozens of ways to communicate, and you provide no context or requirements, so I assume that you are working on a small project. I would recommend RPC (remote procedure call). Using RPC you can call Python functions of another application as if they were available locally. Check out an RPC library for Python, like http://www.zerorpc.io/, which seems more than enough for your use case.
There are some downsides to using RPC, which you can read about in the answer to this question for example, but if the scope is limited I think this is one of the easiest, non-hack ways to achieve your goal.
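A minimal sketch of what that could look like with zerorpc, applied to the two programs from the question (the class name, method name, and port are made up for illustration):

# second_program.py -- exposes do_stuff() as a remote procedure
import zerorpc

class Receiver(object):
    def process(self, data):
        do_stuff(data)
        return "ok"

server = zerorpc.Server(Receiver())
server.bind("tcp://127.0.0.1:4242")
server.run()

# first_program.py -- send_to_second_program() becomes a remote call
import zerorpc

client = zerorpc.Client()
client.connect("tcp://127.0.0.1:4242")

def send_to_second_program(data):
    client.process(data)

The two programs stay separate processes; zerorpc just hides the socket plumbing behind an ordinary method call.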
Assuming that both programs are in the same directory, you can import the other file using import.
For example, consider a file a.py. To import it in another file b.py, use import a. Then use a.something to access something defined in a.py.
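For instance, a tiny hypothetical sketch (the function name is made up):

# a.py
def do_stuff(data):
    print(data)

# b.py
import a
a.do_stuff("some PRIVMSG data")

Note that this runs everything in a single process, which may conflict with the requirement that the two programs stay separate.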
Over JSON (REST API). This is a service-oriented architecture, so you can scale your application easily.
Over a message queue. This is scalable too, but not very reliable.
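A minimal sketch of the message-queue option using the redis-py client (assuming a Redis server is running locally; the queue name is made up):

# first program: push each PRIVMSG onto the queue
import redis

r = redis.Redis()
r.lpush("privmsg_queue", data)

# second program: block until a message arrives, then handle it
import redis

r = redis.Redis()
while True:
    _, data = r.brpop("privmsg_queue")
    do_stuff(data)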
Related
I have to setup a program which reads in some parameters from a widget/gui, calculates some stuff based on database values and the input, and finally sends some ascii files via ftp to remote servers.
In general, I would suggest a python program for these tasks. Write a Qt widget as a gui (interactively changing views, putting numbers into tables, setting up check boxes, switching between various layers; I've never done something as complex in python, but I have some experience in IDL with event handling etc.), and set up data classes that have functions both to create the ascii files with the given convention and to send the files via ftp to some remote server.
However, since my company is a bunch of Windows users, each sitting at their personal desktop, installing python and all necessary libraries on each individual machine would be a pain in the ass.
In addition, in a future version the program is supposed to become smart and do some optimization 24/7. Therefore, it makes sense to put it to a server. As I personally rather use Linux, the server is already set up using Ubuntu server.
The idea is now to run my application on the server. But how can the users access and control the program?
The easiest way for everybody to access something like a common control panel would be a browser, I guess. I have to make sure only one person at a time is sending signals to the same units, but that should be doable via flags in the database.
After some google-ing, next to QtWebKit, django seems to be the first choice for such a task. But...
Can I run a full fledged python program underneath my web application? Is django the right tool to do so?
As mentioned previously, in the (intermediate) future (~1 year), we might have to implement some computationally expensive tasks. Is it then also possible to utilize C, as one would within normal python?
Another question I have is on the development. In order to become productive, we have to advance in small steps. Can I first create regular python classes, which can later on be imported into my web application? (The same question applies for widgets/Qt.)
Finally: Is there a better way to go? Any standards, any references?
Django is a good candidate for the website, however:
It is not a good idea to run heavy functionality from a website; it should happen in a separate process.
All functions should be asynchronous, i.e. you should never wait for something to complete.
I would personally recommend writing a separate process with a message queue; the website would only ask that process for statuses and always display a result immediately to the user.
You can use ajax so that the browser will always have the latest result.
ZeroMQ or Celery are useful for implementing the functionality.
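As a rough sketch of the Celery route (the broker URL is a placeholder and heavy_calculation is a made-up stand-in for the real work):

# tasks.py -- executed by a separate worker process, not the website
from celery import Celery

app = Celery("tasks",
             broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/0")

@app.task
def crunch(params):
    return heavy_calculation(params)

# in a Django view: enqueue and poll, never block
from tasks import crunch

result = crunch.delay(params)   # returns immediately with an AsyncResult
if result.ready():              # poll this from your ajax endpoint
    value = result.get()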
You can implement functionality in C pretty easily. I recommend, however, that you write that functionality as pure C with a SWIG wrapper rather than writing it as an extension module for Python. That way the functionality will be portable and not dependent on the Python website.
Per the Python documentation, subprocess.call should be blocking and wait for the subprocess to complete. In this code I am trying to convert a few xls files to a new format by calling LibreOffice on the command line. I assumed that the call to subprocess.call is blocking, but it seems I need to add an artificial delay after each call, otherwise I miss a few files in the out directory.
What am I doing wrong? And why do I need the delay?
from subprocess import call

for i in range(0, len(sorted_files)):
    args = ['libreoffice', '-headless', '-convert-to', 'xls',
            "%s/%s.xls" % (sorted_files[i]['filename'], sorted_files[i]['filename']),
            '-outdir', 'out']
    call(args)
    var = raw_input("Enter something: ")  # if I comment this line out, I don't get all the files in the out directory
EDIT: It might be hard to find the answer through the comments below. I used unoconv for document conversion, which is blocking and easy to work with from a script.
It's likely that libreoffice is implemented as some sort of daemon/intermediary process. The "daemon" will (effectively1) parse the command line and then farm the work off to some other process, possibly detaching it so that it can exit immediately. (Based on the -invisible option in the documentation, I strongly suspect that this is indeed the case you have.)
If this is the case, then your subprocess.call does do what it is advertised to do -- It waits for the daemon to complete before moving on. However, it doesn't do what you want which is to wait for all of the work to be completed. The only option you have in that scenario is to look to see if the daemon has a -wait option or similar.
1It is likely that we don't have an actual daemon here, only something which behaves similarly. See comments by abernert
The problem is that the soffice command-line tool (which libreoffice is either just a link to, or a further wrapper around) is just a "controller" for the real program, soffice.bin. It finds a running copy of soffice.bin and/or creates one, tells it to do some work, and then quits.
So, call is doing exactly the right thing: it waits for libreoffice to quit.
But you don't want to wait for libreoffice to quit, you want to wait for soffice.bin to finish doing the work that libreoffice asked it to do.
It looks like what you're trying to do isn't possible to do directly. But it's possible to do indirectly.
The docs say that headless mode:
… allows using the application without user interface.
This special mode can be used when the application is controlled by external clients via the API.
In other words, the app doesn't quit after running some UNO strings/doing some conversions/whatever else you specify on the command line; it sits around waiting for more UNO commands from outside, while the launcher just quits as soon as it has sent the appropriate commands to the app.
You probably have to use that above-mentioned external control API (UNO) directly.
See Scripting LibreOffice for the basics (although there's more info there about internal scripting than external), and the API documentation for details and examples.
But there may be an even simpler answer: unoconv is a simple command-line tool written using the UNO API that does exactly what you want. It starts up LibreOffice if necessary, sends it some commands, waits for the results, and then quits. So if you just use unoconv instead of libreoffice, call is all you need.
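For example, a sketch of the loop from the question rewritten to use unoconv instead (assuming unoconv is installed and on the PATH):

from subprocess import call

for f in sorted_files:
    # -f picks the output format, -o the output directory;
    # the call blocks until the conversion has actually finished
    call(['unoconv', '-f', 'xls', '-o', 'out',
          "%s/%s.xls" % (f['filename'], f['filename'])])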
Also notice that unoconv is written in Python, and is designed to be used as a module. If you just import it, you can write your own (simpler, and use-case-specific) code to replace the "Main entrance" code, and not use subprocess at all. (Or, of course, you can tear apart the module and use the relevant code yourself, or just use it as a very nice piece of sample code for using UNO from Python.)
Also, the unoconv page linked above lists a variety of other similar tools, some that work via UNO and some that don't, so if it doesn't work for you, try the others.
If nothing else works, you could consider, e.g., creating a sentinel file and using a filesystem watch, so at least you'll be able to detect exactly when it's finished its work, instead of having to guess at a timeout. But that's a real last-ditch workaround that you shouldn't even consider until eliminating all of the other options.
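For illustration only, a rough sketch of that detection idea, polling until the expected output file exists and its size stops changing (the helper and its thresholds are made up):

import os
import time

def wait_for_file(path, timeout=60, poll=0.5):
    # Returns True once the file exists and its size is stable
    # across two consecutive polls; False on timeout.
    deadline = time.time() + timeout
    last_size = -1
    while time.time() < deadline:
        if os.path.exists(path):
            size = os.path.getsize(path)
            if size == last_size:
                return True
            last_size = size
        time.sleep(poll)
    return False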
If libreoffice is using an intermediary (daemon) as mentioned by @mgilson, then one solution is to find out what program it's invoking, and then invoke that directly yourself.
I have two computers on a network. I'll call the computer I'm working on computer A and the remote computer B. My goal is to send a command to B, gather some information, transfer this information to computer A, and use it in some meaningful way. My current method follows:
Establish a connection to B with paramiko.
Use paramiko to execute a remote command e.g. paramiko.exec_command('python file.py').
Write the information to a file with pickle, and use paramiko.ftp to transfer the file to computer A.
Open this file and parse the information back into a usable form, like a class or dictionary.
This seems very ad-hoc. My question is, is there a better way to transfer data between computers using Python? I know how to transfer files, this is a different question. I want to make an object on B and use it on A.
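For reference, a minimal sketch of the workflow described above, but skipping the intermediate file by pickling to the remote script's stdout (the hostname, username, and remote script are placeholders):

import pickle
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('computer-b', username='user')

# file.py on B is assumed to pickle its result to stdout
stdin, stdout, stderr = client.exec_command('python file.py')
obj = pickle.loads(stdout.read())   # back to a usable object on A
client.close()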
I would recommend taking a look at twisted. It's the best python framework I've encountered for custom client/server applications like the one you describe.
I have been doing something similar on a larger scale (with 15 clients). I use the pexpect module to do essentially ssh commands on the remote machine (computer B). Another module to look into would be the subprocess module.
We are developing Versile Python which provides such object-level interaction, you may want to have a look. It includes an object-level framework for streams which could be applied to your use case (transferring file data).
I like the python-send-buffer command, however I very often use Python embedded in applications, or launch Python via a custom package management system (to launch Python with certain dependencies). In other words, I can't just run "python" and get a useful Python instance (something that python-send-buffer relies on).
What I would like to achieve is:
in any Python interpreter (or application that allows you to evaluate Python code), import a magic_emacs_python_server.py module (appending to sys.path as necessary)
In emacs, run magic-emacs-python-send-buffer
This would evaluate the buffer in the remote Python instance.
Seems like it should be pretty simple: the Python module listens on a socket, in a thread. It evaluates in the main thread, and returns the repr() of the result (or maybe captures stdout/stderr, or maybe both). The emacs module would just send text to the socket, wait for a string in response, and display it in a buffer.
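As a rough sketch of that design (the module name comes from the question; the wire protocol, port, and everything else are made up):

# magic_emacs_python_server.py
import socket
import threading
import queue

requests = queue.Queue()

def _listener(host="127.0.0.1", port=9999):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        chunks = []
        while True:
            chunk = conn.recv(4096)     # emacs sends the buffer, then half-closes
            if not chunk:
                break
            chunks.append(chunk)
        reply = queue.Queue()
        requests.put((b"".join(chunks).decode(), reply))
        conn.sendall(reply.get().encode())  # send the repr() back to emacs
        conn.close()

def serve():
    threading.Thread(target=_listener, daemon=True).start()
    while True:                         # the main thread does the evaluating
        code, reply = requests.get()
        try:
            try:
                result = repr(eval(code, globals()))
            except SyntaxError:         # a statement rather than an expression
                exec(code, globals())
                result = "None"
        except Exception as exc:
            result = "error: %r" % exc
        reply.put(result)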
Sounds so simple that something like this must exist already... IPython has ipy_vimserver, but this is the wrong way around. There is also swank; while it seems very Lisp-specific, there is a JavaScript backend which looks very much like what I want... but searching finds almost nothing, other than some vague (possibly true) claims that SLIME doesn't work nicely with non-Lisp languages.
In short:
Does a project exist to send code from an emacs buffer to an existing Python process?
If not, how would you recommend I write such a thing (not being very familiar with elisp) - SWANK? IPython's server code? Simple TCP server from scratch?
comint provides most of the infrastructure for stuff like this. There are a bunch of good examples, like this or this.
It allows you to run a command, and provides things like comint-send-string to easily implement send-region type commands.
dbr/remoterepl on GitHub is a crude proof-of-concept of what I described in the question.
It lacks any kind of polish, but it mostly works: you import the replify.py module in the target interpreter, then evaluate emacs-remote-repl.el after fixing the stupid hardcoded path to client.py.
Doesn't shell-command give you what you are looking for? You could write a wrapper script or adjust the #! and sys.path appropriately.
Is it possible to use multiple languages alongside Ruby? For example, I have my application code in Ruby on Rails. I would like to calculate recommendations, and I would like to use Python for that. So essentially, the Python code would get the data from the DB, calculate the recommendations, and update the tables. Is it possible, and what do you guys think about its advantages/disadvantages?
Thanks
If you are offloading work to an exterior process, you may want to make this a webservice (ajax, perhaps) of some sort so that you have some sort of consistent interface.
Otherwise, you could always execute the python script in a subshell through ruby, using stdin/stdout/argv, but this can get ugly quick.
Depending on your exact needs, you can either call out to an external process (using popen, system, etc) or you can setup another mini-web-server or something along those lines and have the rails server communicate with it over HTTP with a REST-style API (or whatever best suits your needs).
In your example, you have a Ruby frontend website and a number-crunching Python backend service that builds up recommendation data for the Ruby site. A fairly nice solution is to have the Ruby site send an HTTP request to the Python service when it needs data updated (with a payload of information identifying what needs doing to what, or some such). The Python backend service can then crunch away and update the table; presumably your Ruby frontend will automatically pick up the changes during the next request and display them.
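A minimal sketch of such a backend service using Flask (the route, port, and payload fields are made up for illustration):

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/recommend', methods=['POST'])
def recommend():
    payload = request.get_json()
    user_id = payload['user_id']   # hypothetical field sent by the Rails app
    # ...crunch the numbers and update the recommendations table here...
    return jsonify({'status': 'done', 'user_id': user_id})

if __name__ == '__main__':
    app.run(port=5000)

The Rails side would then POST to the service (e.g. http://backend-host:5000/recommend) whenever it needs the data refreshed.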
I would use the system command, as such:

system("python myscript.py")
An easy, quick 'n' dirty solution, in case you have Python scripts and you want to execute them from inside Rails, is this:
%x[shell commands or python path/of/pythonscript.py #{ruby variables to pass on the script}]
or
`shell commands or python path/of/pythonscript.py #{ruby variables to pass on the script}` (with a backtick symbol at the beginning and the end).
Put the above inside a controller and it will execute.
For some reason, inside Ruby on Rails, the system and exec commands didn't work for me (exec crashed my application and system doesn't do anything).