I am a postdoc and I just finished a cool little scientific application in Python and want to share it with the world. It's a really useful tool for geneticists.
I'd really like to let people run this program through a CGI form interface. Since I'm not a student anymore, I no longer have webspace with a tidy little cgi-bin subdirectory that's hooked up perfectly.
I wrote a simple CGI Python program a few years ago, and was trying to use this as a template.
Here is my question:
My program needs to create temporary files (when run from the command line it saves images to a given path).
I've read a couple of tutorials on Apache, etc. and got lots of things running, but I can't figure out how to let my program write temporary files (I also don't know where these files would live, etc.). Any time I try to write to a file (in any manner) in my Python program, the CGI script "crashes" and doesn't give me anything useful back.
I am not extremely worried about security because the temporary files will only be outputs of the program (not the user input).
And while I'm asking (I'm assuming you're kind of a CGI ninja if you got this far and weren't bored), do you know how my CGI program can take a file argument without making a temporary file?
My previous approach to this was to simply take a list of text as an argument:
try:
    if item.file:
        data = item.file.read()
        if check:
            Tools_file.main(["ExeName", "-d", "-w " + data])
        else:
            Tools_file.main(["ExeName", "-s", "-d", "-w " + data])
    ...
I'd like to do this the right way! Cheers in advance.
Stack overflowingly yours,
Oliver
Well, the "right" way is probably to re-work things using an existing web framework like Django. It's probably overkill in this case. Don't underestimate the security aspects here. They're probably more relevant than you think.
All that said, you probably want to use Python's tempfile module from the standard library:
http://docs.python.org/library/tempfile.html
It'll generally write stuff out to /tmp/whatever if you're on unix. If your program is crashing only when run under apache (but runs fine when you execute it directly), check your permissions. Make sure your apache user has permission to write to wherever you've decided to store your temp files. Make sure the temp files are written with appropriate permissions (don't want to write a file that you can't read later on).
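For instance, a minimal sketch of the kind of thing I mean (the suffix and the image bytes are placeholders for whatever your tool actually produces):

import tempfile

# NamedTemporaryFile picks a unique name in the system temp dir (usually /tmp on unix).
# delete=False keeps the file around after the with-block so the CGI response can use it.
with tempfile.NamedTemporaryFile(suffix=".png", delete=False) as tmp:
    tmp.write(image_bytes)   # placeholder: the bytes your program generates
    image_path = tmp.name    # e.g. /tmp/tmpab12cd.png, pass this along as needed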
As Paul McMillan said, use tempfile:
import os
import tempfile

# mkstemp returns an OS-level file descriptor plus the path of the new file
temp, temp_filename = tempfile.mkstemp(text=True)
temp_output = os.fdopen(temp, 'w')
temp_output.write(something_or_other)
temp_output.close()
My personal opinion is that frameworks are a big time sink unless you really need the prebuilt functionality. CGI is far simpler and can probably work for your application, at least until it gets really popular.
Related
I saw this post on Medium, and wondered how one might go about managing multiple python scripts.
How I Hacked Amazon's Wifi Button
This describes a system where you need to run one or more scripts continuously to catch and react to events in your network.
My question: Let's say I had multiple Python scripts that I wanted to run while I work on other things. What approaches are available to manage these scripts? I have to imagine there is a better way than having a large number of terminal windows running each script individually.
I am coming back to python, and have no formal training in computer programming, so any guidance you can provide will be greatly appreciated.
Let's say I had multiple Python scripts that I wanted to run. What
approaches are available to manage these scripts? I have to imagine
there is a better way than having a large number of terminal windows
running each script individually.
If you have several .py files in a directory that you want to run, without having a specific order, you can do:
import glob

pyFiles = glob.glob('path/*.py')
for pyFile in pyFiles:
    execfile(pyFile)  # Python 2; on Python 3, use exec(open(pyFile).read()) instead
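If you would rather have them all running at the same time as separate processes (and without execfile, which no longer exists in Python 3), a rough sketch along these lines should work:

import glob
import subprocess
import sys

# Launch every script as its own process, then wait for all of them to finish.
procs = [subprocess.Popen([sys.executable, pyFile]) for pyFile in glob.glob('path/*.py')]
for proc in procs:
    proc.wait()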
Your system already runs a large number of background processes, with output to the system log or occasionally to a service-specific log file.
A common arrangement for quick and dirty deployments -- where you don't necessarily want to invest in making the scripts robust and well-behaved enough to run as proper services -- is to start the script inside screen or tmux. You can detach when you don't need to be looking at it, and can reattach at any time -- even from a remote login -- to view the output, or to troubleshoot.
Take a look at luigi (I've not used it).
https://github.com/spotify/luigi
These days (five years after the question was asked) a lot of people use Docker Compose. But that's a little heavyweight depending on what you want to do.
I just came across bugy's script server today. Maybe it could be a solution for you or somebody else.
(I am just trying to find something like a Tampermonkey script structure for Python.)
I have a program (not written by me) that I would like to use. It authenticates to an online service using a username and password that I would like to keep private. The authentication information may be passed to the program in two ways: either directly as command-line arguments or via a plaintext configuration file, neither of which seem particularly secure.
I would like to write a Python script to manage the launching of this program and keep my credentials away from the prying eyes of other users of the machine. I am running in a Linux environment. My concerns with the command-line approach are that the command line used to run the program is visible to other users via the /proc filesystem. Likewise, a plaintext configuration file could be vulnerable to reading by someone with the appropriate permissions, like a sysadmin.
Does anyone have any suggestions as to a good way to do this? If I had some way of obscuring the arguments used at the command line from the rest of the system, or a way to generate a configuration file that could be read just once (conceptually, if I could pipe the configuration data from the script to the program), I would avoid the situation where my credentials are sitting around potentially readable by someone else on the system.
Tough problem. If you have access to the source code of the program in question, you can overwrite the argv strings after startup so the credentials no longer show up in ps or /proc. On most flavors of *nix, this will work.
The config file approach may be better from a security perspective.
If the config file can be specified at run time, you could generate a temp file (see mkstemp), write the password there, and invoke the subprocess. You could even add a small delay (to give the subprocess time to do its thing) and then remove the config file.
Of course, the best solution is to change the program in question to read the password from stdin (but it sounds like you already knew that).
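For example, if the program could be made to accept the password on stdin, a rough sketch (the program name and flag here are made up) would be:

import getpass
import subprocess

password = getpass.getpass("Password: ")  # never written to disk or to argv

# "some_program --password-from-stdin" is hypothetical; substitute however the
# real program expects to receive credentials on standard input.
subprocess.run(["some_program", "--password-from-stdin"],
               input=password, text=True, check=True)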
About the only thing you can do in this situation is store the credentials in a text file and then deny all other users of the machine permissions to read or write it. Create a user just for this script, in fact. Encrypting doesn't do much because you still have to have the key in the script, or somewhere the script can read it, so it's the same basic attack.
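Something like this locks the file down to its owner (the path is just an example):

import os
import stat

creds_path = "/home/scriptuser/.myapp_credentials"  # example location

# Owner may read and write; group and others get nothing (mode 0o600).
os.chmod(creds_path, stat.S_IRUSR | stat.S_IWUSR)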
We have a process here at work where an input file is created by SAS. That input file is then read by a legacy application and that legacy application creates results. SAS then reads the results and summarizes them. A non-programmer usually takes care of these actions one by one. So the person just creates the input file. They know when it is done, and then they run the legacy application, and they know when that is done. Then they run the summary program.
I have a situation where my boss wants to run about 100 variations. I have access to 3 or 4 computers that share a network drive. Here is my plan: Using computer A, I start creating the 100 input files, one by one. Using computer B, I run the legacy program on each input file. I would like to start running the program as soon as its input is ready. So if input1 is done being created on computer A, I want to run the legacy application on input1 on computer B while input2 is being created on computer A. I know Python best, so I will probably use Python to glue all of this together.
Now I know there are a lot of things I could do, but I think this approach is adequate and will let me just get the job done for the time being. I don't really have time to design and test a very elegant solution that would take advantage of all the cores on all the machines or use a database to help me synchronize all of this. I appreciate suggestions like that, but I really just want to know: is there a way, in Python, to tell if a file on a network drive is open for writing by any application on any computer? If not, I will probably just come up with a simple way to create an indicator that the job is done, like creating a file "doneA" whose existence means the "input1" file is complete. For example, I would add a step to the SAS program that creates an indicator file after the input file has been created.
Sorry for the really long explanation, but I just don't want you to waste your time offering alternate solutions that I probably will not be able to implement.
I have read this question and its responses. I don't think I can use anything like lsof because these files would be open on different computers.
Write output to a temp file. When done writing, close it, then rename it to the name the other program is waiting for. That way, the file only appears when it is ready to be read.
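A rough sketch of that pattern (the file names and the write call are placeholders):

import os

tmp_name = "input1.tmp"   # write here first
final_name = "input1"     # the name the next step is waiting for

with open(tmp_name, "w") as f:
    f.write(generated_input)  # placeholder for however the input is actually produced

# The rename is atomic on the same filesystem, so "input1" only ever
# appears once it is a complete file.
os.replace(tmp_name, final_name)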
if there is a way, in python, to tell if a file on a network drive is open for writing by any application on any computer?
Not really.
Windows will let you open the file several times and really muck things up.
You have to use some explicit synchronization. Rather than synchronize each of the three steps 100 different ways, my preference is to do the following. Create 100 copies of the three-step dance. Don't worry about synchronization among the steps.
import subprocess

for variant in range(100):
    name = "variant_{0}.bat".format(variant)
    with open(name, "w") as script:
        # each batch file runs the full three-step dance for one variant
        print("run some SAS thing", file=script)
        print("run some legacy thing", file=script)
        print("run some SAS thing", file=script)
    subprocess.Popen("start {0}".format(name), shell=True)
I suspect that this will crush the life out of your processor by running all 100 in parallel.
As a practical matter, you probably don't want to start all 100 with subprocess.Popen() at once. What you probably want is several of these "start variant_x" batch files, each running a few variants in parallel. You can create some kind of master .bat file that runs a sequence of processing steps, where each step starts several parallel three-step variants.
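For example, a sketch that splits the 100 variants into a handful of batch files, each running its share one after another (the commands are still placeholders):

import subprocess

GROUPS = 4  # roughly 25 variants per batch file

for group in range(GROUPS):
    name = "group_{0}.bat".format(group)
    with open(name, "w") as script:
        for variant in range(group, 100, GROUPS):
            # each variant still runs its own three-step dance, sequentially
            print("run some SAS thing for variant {0}".format(variant), file=script)
            print("run some legacy thing for variant {0}".format(variant), file=script)
            print("run some SAS thing for variant {0}".format(variant), file=script)
    subprocess.Popen("start {0}".format(name), shell=True)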
I'm looking for a way to run tests on command-line utilities written in bash, or any other language.
I'd like to find a testing framework that would have statements like
setup:
    command = 'do_awesome_thing'
    filename = 'testfile'
    args = ['--with', 'extra_win', '--file', filename]
    run_command command args

test_output_was_correct:
    assert_output_was 'Creating awesome file "' + filename + '" with extra win.'

test_file_contains_extra_win:
    assert_file_contains filename 'extra win'
Presumably the base test case would set up a temp directory in which to run these commands, and remove it at teardown.
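Roughly, I picture a base class along these lines; this is only a sketch of what I have in mind (using unittest, subprocess, and tempfile), not an existing framework:

import shutil
import subprocess
import tempfile
import unittest

class CommandTestCase(unittest.TestCase):
    """Run each test inside a throwaway temporary directory."""

    def setUp(self):
        self.workdir = tempfile.mkdtemp()

    def tearDown(self):
        shutil.rmtree(self.workdir)

    def run_command(self, *args):
        # capture stdout/stderr so assertions can inspect them
        return subprocess.run(args, cwd=self.workdir, capture_output=True, text=True)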
I would prefer to use something in Python, since I'm much more familiar with it than with other plausible candidate languages.
I imagine that there could be something using a DSL that would make it effectively language-agnostic (or its own language, depending on how you look at it); however this might be less than ideal, since my testing techniques usually involve writing code that generates tests.
It's a bit difficult to google for this, as there is a lot of information on utilities which run tests, which is sort of the converse of what I'm looking for.
Support for doctests embedded in the output of command --help would be an extra bonus :)
Check out ScriptTest:
from scripttest import TestFileEnvironment

env = TestFileEnvironment('./scratch')
filename = 'testfile'

def test_script():
    env.reset()
    result = env.run('do_awesome_thing --with extra_win --file %s' % filename)
    # or pass a list like ['do_awesome_thing', '--with', 'extra_win', '--file', filename]
    assert result.stdout.startswith('Creating awesome file')
    assert filename in result.files_created
It's reasonably doctest-usable as well.
Well... what we usually do (and one of the wonders of O.O. languages) is to write all the components of an application before actually building the application. Every component can have a standalone way to be executed for testing purposes (usually from the command line), which also lets you think of each one as a complete program in its own right and reuse them in future projects. If what you want is to test the integrity of an existing program... well, I think the best way is to learn in depth how it works, or even deeper: read the source. Or even deeper: develop a bot to force-test it :3
Sorry, that's all I have .-.
I know that this question is old, but since I was looking for an answer, I figured I would add my own for anyone else who happens along.
Full disclaimer: The project I am mentioning is my own, but it is completely free and open source.
I ran into a very similar problem, and ended up rolling my own solution. The test code will look like this:
from CLITest import CLITest, TestSuite
from subprocess import CalledProcessError

class TestEchoPrintsToScreen(CLITest):
    '''Tests whether the string passed in is the string passed out'''

    def test_output_contains_input(self):
        self.assertNotIsInstance(self.output, CalledProcessError)
        self.assertIn("test", self.output)

    def test_ouput_equals_input(self):
        self.assertNotIsInstance(self.output, CalledProcessError)
        self.assertEqual("test", self.output)

suite = TestSuite()
suite.add_test(TestEchoPrintsToScreen("echo test"))
suite.run_tests()
This worked well enough to get me through my issues, but I know it could use some more work to make it as robust as possible (test discovery springs to mind). It may help, and I always love a good pull request.
Outside of any prepackaged testing framework that may exist but that I'm unaware of, I would just point out that expect is an awesome and sorely underutilized tool for this kind of automation, especially if you want to support multistage interaction, which is to say not just sending a command and checking output but responding to output with more input. If you wind up building your own system, it's worth looking into.
There is also a Python reimplementation of expect called pexpect. There may be some direct interfaces to the expect library available as well. I'm not a Python guy, so I couldn't tell you much about them.
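For what it's worth, a minimal pexpect sketch looks something like this (the command and prompt are made up):

import pexpect

# Spawn the command under test and drive it interactively.
child = pexpect.spawn("do_awesome_thing --interactive")  # hypothetical command
child.expect(r"Continue\? \[y/n\]")                      # wait for a prompt
child.sendline("y")                                      # answer it
child.expect(pexpect.EOF)
output = child.before.decode()                           # output printed before EOF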
file_read = open("/var/www/rajaneesh/file/_config.php", "r")
contents = file_read.read()
print contents
file_read.close()
The output is empty, but all the contents are there in that file. Please help me figure out how to read and replace a string in _config.php.
Usually, when there are issues like this, it is very useful to start the interactive shell and try out the commands one by one.
For instance, it could be that the file does not exist (see the comment from freiksenet), or you do not have privileges on it, or it is locked by another process.
If you execute the script in some system (like a web server, as the path could suggest), the exception could go to a log - or simply be swallowed by other components in the system.
On the contrary, if you execute it in the interactive shell, you can immediately see what the problem was, and possibly inspect the object (by using help(), dir() or the inspect module). By the way, this is also a good method for developing a script: just tinker with the idea in the shell, then put it all together.
While we are here, I strongly suggest you use IPython. It is an evolution of the standard shell, with powerful aids for introspection (just press Tab, or put a question mark after an object). Unfortunately, in recent weeks the site has often been unavailable, but there is a good chance you already have it installed on your system.
I copied your code onto my own system, and changed the filename so that it works on my system. Also, I changed the indenting (putting everything at the same level) from what shows in your question. With those changes, the code worked fine.
Thus, I think it's something else specific to your system that we probably cannot solve here (easily).
Would it be possible that you don't have read access to the file you are trying to open?
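You can check that quickly from the interactive shell, for example:

import os

path = "/var/www/rajaneesh/file/_config.php"
print(os.path.exists(path))       # is the path really there?
print(os.access(path, os.R_OK))   # can this user read it?
print(os.path.getsize(path))      # a size of 0 would explain the empty output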