atomic write with Python's print function - python

I want to do an atomic write with Python's print function. I already found this answer:
How can I do an atomic write to stdout in python?
But that answer uses sys.stdout.write. I want to be more flexible and use print instead. When I implemented the same code with print, my output turned out not to be correct, so apparently the difference matters.
from threading import Lock

lock = Lock()

def synced_out(*inp):
    with lock:
        print(*inp, file=args.out, sep=args.field_seperator)
Apparently it matters that I use print and not sys.stdout.write.
Full code here, in case you suspect this should not be possible and I might be doing something else wrong:
https://termbin.com/s9ox
In the case where corruption occurred, the file was sys.stdout, but I was using shell redirection to send it to a file anyway. However, I want to keep the --out flag so that people less familiar with "> file" redirection can still use the script, and so it keeps working in a pipe.
Python 3.5.2
Linux Ubuntu 16.04

Related

Equivalent of subprocess.call() in C [duplicate]

In C, how should I execute an external program and get its results as if it were run in the console?
If there is an executable called dummy that displays a 4-digit number in the command prompt when executed, I want to know how to run that executable and capture the 4-digit number it generated. In C.
popen() handles this quite nicely. For instance if you want to call something and read the results line by line:
#include <stdio.h>
#include <stdlib.h>

char buff[140];
FILE *in;

if (!(in = popen(somecommand, "r"))) {
    exit(1);
}
while (fgets(buff, sizeof(buff), in) != NULL) {
    /* buff is now the output of your command, line by line; do with it what you will */
}
pclose(in);
This has worked for me before, hopefully it's helpful. Make sure to include stdio in order to use this.
You can use popen() on UNIX.
This is not actually something ISO C can do on its own (by that I mean the standard itself doesn't provide this capability) - possibly the most portable solution is to simply run the program, redirecting its standard output to a file, like:
system ("myprog >myprog.out");
then use the standard ISO C fopen/fread/fclose to read that output into a variable.
This is not necessarily the best solution since that may depend on the underlying environment (and even the ability to redirect output is platform-specific) but I thought I'd add it for completeness.
There is popen() on unix as mentioned before, which gives you a FILE* to read from.
Alternatively on unix, you can use a combination of pipe(), fork(), exec(), select(), and read(), and wait() to accomplish the task in a more generalized/flexible way.
The popen library call invokes fork and pipe under the hood to do its work. Using it, you're limited to simply reading whatever the process dumps to stdout (which you could use the underlying shell to redirect). Using the lower-level functions you can do pretty much whatever you want, including reading stderr and writing stdin.
On Windows, see calls like CreatePipe() and CreateProcess(), with the standard-handle members of STARTUPINFO set to your pipes. You can get a file descriptor for read() calls by passing the pipe handle to _open_osfhandle(). Depending on the app, you may need to read from a separate thread, or it may be okay to block.

What is the python equivalent to $|=1 in Perl

Hi I'm very new to Perl and CGI.
I'm trying to convert a perl script to python.
The script contains $|=1. As I understand it, this makes Perl flush the output buffer after every write.
I am searching for a Python equivalent that does exactly the same thing.
Any suggestions?
I'd consider not worrying about porting this line for the time being, as flushing stdout after every print will likely be the least of your porting worries.
But if it is, you have many options:
Simply add the flush=True keyword argument to your print function call.
Run Python in "unbuffered" mode with the -u switch.
Re-open stdout in unbuffered mode (the final 0 below; note that Python 3 only allows unbuffered I/O in binary mode, so this form is Python 2 only):
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
Write a print wrapper function that shadows the builtin print and flushes stdout
Write a TextIOWrapper object that wraps sys.stdout and flushes
I'll try to find some links for the rest of the points and edit them in.

How can I redirect outputs from a Python process into a Rust process?

I am trying to spawn a Rust process from a Python program and redirect Python's standard output into its standard input. I have used the following function:
process = subprocess.Popen(["./target/debug/mypro"], stdin=subprocess.PIPE)
and tried to write to the subprocess using:
process.stdin.write(str.encode(json.dumps(dictionnaire[str(index)]))) #Write bytes of Json representation of previous track
I am not getting any errors but standard input in Rust doesn't seem to take any input and standard output isn't printing anything at all.
Here's the version of the Rust code I am currently running:
extern crate rustc_serialize;
use rustc_serialize::json::Json;
use std::fs::File;
use std::io;
use std::env;
use std::str;

fn main() {
    let mut buffer = String::new();
    let stdin = io::stdin();
    //stdin.lock();
    stdin.read_line(&mut buffer).unwrap();
    println!("{}", buffer);
    println!("ok");
}
process.stdin.write(str.encode(json.dumps(dictionnaire[str(index)]))) does not add a newline character, so on the Rust side I was never getting to the end of the line, which made the process block on read_line.
Adding it manually made everything work smoothly:
process.stdin.write(str.encode(json.dumps(dictionnaire[str(index)]) + "\n"))
This may be a problem on the Python side
subprocess.run(["cargo run -- " + str(r)], shell=True)
This assumes that you have a numeric file descriptor that remains open across fork and exec. Spawned processes may close file descriptors, either because they are marked CLOEXEC or due to explicit cleanup code before exec.
Before passing a numeric file descriptor as a string argument, you should make sure that it will remain valid in the new process.
A better approach is to use some process spawning API that allows you to explicitly map the file descriptors in the new process to open handles or an API that spawns a process with stdin/out tied to pipes.

Best way to pipe output of Linux sort

I would like process a file line by line. However I need to sort it first which I normally do by piping:
sort --key=1,2 data | ./script.py
What's the best way to call sort from within Python? Searching online, I see that subprocess or the sh module might be possibilities. I don't want to read the file into memory and sort in Python, as the data is very big.
It's easy. Use subprocess.Popen to run sort and read its stdout to get your data.

import subprocess

myfile = 'data'
sort = subprocess.Popen(['sort', '--key=1,2', myfile],
                        stdout=subprocess.PIPE)
for line in sort.stdout:
    your_code_here
sort.wait()
assert sort.returncode == 0, 'sort failed'
I think this page will answer your question
The answer I prefer, from @Eli Courtwright, is (all quoted verbatim):
Here's a summary of the ways to call external programs and the advantages and disadvantages of each:
os.system("some_command with args") passes the command and arguments to your system's shell. This is nice because you can actually run multiple commands at once in this manner and set up pipes and input/output redirection. For example,
os.system("some_command < input_file | another_command > output_file")
However, while this is convenient, you have to manually handle the escaping of shell characters such as spaces, etc. On the other hand, this also lets you run commands which are simply shell commands and not actually external programs.
http://docs.python.org/lib/os-process.html
stream = os.popen("some_command with args") will do the same thing as os.system except that it gives you a file-like object that you can use to access standard input/output for that process. There are 3 other variants of popen that all handle the i/o slightly differently. If you pass everything as a string, then your command is passed to the shell; if you pass them as a list then you don't need to worry about escaping anything.
http://docs.python.org/lib/os-newstreams.html
The Popen class of the subprocess module. This is intended as a replacement for os.popen but has the downside of being slightly more complicated by virtue of being so comprehensive. For example, you'd say
print Popen("echo Hello World", stdout=PIPE, shell=True).stdout.read()
instead of
print os.popen("echo Hello World").read()
but it is nice to have all of the options there in one unified class instead of 4 different popen functions.
http://docs.python.org/lib/node528.html
The call function from the subprocess module. This is basically just like the Popen class and takes all of the same arguments, but it simply waits until the command completes and gives you the return code. For example:
return_code = call("echo Hello World", shell=True)
http://docs.python.org/lib/node529.html
The os module also has all of the fork/exec/spawn functions that you'd have in a C program, but I don't recommend using them directly.
The subprocess module should probably be what you use.
I believe sort will read all the data into memory, so I'm not sure you will win anything, but you can use shell=True in subprocess and use a pipeline:
>>> subprocess.check_output("ls", shell = True)
'1\na\na.cpp\nA.java\na.php\nerase_no_module.cpp\nerase_no_module.cpp~\nWeatherSTADFork.cpp\n'
>>> subprocess.check_output("ls | grep j", shell = True)
'A.java\n'
Warning
Invoking the system shell with shell=True can be a security hazard if combined with untrusted input. See the warning under Frequently Used Arguments for details.

Get return value of ruby function in python

I have a ruby script that gets executed by a python script. From within the python script I want to access to return value of the ruby function.
Imagine, I would have this ruby script test.rb:
class TestClass
  def self.test_function(some_var)
    if case1
      puts "This may take some time"
      # something is done here with some_var
      puts "Finished"
    else
      # just do something short with some_var
    end
    return some_var
  end
end
Now, I want to get the return value of that function into my python script, the printed output should go to stdout.
I tried the following (example 1):
from subprocess import call
answer = call(["ruby", "-r", "test.rb", "-e", "puts TestClass.test_function('some meaningful text')"])
However, this gives me the whole output on stdout and answer is just the exit code.
Therefore i tried this (example 2):
from subprocess import check_output
answer = check_output(["ruby", "-r", "test.rb", "-e", "puts TestClass.test_function('some meaningful text')"])
This gives me the return value of the function in the else case (see test.rb) almost immediately. However, if case1 is true, answer contains the whole output, but while running test.rb nothing gets printed.
Is there any way to get the return value of the ruby function and the statements printed to stdout? Ideally, the solution requires no additional modules to install. Furthermore, I can't change the ruby code.
Edit:
Also tried this, but this also gives no output on stdout while running the ruby script (example 3):
import subprocess
process = subprocess.Popen(["ruby", "-r", "test.rb", "-e", "puts TestClass.test_function('some meaningful text')"], stdout=subprocess.PIPE)
answer = process.communicate()
I also think that this is no matter of flushing the output to stdout in the ruby script. Example 1 gives me the output immediately.
Another way of doing this, without calling the Ruby script as an external process, is to set up an XML-RPC (or JSON-RPC) server in the Ruby script and call the remote functions from a Python JSON-RPC (or XML-RPC) client. The value would then be available inside the Python program, and even the syntax used would be just as if you were dealing with a normal Python function.
Setting up such a server to expose a couple of functions remotely is very easy in Python, and should be just as easy in Ruby, but I have never tried it.
Check out http://docs.python.org/library/subprocess.html#popen-constructor and look into the ruby means of flushing stdout.
