Can you print a file from Python?

Is there some way of sending output to the printer instead of the screen in Python? Or is there a service routine that can be called from within Python to print a file? Maybe there is a module I can import that allows me to do this?

Most platforms—including Windows—have special file objects that represent the printer, and let you print text by just writing that text to the file.
On Windows, the special file objects have names like LPT1:, LPT2:, COM1:, etc. You will need to know which one your printer is connected to (or ask the user in some way).
It's possible that your printer is not connected to any such special file, in which case you'll need to fire up the Control Panel and configure it properly. (For remote printers, this may even require setting up a "virtual port".)
At any rate, writing to LPT1: or COM1: is exactly the same as writing to any other file. For example:
with open('LPT1:', 'w') as lpt:
    lpt.write(mytext)
Or:
lpt = open('LPT1:', 'w')
print(mytext, file=lpt)
print(moretext, file=lpt)
lpt.close()
And so on.
If you've already got the text to print in a file, you can print it like this:
with open(path, 'r') as f, open('LPT1:', 'w') as lpt:
    while True:
        buf = f.read(4096)  # read in chunks rather than the whole file at once
        if not buf: break
        lpt.write(buf)
Or, more simply (untested, because I don't have a Windows box here), this should work:
import shutil

with open(path, 'r') as f, open('LPT1:', 'w') as lpt:
    shutil.copyfileobj(f, lpt)
It's possible that just shutil.copyfile(path, 'LPT1:') would work, but the documentation says "Special files such as character or block devices and pipes cannot be copied with this function", so I think it's safer to use copyfileobj.

Python doesn't (unless you're using graphical libraries) ever send anything to "the screen". It writes to stdout and stderr, which are, as far as Python is concerned, just things that look like files.
It's simple enough to have Python direct those streams to anything else that looks like a file; for instance, see Redirect stdout to a file in Python?
On Unix systems, there are file-like devices that happen to be printers (/dev/lp*); on Windows, LPT1 serves a similar purpose.
Regardless of the OS, you'll have to make sure that LPT1 or /dev/lp* is actually hooked up to a printer somehow.
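As a minimal sketch of that idea (assuming a /dev/lp0 device on Unix; substitute LPT1: on Windows), you can temporarily point sys.stdout at the device so that plain print calls go to the printer:
import sys

# '/dev/lp0' is an assumed device path; adjust for your system.
with open('/dev/lp0', 'w') as printer:
    old_stdout = sys.stdout
    sys.stdout = printer
    try:
        print("This goes to the printer, not the screen.")
    finally:
        sys.stdout = old_stdout  # always restore the real stdout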

If you are on Linux, the following works if you have your printer set up and set as your default.
from subprocess import Popen, PIPE

# feed the string to the system's lpr command through a pipe
p = Popen(["lpr"], stdin=PIPE)
p.communicate(output_string.encode())

Related

Can I redirect the output of this program to my script?

I'm running a binary that manages a USB device. The binary, when executed, writes its results to a file I specify.
Is there any way in Python to redirect the output of the binary to my script instead of to a file? Otherwise I'm just going to have to open the file and read it as soon as this line of code runs.
import os

def rn_to_file(comport=3, filename='test.bin', amount=128):
    os.system('capture.exe {0} {1} {2}'.format(comport, filename, amount))
It doesn't work with subprocess either:
>>> from subprocess import check_output as qx
>>> cmd = r'C:\repos\capture.exe 3 text.txt 128'
>>> output = qx(cmd)
Opening serial port \\.\COM3...OK
Closing serial port...OK
>>> output
b'TrueRNG Serial Port Capture Tool v1.2\r\n\r\nCapturing 128 bytes of data...Done'
The actual content of the file is a series of 0s and 1s. This isn't redirecting the contents of the file to me; instead it just prints what would have been printed as output anyway.
It looks like you're using Windows, which has a special reserved filename CON which means to use the console (the analog on *nix would be /dev/stdout).
So try this:
subprocess.check_output(r'C:\repos\capture.exe 3 CON 128')
You might need to use shell=True in there, but I suspect you don't.
The idea is to make the program write to the virtual file CON which is actually stdout, then have Python capture that.
An alternative would be CreateNamedPipe(), which will let you create your own filename and read from it, without having an actual file on disk. For more on that, see: createNamedPipe in python
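For what that might look like, here is an untested sketch assuming the third-party pywin32 package; the pipe name and buffer sizes are arbitrary. The idea is to have capture.exe write to a named pipe and read the data back without an actual file on disk:
import subprocess
import win32pipe, win32file

PIPE_NAME = r'\\.\pipe\capture_out'  # hypothetical pipe name

# Create an inbound byte-mode pipe, then point capture.exe at it.
pipe = win32pipe.CreateNamedPipe(
    PIPE_NAME,
    win32pipe.PIPE_ACCESS_INBOUND,
    win32pipe.PIPE_TYPE_BYTE | win32pipe.PIPE_WAIT,
    1, 65536, 65536, 0, None)
proc = subprocess.Popen([r'C:\repos\capture.exe', '3', PIPE_NAME, '128'])
win32pipe.ConnectNamedPipe(pipe, None)  # wait for the writer to attach
hr, data = win32file.ReadFile(pipe, 65536)
win32file.CloseHandle(pipe)
proc.wait()
print(data)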

Python script stops writing but is shown as a running process

I am running a Python script in a screen session on a Linux server, and when I run the top command I can see it running, but the script has not written anything for hours. Does anybody know what the reason could be?
Here is my script:
import GeoIP
from netaddr import *

gi = GeoIP.GeoIP("/data/GeoIPOrg_20141202.dat", GeoIP.GEOIP_MEMORY_CACHE)
o = ...  # path to the output text file
for line in f:
    line = line.strip('\n')
    asn, ip, count = line.split('|')
    org = gi.org_by_addr(ip)
    start, end = gi.range_by_ip(ip)
    ran = list(IPRange(start, end))
    # ipcount = len(ran)
    ip_start, ip_end = IPAddress(start), IPAddress(end)
    n_start = int(ip_start)
    n_end = int(ip_end)
    range = (n_start, n_end)
    print("%s|%s|%s|%s|%s|%s" % (asn, range, len(ran), org, ip, count), file=o)
This could be a few things; it's hard to say without seeing how you're running the script and how you're initialising that file.
A definite possibility is that the file isn't being flushed (most relevantly, see the docs on the buffering argument of open(), which is probably being invoked somewhere in your code).
Either way, it's worth using Python (2.5+)'s with statement to handle file/resource management neatly and robustly instead of relying on print, e.g.:
with open("/my/output/path.txt", "w") as out_file:
# Rest of code
# ...
out_file.write("%s|%s|%s|%s|%s|%s\n" % (asn,range,len(ran),org,ip,count))
See this SO question for good examples of using the with statement.
You have two ways to achieve this.
You can change your code to use the with statement (context manager), as @Nick B has suggested in his answer, to open your file there.
Or you can set the buffering where you open the file to line buffering.
So where you say:
# I'm assuming you open your file like this, since your code is
# an incomplete snippet. Otherwise tell us how you open your file.
o = open('output_file.log', 'w')
You must say:
o = open('output_file.log', 'w', buffering=1) # enable line buffering
You should read the help for the open command by typing help(open) in the interactive Python shell. It explains a great deal about how buffering works in Python.
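A tiny demo of the difference (the filename is just a placeholder): with line buffering, each completed line reaches the file immediately, which is exactly what a long-running loop needs.
o = open('output_file.log', 'w', buffering=1)  # line buffered
o.write('this line hits the disk right away\n')
o.flush()  # with default buffering you'd need explicit flushes like this
o.close()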

How to open a file on Mac OS X 10.8.2 in Python

I am writing a Python script in Eclipse and want to open a file that is present in the Downloads folder. I am using Mac OS X 10.8.2. I tried with f=os.path.expanduser("~/Downloads/DeletingDocs.txt")
and also with
ss=subprocess.Popen("~/Downloads/DeletingDocs.txt",shell=True)
ss.communicate()
I basically want to open a file in a subprocess, to listen for changes in the opened file. But the file is not opening in either case.
from os.path import abspath, expanduser

filepath = abspath(expanduser("~/") + '/Downloads/DeletingDocs.txt')
print('Opening file', filepath)
with open(filepath, 'r') as fh:
    print(fh.read())
Take note of OS X file handling, though; the IO is a bit different depending on the file type.
For instance, a .txt file, which under Windows would be considered a "plain text file", may actually be a compressed data stream under OS X, because OS X tries to be "smart" about storage space.
This can literally ruin your day unless you know about it (been there, had the headache, moved on).
When you double-click a .txt file in OS X, the text editor normally pops up, and what it does is call os.open() instead of accessing the file on a lower level, which lets OS X's middle layers do disk-area -> decompression pipe -> file-handle -> text editor. But if you access the file object on a lower level, you'll end up opening the disk area where the file is stored, and if you print the data you'll get garbage, because it's not the data you'd expect.
So try using:
import os

fd = os.open("foo.txt", os.O_RDONLY)
print(os.read(fd, 1024))
os.close(fd)
And fiddle around with the flags.
I honestly can't remember which of the two opens the file as-is from disk (open() or os.open()), but one of them makes your data look like garbage, and sometimes you just get the pointer to the decompression pipe (giving you perhaps 4 bytes of data even though the text file is huge).
If it's tracking/catching updates on a file you want
from time import ctime
from os.path import getmtime, expanduser, abspath
from os import walk
for root, dirs, files in walk(expanduser('~/')):
    for fname in files:
        modtime = ctime(getmtime(abspath(root + '/' + fname)))
        print('File', fname, 'was last modified at', modtime)
And if the time differs from your last check, well then do something cool with it.
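A hedged sketch of that last step (the watched path is hypothetical): remember the previous mtime and react when it changes.
from os.path import getmtime
from time import sleep

WATCHED = '/tmp/watched.txt'  # hypothetical path to watch
last = getmtime(WATCHED)
while True:
    sleep(2)
    current = getmtime(WATCHED)
    if current != last:
        print('File changed, new mtime:', current)
        last = current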
For instance, you have Python libraries for working with:
.csv
.pdf
.odf
.xlsx
And MANY more, so instead of opening an external application as your first fix, try opening them via Python and modify to your liking instead, and only as a last resort (if even then) open external applications via Popen.
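For instance, a minimal sketch using the standard library's csv module ('data.csv' is a placeholder path) instead of launching an external application:
import csv

with open('data.csv', newline='') as fh:
    for row in csv.reader(fh):
        print(row)  # each row is a list of column values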
But since you requested it (sort of... erm), here's a Popen approach:
from subprocess import Popen, PIPE, STDOUT
from os.path import abspath, expanduser
from time import sleep
run = Popen('open -t ' + abspath(expanduser('~/') + '/example.txt'), shell=True, stdout=PIPE, stdin=PIPE, stderr=STDOUT)
##== Here's an example where you could interact with the process:
##== run.stdin.write('Hey you!\n')
##== run.stdin.flush()
while run.poll() is None:
    sleep(1)
Over-explaining your job:
This will print a file's contents every time it's changed.
import time

with open('test.txt', 'r') as fh:
    while 1:
        new_data = fh.read()
        if len(new_data) > 0:
            fh.seek(0)
            print(fh.read())
        time.sleep(5)
How it works:
The regular file opener with open() as fh will open the file and place a handle in fh; calling .read() without any parameters fetches the entire contents of the file.
This doesn't close the file; it simply leaves the "reading" pointer at the end of the file (let's say at position 50 for convenience).
So now your pointer is at character 50 in your file, at the end.
Whenever something is written to the file, more data lands after that point, so the next .read() fetches data from position 50 onward and comes back non-empty; we then move the "reading" pointer back to position 0 with .seek(0) and print all the data.
Combine that with os.path.getmtime() to find any reverted or same-size changes (replacing a single misspelled character, etc.).
I am hesitant to answer because this question was double-posted and is confusingly phrased, but... if you want to "open" the file in OSX using the default editor, then add the open command to your subprocess. This works for me:
subprocess.Popen("open myfile.txt",shell=True)
It is most likely a permissions issue. If you try your code in the Python interpreter, you will probably receive a "Permission denied" error from the shell when you call subprocess.Popen. If this is the case, you will need to make the file a minimum of 700 (it's probably 644 by default), and you'll probably want 744.
Try the code in the Python interpreter and check for the "Permission denied" error, if you see that then do this in a shell:
chmod 744 ~/Downloads/DeletingDocs.txt
Then run the script. To do it all in Python you can use os.system:
import os
import subprocess
filename = "~/Downloads/DeletingDocs.txt"
os.system("chmod 744 "+filename)
ss = subprocess.Popen(filename, shell=True)
ss.communicate()
The reason it "just works" in Windows is because Windows doesn't support file permission types (read, write and execute) in the same way as *nix systems (e.g. Linux, BSD, OS X, etc.) do.

Using Python codecs causes readline problems with sys.stdin?

I am writing a Python wrapper script (childscript.py) for a command line executable (childprogram). Another executable (parentprogram) spawns childscript.py and pipes output into childscript.py. childscript.py spawns childprogram with:
retval = subprocess.Popen(RUNLINE, shell=False, stdout=None, stderr=None, stdin=subprocess.PIPE)
If childscript.py does a series of reads from sys.stdin straight up using readline:
line = sys.stdin.readline()
I am able to get all the output from parentprogram and feed it into childprogram.
However, if I try to use the codecs module by doing:
sys.stdin = codecs.open(sys.stdin.fileno(), encoding='iso-8859-1', mode='rb', buffering=0)
or do a:
sys.stdin = codecs.getreader('iso-8859-1')(sys.stdin.detach())
and attempt to do the read, the read does not get all the output from parentprogram. If I force additional output from parentprogram, the missing bits come out along with part of the additional output that I pushed in. It looks like childscript.py is not reading everything that is being provided to it when I use the codecs module.
Am I doing something totally wrong? Without the codecs, childscript.py triggers an exception when presented with iso-8859-1 encoded stuff from parentprogram.
EDIT:
I discovered that Python v3.x "open" can take the encoding option as well. I changed the line to use "open" instead of "codecs.open":
sys.stdin = open(sys.stdin.fileno(), encoding='iso-8859-1', mode='r')
and it works as expected, without any of the problems that codecs.open produces. I've switched my script to use "open" instead.
If anybody can explain why the codecs module behaves differently, I'd appreciate it.
Flush the output channel in the parent.
Pipes are always buffered. The usual buffer size is 4KB. Unlike when the output is connected to the console, the standard runtime will not flush the output for you after each line.
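If the parent happens to be Python too, the writing-side fix is a minimal sketch like this (the payload is illustrative):
import sys

sys.stdout.write("a line for the child\n")
sys.stdout.flush()  # push it through the pipe now, don't wait for a full buffer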
Try this:
import sys
import os

fd = sys.stdin.fileno()
text = ''
while 1:
    try:
        raw_data = os.read(fd, 1024)
        if not raw_data:  # os.read() returns b'' at end of input
            break
        text += raw_data.decode('iso-8859-1')
        # now do something with text
    except KeyboardInterrupt:
        break
This way you avoid using readline(), which will issue an error for non-ASCII characters, but you can still use a non-blocking read.
The only problem is that you have to separate the input into lines yourself.
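One hedged way to do that splitting yourself (the per-line handling here is a stand-in): keep any trailing partial line in a buffer until the next read completes it.
buffered = ''

def feed(chunk):
    # Accumulate decoded text and emit only the completed lines.
    global buffered
    buffered += chunk
    lines = buffered.split('\n')
    buffered = lines.pop()  # incomplete last line, if any
    for line in lines:
        print(line)  # stand-in for real per-line processing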

Downloading text files with Python and ftplib.FTP from z/OS

I'm trying to automate downloading of some text files from a z/OS PDS, using Python and ftplib.
Since the host files are EBCDIC, I can't simply use FTP.retrbinary().
FTP.retrlines(), when used with open(file,'w').writelines as its callback, doesn't, of course, provide EOLs.
So, for starters, I've come up with this piece of code which "looks OK to me", but as I'm a relative Python noob, can anyone suggest a better approach? Obviously, to keep this question simple, this isn't the final, bells-and-whistles thing.
Many thanks.
#!python.exe
from ftplib import FTP

class xfile (file):
    def writelineswitheol(self, sequence):
        for s in sequence:
            self.write(s+"\r\n")

sess = FTP("zos.server.to.be", "myid", "mypassword")
sess.sendcmd("site sbd=(IBM-1047,ISO8859-1)")
sess.cwd("'FOO.BAR.PDS'")
a = sess.nlst("RTB*")
for i in a:
    sess.retrlines("RETR "+i, xfile(i, 'w').writelineswitheol)
sess.quit()
Update: Python 3.0; the platform is MinGW under Windows XP.
z/OS PDSs have a fixed record structure, rather than relying on line endings as record separators. However, the z/OS FTP server, when transmitting in text mode, provides the record endings, which retrlines() strips off.
Closing update:
Here's my revised solution, which will be the basis for ongoing development (removing built-in passwords, for example):
import ftplib
import os
from sys import exc_info

sess = ftplib.FTP("undisclosed.server.com", "userid", "password")
sess.sendcmd("site sbd=(IBM-1047,ISO8859-1)")
for dir in ["ASM", "ASML", "ASMM", "C", "CPP", "DLLA", "DLLC", "DLMC", "GEN", "HDR", "MAC"]:
    sess.cwd("'ZLTALM.PREP.%s'" % dir)
    try:
        filelist = sess.nlst()
    except ftplib.error_perm as x:
        if (x.args[0][:3] != '550'):
            raise
    else:
        try:
            os.mkdir(dir)
        except:
            continue
        for hostfile in filelist:
            lines = []
            sess.retrlines("RETR "+hostfile, lines.append)
            pcfile = open("%s/%s" % (dir, hostfile), 'w')
            for line in lines:
                pcfile.write(line+"\n")
            pcfile.close()
    print("Done: " + dir)
sess.quit()
My thanks to both John and Vinay
Just came across this question as I was trying to figure out how to recursively download datasets from z/OS. I've been using a simple Python script for years now to download EBCDIC files from the mainframe. It effectively just does this:
def writeline(line):
    file.write(line + "\n")

file = open(filename, "w")
ftp.retrlines("retr " + filename, writeline)
You should be able to download the file as a binary (using retrbinary) and use the codecs module to convert from EBCDIC to whatever output encoding you want. You should know the specific EBCDIC code page being used on the z/OS system (e.g. cp500). If the files are small, you could even do something like (for a conversion to UTF-8):
file = open(ebcdic_filename, "rb")
data = file.read()
converted = data.decode("cp500").encode("utf8")
file = open(utf8_filename, "wb")
file.write(converted)
file.close()
Update: If you need to use retrlines to get the lines and your lines are coming back in the correct encoding, your approach will not work, because the callback is called once for each line. So in the callback, sequence will be the line, and your for loop will write each individual character in the line to the output, each on its own line. So you probably want to do self.write(sequence + "\r\n") rather than the for loop. It still doesn't feel especially right to subclass file just to add this utility method, though; it probably belongs in a different class in your bells-and-whistles version.
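A hedged sketch of the fix described there (a separate class rather than a file subclass; the names are illustrative): retrlines() hands the callback one line at a time, so write one line plus its EOL per call.
class LineWriter:
    def __init__(self, path):
        self.fh = open(path, 'w')
    def writelinewitheol(self, line):
        self.fh.write(line + "\n")  # one line per callback invocation
    def close(self):
        self.fh.close()

# usage sketch:
# writer = LineWriter(hostfile)
# sess.retrlines("RETR " + hostfile, writer.writelinewitheol)
# writer.close()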
Your writelineswitheol method appends '\r\n' instead of '\n' and then writes the result to a file opened in text mode. The effect, no matter what platform you are running on, will be an unwanted '\r'. Just append '\n' and you will get the appropriate line ending.
Proper error handling should not be relegated to a "bells and whistles" version. You should set up your callback so that your file open() is in a try/except and retains a reference to the output file handle. Your write call should be in a try/except, and you should have a callback_obj.close() method which you use when retrlines() returns, to explicitly file_handle.close() (in a try/except). That way you get explicit error handling, e.g. messages like "can't (open|write to|close) file X because Y", and you save having to think about when your files will be implicitly closed and whether you risk running out of file handles.
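A hedged sketch of that callback object (class name and messages are illustrative, not from the original post):
class CallbackWriter:
    def __init__(self, path):
        self.path = path
        try:
            self.fh = open(path, 'w')
        except IOError as e:
            raise RuntimeError("can't open file %s because %s" % (path, e))

    def __call__(self, line):
        try:
            self.fh.write(line + "\n")
        except IOError as e:
            raise RuntimeError("can't write to file %s because %s" % (self.path, e))

    def close(self):
        try:
            self.fh.close()
        except IOError as e:
            raise RuntimeError("can't close file %s because %s" % (self.path, e))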
Python 3.x ftplib.FTP.retrlines() should give you str objects which are in effect Unicode strings, and you will need to encode them before you write them -- unless the default encoding is latin1 which would be rather unusual for a Windows box. You should have test files with (1) all possible 256 bytes (2) all bytes that are valid in the expected EBCDIC codepage.
[a few "sanitation" remarks]
You should consider upgrading your Python from 3.0 (a "proof of concept" release) to 3.1.
To facilitate better understanding of your code, use "i" as an identifier only as a sequence index and only if you irredeemably acquired the habit from FORTRAN 3 or more decades ago :-)
Two of the problems discovered so far (appending line terminator to each character, wrong line terminator) would have shown up the first time you tested it.
When you use ftplib's retrlines to download a file from z/OS, each line comes back without a '\n'.
This is different from the Windows ftp command's 'get xxx'.
We can rewrite the retrlines function as retrlines_zos in ftplib.py:
just copy the whole body of retrlines and change the 'callback' line to:
...
callback(line + "\n")
...
I tested and it worked.
You want a lambda function and a callback, like so:
def writeLineCallback(line, file):
    file.write(line + "\n")

ftpcommand = "RETR {}{}{}".format("'", zOsFile, "'")
filename = "newfilename"
with open(filename, 'w') as file:
    callback_lambda = lambda x: writeLineCallback(x, file)
    ftp.retrlines(ftpcommand, callback_lambda)
This will download the file 'zOsFile' and write it to 'newfilename'.
