I have composed an ArcPy script which is run via the Windows scheduler. The same script is loaded into a script tool so a user can run the process manually. I've used GetParameterAsText, with or's and not's, to hard-wire the standard variables if they are not specified:
ReportFolder = arcpy.GetParameterAsText(0)
if ReportFolder == '#' or not ReportFolder:
ReportFolder = "C:\\Data\\GIS"
The process runs and, while doing so, writes to a text file log, for example:
txtFile.write("= For ArcGIS 10.3.1: Date: " + str(timed) + '\n')
I'd like to record which method was used to execute the script: was it via the Windows scheduler, by the script tool via ArcGIS, or by a Python client like PyScripter?
Is anyone aware of some form of OS environment indicator that can be read by Python to tell these apart?
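For illustration, one hedged approach (not an ArcPy or OS feature, just a convention): have each launcher identify itself explicitly, and fall back to inspecting the interpreter. The --launcher flag below is made up for this sketch:

import sys

# The scheduled task's command line would be, e.g.:
#   python myscript.py --launcher=scheduler
# and the ArcGIS script tool could pass "script_tool" as an extra parameter.
launcher = 'unknown'
for arg in sys.argv[1:]:
    if arg.startswith('--launcher='):
        launcher = arg.split('=', 1)[1]

# sys.executable also offers a clue: a script tool runs under the Python
# interpreter bundled with ArcGIS rather than whatever is on PATH.
print("Launched via: {} ({})".format(launcher, sys.executable))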
I've been using the following shell command to read an image off a scanner named scanner_name and save it in a file named file_name:
scanimage -d <scanner_name> --resolution=300 --format=tiff --mode=Color 2>&1 > <file_name>
This has worked fine for my purposes.
I'm now trying to embed this in a Python script. What I need is to save the scanned image to a file, as before, and also capture any standard output (say, error messages) to a string.
I've tried
scan_result = os.system('scanimage -d {} --resolution=300 --format=tiff --mode=Color 2>&1 > {} '.format(scanner, file_name))
But when I run this in a loop (with different scanners), there is an unreasonably long lag between scans, and the images aren't saved until the next scan starts (the file is created as an empty file and is not filled until the next scanning command). All this happens with scan_result = 0, i.e. indicating no error.
The subprocess method run() has been suggested to me, and I have tried
with open(file_name, 'w') as scanfile:
    input_params = '-d {} --resolution=300 --format=tiff --mode=Color 2>&1 > {} '.format(scanner, file_name)
    scan_result = subprocess.run(["scanimage", input_params], stdout=scanfile, shell=True)
but this saved the image in some kind of unreadable file format.
Any ideas as to what may be going wrong? Or what else I can try that will allow me to both save the file and check the success status?
subprocess.run() is definitely preferred over os.system() but neither of them as such provides support for running multiple jobs in parallel. You will need to use something like Python's multiprocessing library to run several tasks in parallel (or painfully reimplement it yourself on top of the basic subprocess.Popen() API).
You also have a basic misunderstanding about how to run subprocess.run(). You can pass in either a string and shell=True or a list of tokens and shell=False (or no shell keyword at all; False is the default).
with_shell = subprocess.run(
    "scanimage -d {} --resolution=300 --format=tiff --mode=Color 2>&1 > {}".format(
        scanner, file_name), shell=True)
with open(file_name, 'wb') as write_handle:
    no_shell = subprocess.run([
        "scanimage", "-d", scanner, "--resolution=300", "--format=tiff",
        "--mode=Color"], stdout=write_handle)
You'll notice that the latter does not support redirection (because that's a shell feature) but this is reasonably easy to implement in Python. (I took out the redirection of standard error -- you really want error messages to remain on stderr!)
If you have a larger working Python program this should not be awfully hard to integrate with a multiprocessing.Pool(). If this is a small isolated program, I would suggest you peel off the Python layer entirely and go with something like xargs or GNU parallel to run a capped number of parallel subprocesses.
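For instance, a rough sketch of the Pool approach under the same assumptions (the scanner names and output file names are placeholders):

import subprocess
from multiprocessing.dummy import Pool  # thread pool; enough for waiting on subprocesses

def scan(job):
    scanner, file_name = job
    with open(file_name, 'wb') as write_handle:
        proc = subprocess.run(
            ["scanimage", "-d", scanner, "--resolution=300",
             "--format=tiff", "--mode=Color"],
            stdout=write_handle)
    return proc.returncode

# Hypothetical scanner/output pairs; at most 4 scans run concurrently.
jobs = [("scanner1", "out1.tiff"), ("scanner2", "out2.tiff")]
with Pool(4) as pool:
    results = pool.map(scan, jobs)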
I suspect the issue is that you're opening the output file and then running subprocess.run() within that context. This isn't necessary: the end result is that you open the file via Python, have the command open the file again via the OS, and then close the file via Python.
JUST run the subprocess, and let the scanimage command with 2>&1 > filename create the file (just as it would if you ran scanimage at the command line directly).
I think subprocess.check_output() is now the preferred method of capturing the output.
I.e.
from subprocess import check_output

# Command must be a list, with all parameters as separate list items.
# Note that shell redirections such as 2>&1 > file cannot be list items:
# without a shell they would simply be passed to scanimage as arguments.
# Instead, capture stdout (the image data) and write it out from Python.
command = ['scanimage',
           '-d', scanner,
           '--resolution=300',
           '--format=tiff',
           '--mode=Color']
scan_result = check_output(command)
with open(file_name, 'wb') as f:
    f.write(scan_result)
However (with both run and check_output), that shell=True is a big security risk, especially if the input_params come into the Python script externally. People can pass in unwanted commands and have them run in the shell with the permissions of the script.
Sometimes shell=True is necessary for the OS command to run properly, in which case the best recommendation is to use an actual Python module to interface with the scanner, versus having Python pass an OS command to the OS.
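For example, the third-party python-sane package wraps SANE directly; a minimal sketch, assuming that package is installed and that the option names below match your backend (scanner and file_name are the same variables as above):

import sane  # the python-sane package

sane.init()
dev = sane.open(scanner)   # device name, e.g. from sane.get_devices()
dev.mode = 'Color'         # option names/values depend on the backend
dev.resolution = 300
image = dev.scan()         # returns a PIL Image
image.save(file_name, format='TIFF')
dev.close()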
I am trying to run an external sample.py script located in the /path-to-scollector/collectors/0 folder from scollector.
scollector.toml:
Host = "localhost:0"
ColDir="//path-to-scollector//collectors//"
BatchSize=500
DisableSelf=true
command to run scollector:
scollector-windows-amd64.exe -conf scollector.toml -p
But I am not getting the sample.py metrics in the output. It is expected to run continuously and print output to the console. Also, when I run:
scollector-windows-amd64.exe -conf scollector.toml -l
my external collector is not listed.
In your scollector.toml, you should add one line as below:
Filter = ["sample.py"]
In your sample.py, you need this line:
#!/usr/bin/python
For running scollector on a Linux machine, the above solution works well, but on Windows it's a bit tricky, since scollector running on Windows can only identify batch files. So we need to do a little extra work for Windows.
Create an external collector:
It can be written in any language: Python, Java, etc. It contains the main code to get the data and print it to the console.
Example: my_external_collector.py
Create a wrapper batch script:
wrapper_external_collector.bat
Trigger my_external_collector.py inside wrapper_external_collector.bat:
python path_to_external/my_external_collector.py
You can pass arguments to the script as well. The only disadvantage is that we need to maintain two scripts. A minimal collector is sketched below.
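A sketch of what my_external_collector.py might contain (the metric name and value are made up; the output format is what scollector expects from external collectors):

#!/usr/bin/python
# Long-running external collector: the "0" folder means scollector
# expects it to keep running and emit metrics continuously.
# scollector reads one metric per line from stdout in the form:
#   metric.name <unix-timestamp> <value> [tag1=val1 ...]
import sys
import time

while True:
    print("sample.metric {} {}".format(int(time.time()), 42))
    sys.stdout.flush()  # flush so scollector sees each line immediately
    time.sleep(15)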
I want to run some command line scripts from within my Python program. These scripts generate some output files. I want to grab these output files from the subprocess call as objects in my Python program, while preventing them from being written to disk. The problem is I don't know how to do it, or whether that is even possible.
A simple example would look like this:
#foo.py
fout1 = open("temp1.txt","w")
fout2 = open("temp2.txt","w")
fout1.write("fout1")
fout2.write("fout2")
fout1.close()
fout2.close()
#test.py
import subprocess
process = subprocess.Popen(["python","foo.py"], ????????) #what arguments to use to grab temp1.txt and temp2.txt
print(process.??????) #how to access those files
I am familiar with subprocess.Popen so that is what the example code uses, but I am open to the use of other modules too if they could do it.
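One possible approach (a sketch, not a definitive answer): if foo.py cannot be changed to write to stdout, run it in a throwaway temporary directory and read its output files back into memory before the directory is deleted. The files still touch the disk briefly, but they never land in your working tree and are cleaned up automatically:

import subprocess
import tempfile
from pathlib import Path

script = Path("foo.py").resolve()  # absolute path, since cwd will change
with tempfile.TemporaryDirectory() as tmpdir:
    # foo.py writes temp1.txt and temp2.txt into the temp directory
    subprocess.run(["python", str(script)], cwd=tmpdir, check=True)
    outputs = {p.name: p.read_text() for p in Path(tmpdir).iterdir()}

print(outputs)  # {'temp1.txt': 'fout1', 'temp2.txt': 'fout2'}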
So I am creating an application that can connect printers with a Python GUI that runs PowerShell scripts in the background. I was wondering if there was a way I could pass a variable inputted from a Python widget into a PowerShell script that is being invoked by Python. This variable would be the name of the printer that I could specify in Python so that I do not have to create separate scripts for each printer.
My code in Python that calls upon the PS script:
def connect():
    if self.printerOpts.get() == 'Chosen Printer':
        subprocess.call(["C:\\WINDOWS\\system32\\WindowsPowerShell\\v1.0\\powershell.exe",'-ExecutionPolicy','Unrestricted', '.\'./ScriptName\';'])
PS script that connects printer to computer:
Add-Printer -ConnectionName \\server\printer -AsJob
Basically, I am wondering if I can pass a variable from Python into the "printer" part of my PS script so that I do not have to create a different script for each printer that I would like to add.
A better way to do this would be to do it completely in PowerShell or completely in Python.
What you're after is doable. You can pass it in the same way that you have passed -ExecutionPolicy Unrestricted, by ensuring that the PowerShell script is expecting the variable.
My Python is non-existent, so please bear with me if that part doesn't work.
Python
myPrinter # string variable in Python with printer name
subprocess.call(["C:\\WINDOWS\\system32\\WindowsPowerShell\\v1.0\\powershell.exe",'-ExecutionPolicy','Unrestricted', '.\'./ScriptName\';','-printer',myPrinter])
PowerShell
param(
    $printer
)
Add-Printer -ConnectionName \\server\$printer -AsJob
The way that worked for me was first to specify that I was passing a variable as a string in my PS script:
param([string]$path)
Add-Printer -ConnectionName \\server\$path
My PS script was not expecting this variable. In my Python script I had to first define my variable, named path, as a string, and then pass path at the end of my subprocess call.
path = "c"
subprocess.call(["C:\\WINDOWS\\system32\\WindowsPowerShell\\v1.0\\powershell.exe",'-ExecutionPolicy','Unrestricted', 'Script.ps1', path])
I'm trying to copy thousands of files to a remote server. These files are generated in real-time within the script. I'm working on a Windows system and need to copy the files to a Linux server (hence the escaping).
I currently have:
import os
os.system("winscp.exe /console /command \"option batch on\" \"option confirm off\" \"open user:pass#host\" \"put f1.txt /remote/dest/\"")
I'm using Python to generate the files but need a way to persist the remote connection so that I can copy each file, to the server, as it is generated (as opposed to creating a new connection each time). That way, I'll only need to change the field in the put option thus:
"put f2 /remote/dest"
"put f3 /remote/dest"
etc.
I needed to do this and found that code similar to this worked well:
from subprocess import Popen, PIPE

WINSCP = r'c:\<path to>\winscp.com'

class UploadFailed(Exception):
    pass

def upload_files(host, user, passwd, files):
    cmds = ['option batch abort', 'option confirm off']
    cmds.append('open sftp://{user}:{passwd}@{host}/'.format(host=host, user=user, passwd=passwd))
    cmds.append('put {} ./'.format(' '.join(files)))
    cmds.append('exit\n')
    with Popen(WINSCP, stdin=PIPE, stdout=PIPE, stderr=PIPE,
               universal_newlines=True) as winscp:  # might need shell=True here
        stdout, stderr = winscp.communicate('\n'.join(cmds))
    if winscp.returncode:
        # WinSCP returns 0 for success, so a non-zero code means the upload failed
        raise UploadFailed
This is simplified (and using Python 3), but you get the idea.
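A hypothetical call, with placeholder credentials and file names:

upload_files('host', 'user', 'secret', ['f1.txt', 'f2.txt'])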
Instead of using an external program (WinSCP) you could also use a Python SSH library like pyssh.
You would have to start a persistent WinSCP sub-process in Python and feed the put commands to its standard input continuously.
I do not have a Python example for this, but there's an equivalent JScript example:
https://winscp.net/eng/docs/guide_automation_advanced#inout
or C# example:
https://winscp.net/eng/docs/guide_dotnet#input
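A rough Python translation of the same idea might look like this (an untested sketch; the WinSCP path, credentials, and file names are placeholders):

import subprocess

# Keep one winscp.com process alive and feed it commands over stdin.
winscp = subprocess.Popen(
    [r'c:\<path to>\winscp.com', '/console'],
    stdin=subprocess.PIPE,
    universal_newlines=True)

winscp.stdin.write('open sftp://user:pass@host/\n')

for name in ['f1.txt', 'f2.txt', 'f3.txt']:  # files generated over time
    winscp.stdin.write('put {} /remote/dest/\n'.format(name))
    winscp.stdin.flush()  # push each command through as soon as it's ready

winscp.stdin.write('exit\n')
winscp.stdin.close()
winscp.wait()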
Though using the WinSCP .NET assembly via its COM interface from Python would be way easier:
https://winscp.net/eng/docs/library