Python not writing out

I have two questions about my script. First, how can I get it to write output to the file I open below? The script loops forever, and when I cancel it and open the file, the file is empty. Second, how can I use the assigned variables if I have to cancel the script before doing anything with them? Thanks!
import subprocess
import datetime

#open results file and assign to results variable, add append rights
results = open("results.txt", "a")

#Run until stopped
while 1:
    #Run tshark once (capture 20 probe requests); assign its output to blah
    blah = subprocess.check_output(["tshark -i mon0 -f \"subtype probe-req\" -T fields -e wlan.sa -e wlan_mgt.ssid -c 20"], shell=True)
    #split the blah variable by line
    splitblah = blah.split("\n")
    #repeat for each line; skip the last entry, which is the empty string
    #left after the trailing newline
    for value in splitblah[:-1]:
        #split each line by tab delimiter
        splitvalue = value.split("\t")
        #Assign variables to the split fields (fields are 0-indexed)
        MAC = str(splitvalue[0])
        SSID = str(splitvalue[1])
        time = str(datetime.datetime.now())
        #write and format output to results file
        results.write(MAC+" "+SSID+" "+time+"\r\n")

You should put a condition in your while statement, or the program will (indeed) never stop.
Also, data is not necessarily written to disk immediately after a someFileObject.write call; you need to call someFileObject.flush (or close the file) to ensure that.
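As an illustrative sketch (the file name and the fake record here are made up, not the asker's capture data), flushing after each write looks like this:

```python
# Minimal sketch: flush after each write so the data reaches the OS
# even if the loop is interrupted later.
results = open("results.txt", "a")
for record in ["aa:bb:cc:dd:ee:ff\tMyNetwork"]:
    mac, ssid = record.split("\t")
    results.write(mac + " " + ssid + "\n")
    results.flush()  # push Python's buffer to the OS now
results.close()
```

With a with block, close (and therefore a final flush) happens automatically when the block exits.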

Related

How to assign variable from a shell command to python script

I am trying to run a batch process with a job array in Slurm. I know the shell command to extract a value from the array (text files), but I have failed to assign it to a Python variable.
I need to assign the value to a variable in a Python Slurm script. I used a shell command to extract values from the file, but I get errors when assigning the result to the variable. I tried subprocess, os.system and os.popen.
Alternatively, is there a way to extract values from the text file directly as a Python variable?
start_date = os.system('$(cat startdate.txt | sed -n ${SLURM_ARRAY_TASK_ID}p)')
start_date = subprocess.check_output("$(cat startdate.txt | sed -n ${SLURM_ARRAY_TASK_ID}p)", shell=True)
start_date = os.popen('$(cat startdate.txt | sed -n ${SLURM_ARRAY_TASK_ID}p)').read()
start_date = '07-24-2004'
Don't use $(...). That will execute the command and then try to execute the command's output. You want the output sent back to Python, not re-executed by the shell.
start_date = subprocess.check_output("cat startdate.txt | sed -n ${SLURM_ARRAY_TASK_ID}p", shell=True)
Barmar is correct: the $(...) part is why you are not getting what you want. But the real question is why, when you are using Python, you would want to use cat and sed at all. Just open the file and pull out the information you want:
import os

with open("startdate.txt", "r") as fh:
    lines = fh.readlines()

# SLURM_ARRAY_TASK_ID is a string, and sed's line numbers are 1-based,
# so convert it and subtract one
task_id = int(os.environ['SLURM_ARRAY_TASK_ID'])
start_date = lines[task_id - 1].strip()
the .strip() part gets rid of the newline character.

Is there a way to have Windows task scheduler automatically respond to input() in Python script?

I'm trying to schedule a python script to run automatically on a Windows 10 machine. The script, when run alone, prompts the user for some input to use as it runs. I'd like to automatically set these inputs when the scheduler runs the .bat file. As an example:
test.py:
def main():
    name = input('What is your name? ')
    print(f'Hello, {name}. How are you today?')

main()
This works fine if I just run the script, but ideally I'd like to have the name variable passed to it from the .bat file.
test.bat:
"path\to\python.exe" "path\to\test.py"
pause
Any help would be greatly appreciated!
If you just want to give a single fixed input, you can do it like:
REM If you add extra spaces before `|` those will be passed to the program
ECHO name_for_python| "path\to\python.exe" "path\to\test.py"
Unfortunately, there is no good way of extending this to multiple lines. For that, you would use a file containing the lines you want to feed as input:
"path\to\python.exe" "path\to\test.py" < file_with_inputs.txt
If you want to have everything into a standalone script, you may do something like this:
REM Choose some path for a temporary file
SET temp_file=%TEMP%\input_for_my_script
REM Write input lines to file; use > for the first line so the file is cleared
REM Note: a digit right before > is parsed as a stream redirect (e.g. 1>),
REM so put the redirection at the start of the line instead
>%temp_file% ECHO input line 1
REM Use >> for remaining lines to append to file
>>%temp_file% ECHO input line 2
>>%temp_file% ECHO input line 3
REM Call program with input file
"path\to\python.exe" "path\to\test.py" < file_with_inputs.txt
REM Delete the temporary file
DEL %temp_file% /q
Obviously, this is assuming you cannot use the standard sys.argv (or extensions like argparse), which would be the more standard and convenient way to send arguments to a script.
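For comparison, a minimal sketch of the sys.argv route; the 'world' fallback and the argument handling here are illustrative, not the asker's actual script:

```python
import sys

def main(argv):
    # Use the first command-line argument as the name, with an
    # illustrative fallback when none is given.
    name = argv[1] if len(argv) > 1 else 'world'
    return f'Hello, {name}. How are you today?'

if __name__ == '__main__':
    print(main(sys.argv))
```

The .bat line then becomes something like "path\to\python.exe" "path\to\test.py" Alice, with no input redirection needed.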

Calling configuration file ID into Linux Command with Date Time from Python

I'm trying to write a script that produces the following output files in a folder (YYYYMMDDHHMMSS = current date and time) using a Linux command in Python, with the IDs taken from a configuration file:
1234_YYYYMMDDHHMMSS.txt
12345_YYYYMMDDHHMMSS.txt
123456_YYYYMMDDHHMMSS.txt
I have a config file with the list of ID's
id1 = 1234
id2 = 12345
id3 = 123456
I want to be able to loop through these in python and incorporate them into a linux command.
Currently, my linux commands are hardcoded in python as such
import subprocess
import datetime
now = datetime.datetime.now()
subprocess.call('autorep -J 1234* -q > /home/test/output/1234.txt', shell=True)
subprocess.call('autorep -J 12345* -q > /home/test/output/12345.txt', shell=True)
subprocess.call('autorep -J 123456* -q > /home/test/output/123456.txt', shell=True)
print now.strftime("%Y%m%d%H%M%S")
The datetime is defined but currently does nothing except print to the console, whereas I want to incorporate it into the output txt file name. I want to be able to write a loop to do something like this:
subprocess.call('autorep -J id1* -q > /home/test/output/123456._now.strftime("%Y%m%d%H%M%S").txt', shell=True)
subprocess.call('autorep -J id2* -q > /home/test/output/123456._now.strftime("%Y%m%d%H%M%S").txt', shell=True)
subprocess.call('autorep -J id3* -q > /home/test/output/123456._now.strftime("%Y%m%d%H%M%S").txt', shell=True)
I know that I need to use ConfigParser, and currently have this piece written, which simply prints the IDs from the configuration file to the console.
from ConfigParser import SafeConfigParser
import os

parser = SafeConfigParser()
parser.read("/home/test/input/ReportConfig.txt")

def getSystemID():
    for section_name in parser.sections():
        print
        for key, value in parser.items(section_name):
            print '%s = %s' % (key, value)
        print

getSystemID()
But as mentioned at the beginning of the post, my goal is to be able to loop through the IDs and incorporate them into my Linux command while adding the datetime format to the end of the file name. I'm thinking all I need is some kind of loop in the above function to get the output I want. However, I'm not sure how to get the IDs and the datetime into a Linux command.
So far you have most of what you need, you are just missing a few things.
First, I think using ConfigParser is overkill for this, but it's simple enough, so let's continue with it. Let's change getSystemID into a generator that yields your IDs instead of printing them; it's just a one-line change.
parser = SafeConfigParser()
parser.read('mycfg.txt')

def getSystemID():
    for section_name in parser.sections():
        for key, value in parser.items(section_name):
            yield key, value
With a generator we can use getSystemID in a loop directly; now we need to pass this on to the subprocess call.

# This is the string of the current time, what we add to the filename
now = datetime.datetime.now().strftime('%Y%m%d%H%M%S')

# Notice we can iterate over the ids / numbers directly
for name, number in getSystemID():
    print name, number
Now we need to build the subprocess call. The bulk of your problem above was knowing how to format strings; the syntax is described here.
I'm also going to make two notes on how you use subprocess.call. First, pass a list of arguments instead of a long string. This helps python know what arguments to quote so you don't have to worry about it. You can read about it in the subprocess and shlex documentation.
Second, you redirect the output using > in the command and (as you noticed) need shell=True for this to work. Python can redirect for you, and you should use it.
To pick up where I left off above, in the for loop:
for name, number in getSystemID():
    # Make the filename to write to
    outfile = '/home/test/output/{0}_{1}.txt'.format(number, now)
    # open the file for writing
    with open(outfile, 'w') as f:
        # notice the arguments are in a list;
        # stdout=f redirects output to the file f named outfile
        subprocess.call(['autorep', '-J', name + '*', '-q'], stdout=f)
You can insert the datetime using Python's string format method.
For example, you could create a new file with the 123456 prefix and the datetime stamp like this:
new_file = open("123456.{0}".format(datetime.datetime.now()), 'w+')
I am not sure if I understood what you are looking for, but I hope this helps.
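One caveat: str(datetime.datetime.now()) contains spaces and colons, which are awkward in file names. A small sketch using strftime to get the compact YYYYMMDDHHMMSS form from the question (the 123456 prefix is illustrative):

```python
import datetime

# str(datetime.datetime.now()) looks like '2012-05-01 12:00:00.000000';
# strftime gives the compact YYYYMMDDHHMMSS form instead.
stamp = datetime.datetime.now().strftime('%Y%m%d%H%M%S')
filename = '123456_{0}.txt'.format(stamp)
```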

bash script does not read lines when called from python script

I have two simple scripts - I am trying to pass some information (date as input into the python script) to the bash script. Here's the python one:
#!/usr/local/bin/python
import os
import sys
import subprocess
year = "2012"
month = "5"
month_name = "may"
file = open('date.tmp','w')
file.write(year + "\n")
file.write(month + "\n")
file.write(month_name + "\n")
file.close
subprocess.call("/home/lukasz/bashdate.sh")
And here's the bash one:
#!/bin/bash
cat /home/lukasz/date.tmp | \
while read CMD; do
echo -e $CMD
done
rm /home/lukasz/date.tmp
Python script works fine without issues. It calls the bash script but it looks like the while loop just does not run. I know the bash script does run overall because the rm command gets executed and the date.tmp file is removed. However if I comment out the subprocess call in python then run the bash script manually it works fine displaying each line.
Brief explanation of what I am trying to accomplish. I have a python script that exports a very large DB to CSV (almost 300 tables and a few gigs of data) which then calls the bash script to zip the CSVs into one file and move it to another location. I need to pass the month and year supplied to the python script to the bash script.
I believe that you need file.close() instead of file.close. With the latter, you're not actually closing the file since you don't call the method. Since you haven't actually closed the file yet, it might not be flushed and so the entire contents of the file might be buffered rather than written to disk.
As a side note, these things are taken care of automatically if you use a context manager:
with open('foofile','w') as fout:
    fout.write("this data")
    fout.write("that data")
#Sleep well tonight knowing that python guarantees your file is closed properly
do_more_stuff(blah,foo,bar,baz,qux)
Instead of writing a temp file, send the values of year, month, and month_name to the bash script as parameters. That is, in the Python code remove all the lines that mention file, and replace
subprocess.call("/home/lukasz/bashdate.sh")
with
subprocess.call(['/home/lukasz/bashdate.sh', year, month, month_name])
and in the bash script, replace the cat ... rm lines with (eg)
y=$1; m=$2; mn=$3
which puts the year, month, and month-name into shell variables y, m, and mn.
Maybe try adding shell=True to the call:
subprocess.call("/home/lukasz/bashdate.sh", shell=True)

Python subprocess to call Unix commands, a question about how output is stored

I am writing a python script that reads a line/string, calls Unix grep to search a query file for lines that contain the string, and then prints the results.
from subprocess import call

for line in infilelines:
    output = call(["grep", line, "path/to/query/file"])
    print output
    print line
When I look at my results printed to the screen, I will get a list of matching strings from the query file, but I will also get "1" and "0" integers as output, and line is never printed to the screen. I expect to get the lines from the query file that match my string, followed by the string that I used in my search.
call returns the process return code, not its output.
If using Python 2.7, use check_output.
from subprocess import check_output
output = check_output(["grep", line, "path/to/query/file"])
If using anything before that, use communicate.
import subprocess
process = subprocess.Popen(["grep", line, "path/to/query/file"], stdout=subprocess.PIPE)
output = process.communicate()[0]
This will open a pipe for stdout that you can read with communicate. If you want stderr too, you need to add "stderr=subprocess.PIPE" too.
This will return the full output. If you want to parse it into separate lines, use split.
output.split('\n')
I believe Python takes care of line-ending conversions for you, but since you're using grep I'm going to assume you're on Unix where the line-ending is \n anyway.
http://docs.python.org/library/subprocess.html#subprocess.check_output
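Relatedly, splitlines avoids the empty trailing element that split('\n') leaves behind when the output ends with a newline:

```python
# grep-style output usually ends with a newline, so split('\n')
# produces a trailing empty string; splitlines does not.
output = "match one\nmatch two\n"
assert output.split('\n') == ['match one', 'match two', '']
assert output.splitlines() == ['match one', 'match two']
```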
The following code works with Python >= 2.5:
from commands import getoutput
output = getoutput('grep %s path/to/query/file' % line)
output_list = output.splitlines()
Why would you want to execute a call to external grep when Python itself can do it? It is extra overhead, and your code will then depend on grep being installed. This is how you do a simple grep in Python with the "in" operator:
query = open("/path/to/query/file").readlines()
query = [i.rstrip() for i in query]

f = open("file")
for line in f:
    if line.rstrip() in query:
        print line.rstrip()
f.close()
