I wrote a Python script that executes some DB queries and saves the results of those queries to different CSV files.
Now, it's mandatory to rename these files with the production timestamp, so every hour I get a new file with a new name.
The script runs via a task scheduler every hour, and after saving my CSV files I need to automatically open the command prompt and execute some commands that include the CSV file names in their paths.
Is it possible to run cmd and pass it the path of the CSV file as a variable? In Python I name the file this way:
date_time_str_csv1 = now.strftime("%Y%m%d_%H%M_csv1")
I don't know how to supply the changing file name automatically when I call cmd.
If I understand your question correctly, one solution would be to simply execute the command-line command directly from the Python script.
You can use the subprocess module from Python (as also explained here: How do I execute a program or call a system command?).
For example:
csv_file_name = date_time_str_csv1 + ".csv"
subprocess.run(["cat", csv_file_name])
You can run a system cmd from within Python using os.system:
import os
os.system('command filename.csv')
Since the argument to os.system is a string, you can build it with your created filename above.
You can try using the subprocess library to get a list of the files in the folder as an array. This example uses the Linux shell:
import subprocess
output = subprocess.check_output('ls', shell=True)
arr = output.decode('utf-8').split('\n')
print(arr)
After this you can iterate to find the newest file and use that one as the variable.
First post so be gentle please.
I have a bash script running on a Linux server which does a daily sftp download of an Excel file. The file is moved to a Windows share.
An additional requirement has arisen in that I'd like to add the number of rows to the filename, which is also timestamped, so different each day. Ideally at the end, before the .xlsx extension.
After doing some research it would seem I may be able to do it all in the same script if I use Python and one of the Excel modules. I'm a complete noob in Python, but I have done some experimenting and have some working code using the Pandas module.
Here's what I have working in a test spreadsheet with a worksheet named mysheet, counting a column named code.
>>> excel_file = pd.ExcelFile(r'B:\PythonTest.xlsx')
>>> df = excel_file.parse('mysheet')
>>> df[['code']].count()
code 10
dtype: int64
>>> mycount = df[['code']].count()
>>> print(mycount)
code 10
dtype: int64
>>>
I have 2 questions please.
First, how do I pass today's filename into the Python script to run the count on, and how do I return the result to bash? Also, how do I return just the count value, e.g. 10 in the above example? I don't want the column name or dtype passed back.
Thanks in advance.
Assuming we put your python into a separate script file, something like:
# count_script.py
import sys
import pandas as pd
excel_file = pd.ExcelFile(sys.argv[1])
df = excel_file.parse('mysheet')
print(df[['code']].count().iloc[0])
We could then easily call that script from within the bash script that invoked it in the first place (the one that downloads the file).
TODAYS_FILE="PythonTest.xlsx"
# ...
# Download the file
# ...
# Pass the file into your python script (manipulate the file name to include
# the correct path first, if necessary).
# By printing the output in the python script, the bash subshell (invoking a
# command inside the $(...)) will slurp up the output and store it in the COUNT variable.
COUNT=$(python count_script.py "${TODAYS_FILE}")
# this performs a find/replace on $TODAYS_FILE, replacing the ending ".xlsx" with an
# underscore, then the count obtained via pandas, then tacks on a ".xlsx" again at the end.
NEW_FILENAME="${TODAYS_FILE/\.xlsx/_$COUNT}.xlsx"
# Then rename it
mv "${TODAYS_FILE}" "${NEW_FILENAME}"
You can pass command-line arguments to python programs, by invoking them as such:
python3 script.py argument1 argument2 ... argumentn
They can then be accessed within the script using sys.argv. You must import sys before using it. sys.argv[0] is the name of the python script, and the rest are the additional command-line arguments.
Alternatively you may pass it in stdin, which can be read in Python using normal standard input functions like input(). To pass input in stdin, in bash do this:
echo $data_to_pass | python3 script.py
To give output you can write to stdout using print(). Then redirect output in bash, to say, a file:
echo $data_to_pass | python3 script.py > output.txt
To get the count value within Python, you simply need to add .iloc[0] at the end to get the first value; that is:
df[["code"]].count().iloc[0]
You can then print() it to send it to bash.
I've got a task that is crushing my head. I have five .py files and I want to make a menu in a sixth .py so I can run any of them by entering a string in an input(), but I don't see a way to do that and I don't know if it's possible.
I have tried importing every file into the sixth file, but I don't even know how to start.
I would like it to be as simple as it sounds, yet I find it really hard.
If you just want to run them, then try this:-
import os
file_path = input("Enter the path of your file = ")
os.system(file_path)
If the file that you are trying to execute is not in the current
directory, i.e. doesn't exist in the same folder as the currently
executing Python file, then you have to provide its full path.
Path format: C:\Users\lmYoona\OneDrive\Desktop\example.py
If the Python file you are trying to execute is in the same directory as
the currently executing Python file, then the bare file name will also
work.
Path format: example.py
P.S.: I would only recommend this method if all you want is to execute the other Python file, rather than importing stuff from it.
I have a bunch of .html files in a directory that I am reading into a python program using PyCharm. I am using the (*) star operator in the following way in the parameters field of the run/debug configuration dialog box in PyCharm:
*.html
, but this doesn't work. I get the following error:
IOError: [Errno 2] No such file or directory: '*.html'
at the line where I open the file to read into my program. I think it's reading the "*.html" literally as a file name. I'd appreciate your help in teaching me how to use the star operator in this case.
Addendum:
I'm pretty new to Python and Pycharm. I'm running my script using the following configuration options:
Now, I've tried different variations of parameters here, like '*.html', "*.html", and just *.html. I also tried glob.glob('*.html'), but the code takes it literally and thinks that the file name itself is "glob.glob('*.html')" and throws an error. I think this is more of a Pycharm thing than understanding bash or python. I guess what I want is to make Pycharm pass all the files of the directory through that parameters field in the picture. Is there some way for me to specify to Pycharm NOT to consider the string of parameters literally?
The way the files are being handled is by running a for loop through the sys.argv list and calling a function on each file. The function simply uses the open() method to read the contents of the file into a string so I can pull patterns out of the text. Hope that fleshes out the problem a bit better.
Filename expansion is a feature of bash. So if you call your python script from the linux command line, it will work, just like if you would have typed out all of the filenames as arguments to your script. Pycharm doesn't have this feature, so you will have to do that by yourself in your python script using a glob.
import glob
import sys
files = glob.glob(sys.argv[-1])
To keep compatibility between bash and pycharm, you can use something like this:
import glob

globs = ['*.html', '*.css', 'script.js']
files = []
for g in globs:
    files.extend(glob.glob(g))
I have multiple arguments so this is what I did to allow for compatibility:
I have an argparse argument that returns an array of image file names. I check it as follows.
images = args["images"]
if len(images) == 1 and '*' in images[0]:
    import glob
    images = glob.glob(images[0])
I have a directory of CSV files that I want to import into MySQL. There are about 100 files, and doing a manual import is painful.
My command line is this:
mysqlimport -u root -ppassword --local --fields-terminated-by="|" data PUBACC_FR.dat
The files are all of type XX.dat, i.e. AC.dat, CP.dat, etc. I actually rename them first before processing them (via rename 's/^/PUBACC_/' *.dat). Ideally I'd like to be able to accomplish both tasks in one script: Rename the files, then run the command.
From what I've found reading, something like this:
import os

for filename in os.listdir("."):
    if filename.endswith(".dat"):
        os.rename(filename, "PUBACC_" + filename)
Can anyone help me get started with a script that will accomplish this, please? Read the file names, rename them, then for each one run the mysqlimport command?
Thanks!
I suppose something like the python code below could be used:
import subprocess
import os

if __name__ == "__main__":
    for f in os.listdir("."):
        if f.endswith(".dat"):
            subprocess.call("echo %s" % f, shell=True)
Obviously, you should change the command from echo to your command instead.
See http://docs.python.org/2/library/subprocess.html for more details of using subprocess, or see the possible duplicate.
Is it possible to loop through a set of selected files, process each, and save the output as new files using Apple Automator?
I have a collection of .xls files, and I've gotten Automator to
- Ask for Finder Items
- Open Finder Items
- Convert Format of Excel Files #save each .xls file to a .csv
I've written a python script that accepts a filename as an argument, processes it, and saves it as p_filename in the directory the script's being run from. I'm trying to use Run Shell Script with the /usr/bin/python shell and my python script pasted in.
Some things don't translate too well, though, especially since I'm not sure how it deals with python's open('filename','w') command. It probably doesn't have permissions to create new files, or I'm entering the command incorrectly. I had the idea to instead output the processed file as text, capture it with Automator, and then save it to a new file.
To do so, I tried to use New Text File, but I can't get it to create a new text file for each file selected back in the beginning. Is it possible to loop through all the selected Finder Items?
Why do you want this done in the folder of the script? Or do you mean the folder of the files you are getting from the Finder items? In that case just get the path for each file passed into Python.
When you run open('filename','w') you should thus pass in a full pathname. Probably what's happening is you are actually writing to whatever the current working directory happens to be (for Automator that may even be the root directory) rather than where you think you are.
Assuming you are passing your files to the shell command in Automator as arguments then you might have the following:
import sys, os

args = sys.argv[1:]
for a in args:
    p = os.path.dirname(a)
    # the question saves processed files as p_filename, so build that name here
    name = "p_" + os.path.basename(a)
    mypath = os.path.join(p, name)
    f = open(mypath, "w")