$ python --version
Python 3.6.8
I've written a script which has some command-line arguments. Initially, these worked without issue:
arg_parser = argparse.ArgumentParser()
arg_parser.add_argument(
    '-log',
    '--loglevel',
    default='info'
)
arg_parser.add_argument(
    '-lf',
    '--logfile',
    default='./logs/populate.log'
)
...
cl_options = arg_parser.parse_args()
...
I then changed the name of the "-log" short flag, and added another flag:
arg_parser = argparse.ArgumentParser()
arg_parser.add_argument(
    '-ll',
    '--loglevel',
    default='info'
)
arg_parser.add_argument(
    '-lf',
    '--logfile',
    default='./logs/populate.log'
)
arg_parser.add_argument(
    '-d',
    '--daemon',
    action='store_true'
)
...
cl_options = arg_parser.parse_args()
...
When I run the script now, the original set of arguments is still in effect: the "-log" flag keeps its old name and the "-d/--daemon" flag is missing:
$ python3 populate.py --daemon
usage: populate.py [-h] [-log LOGLEVEL] [-lf LOGFILE]
populate.py: error: unrecognized arguments: --daemon
Things I have tried:
- made sure I have checked out the proper git branch
- deleted the __pycache__ folder
- rebooted the machine the script runs on
- used reload() on the argparse module
If I look at the contents of the script I can see that the changes I've made are there, but they refuse to take effect.
I'm not a Python expert and I'm still learning, but I must be doing something wrong here. Can anyone point me in the right direction?
Thanks!
EDIT:
I have verified, as best I can, that the script is using the most current files:
Remote System (where script is running):
$ pwd
/opt/ise-web-rpt
$ ls populate.py
populate.py
$ git branch
* develop
main
$ sha256sum populate.py
2601cbb49f6956611e2ff50a1b1b90ba61c9c0686ed199831d671e682492be4b populate.py
Local System (where development happens):
$ git branch
* develop
main
$ sha256sum populate.py
2601cbb49f6956611e2ff50a1b1b90ba61c9c0686ed199831d671e682492be4b populate.py
As far as I can tell the script is the correct file and I'm on the correct branch in Git.
Stepping through this in pdb, it turned out the problem was caused by another Python file that populate.py imports.
Both files had argparse configured in exactly the same way, so initially there was no conflict. The imported file builds its own parser and parses the same command line, and since I had only added the new parameter to populate.py, the imported file's parser still had the old option set: --daemon was "unrecognized" to it. That's also why the flag names didn't appear to change; the usage message was coming from the imported file, not from the script I was trying to run. I added the new parameter to the args list in the second file and the script(s) were able to run.
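For illustration, a minimal sketch of the kind of pattern that can cause this; the module name helper.py is hypothetical, not the actual file:
# helper.py (hypothetical imported module)
import argparse

arg_parser = argparse.ArgumentParser()
arg_parser.add_argument('-log', '--loglevel', default='info')
arg_parser.add_argument('-lf', '--logfile', default='./logs/populate.log')
# parse_args() runs against populate.py's sys.argv as soon as helper is imported,
# so any flag that only populate.py knows about is rejected here first.
cl_options = arg_parser.parse_args()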
I now need to figure out how hierarchy works for argparse, but that's a separate issue. Thanks everyone for the input.
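As a pointer on the argparse hierarchy question: one common approach is to define the shared options once on a parent parser and build each script's parser from it with parents=[...]. A minimal sketch, assuming a hypothetical shared module named common_args.py:
# common_args.py (hypothetical): shared options, defined once
import argparse

def parent_parser():
    # add_help=False so each child parser can add its own -h/--help
    parent = argparse.ArgumentParser(add_help=False)
    parent.add_argument('-ll', '--loglevel', default='info')
    parent.add_argument('-lf', '--logfile', default='./logs/populate.log')
    return parent

# populate.py: extend the shared options and call parse_args() exactly once
arg_parser = argparse.ArgumentParser(parents=[parent_parser()])
arg_parser.add_argument('-d', '--daemon', action='store_true')
cl_options = arg_parser.parse_args()
With this layout, only the top-level script parses the command line; imported modules never call parse_args() themselves.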
Related
Context
Suppose one has a project with src/projectname/__main__.py, which can be executed using the following command with accompanying arguments:
python -m src.projectname -e mdsa_size3_m1 -v -x
Question
How would one run vulture while passing CLI arguments to the script on which vulture runs?
Approach I
When I run:
python -m src.projectname -e mdsa_size3_m1 -v -x
it throws the following error:
usage: vulture [options] [PATH ...]
vulture: error: unrecognized arguments: -e mdsa_size3_m1 -x
because vulture tries to parse the arguments meant for the script that is being run.
Notes
I am aware that normally one would run vulture on the script in its entirety, without narrowing the scope with arguments. However, in this case the arguments are required to specify the number of runs and the duration of the code execution.
One can hack around this issue by temporarily hard-coding the args, for example:
args = parse_cli_args()
args.experiment_settings_name = "mdsa_size3_m1"
args.export_images = True
process_args(args)
assuming one has such an args object. However, I thought this could perhaps be realised via the CLI, without temporarily modifying the code.
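Not part of the original post, but one alternative to the temporary hard-coding is to give the parser environment-variable fallbacks, so the script can run without any CLI flags at all. The option names below are inferred from the example command and may differ from the real ones:
import argparse
import os

def parse_cli_args(argv=None):
    parser = argparse.ArgumentParser()
    # Fall back to environment variables when the flags are not given on the CLI.
    parser.add_argument(
        "-e", "--experiment-settings-name",
        default=os.environ.get("EXPERIMENT_SETTINGS_NAME"),
    )
    parser.add_argument(
        "-x", "--export-images",
        action="store_true",
        default=os.environ.get("EXPORT_IMAGES") == "1",
    )
    return parser.parse_args(argv)
The experiment settings could then be supplied via EXPERIMENT_SETTINGS_NAME and EXPORT_IMAGES in the environment, leaving the command line free for whichever tool owns it.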
I have a python script that queries a database. I run it from the terminal with python3 myscript.py
I've added a cron task for it in my crontab file
*/30 9-17 * * 1-5 python3 /path/to/my/python/script\ directory\ space/myscript.py
The script imports a function from a module in the same directory; that function parses database login info from a database.ini file, also in the same directory. The database.ini is:
[postgresql]
host=my-db-host-1-link.11.thedatabase.com
database=dbname
user=username
password=password
port=10898
But currently cron writes the following to the file in my mail folder:
Section postgresql not found in the database.ini file
The section is clearly present in the database.ini file, so what am I missing here?
Instead of running "python3 myscript.py" in the directory where the script is present, try running it from some other directory (like your home directory). Most likely you will see the same issue.
Note that cron's current working directory differs between systems, so the safest method is to explicitly switch to the directory where your script is and run the command there:
cd /path/to/my/python/script\ directory\ space/ && python3 myscript.py
Try this: in the code that reads database.ini, build the path relative to the module's own location instead of the current working directory:
import os
...
# change
filename = 'database.ini'
# to
filename = os.path.join(os.path.dirname(__file__), 'database.ini')
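For completeness, a minimal sketch of what such a config reader could look like with the path anchored to the module's directory; the function name and structure are assumptions, not the asker's actual code:
import os
from configparser import ConfigParser

def load_db_config(section='postgresql'):
    # Resolve database.ini next to this module rather than relative to the
    # current working directory, which is different when run from cron.
    ini_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'database.ini')
    parser = ConfigParser()
    parser.read(ini_path)
    if not parser.has_section(section):
        raise Exception('Section {0} not found in the {1} file'.format(section, ini_path))
    return dict(parser.items(section))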
I want to execute a shell script without having to specify any additional arguments on the command line itself. Instead I would like to hard code the arguments, e.g. input file name and file path, in the shell script.
Toy shell script:
#!/bin/bash
time python3 path/to/pyscript/graph.py \
--input-file-path=path/to/file/myfile.tsv
So, when I run $ ./script.sh, the script should pass the input file information to the py script.
Can this be done? I invariably get the error "No such file or directory" ...
Note, I will deal with the arguments on the python script side using argparse.
EDIT
It turns out that the issue was caused by something I had omitted from my toy script above because I didn't think it could be the cause: a commented-out line in the middle of the command. The trailing backslash joins the python3 line to the comment line, so the option on the next line is no longer part of the command; bash then tries to execute --input-file-path=path/to/file/myfile.tsv as a command of its own, which is what produced the "No such file or directory" error.
Toy shell script Full Version:
#!/bin/bash
time python3 path/to/pyscript/graph.py \
# this commented out line prevents the script from running
--input-file-path=path/to/file/myfile.tsv
I suspect your script is correct but the file path is wrong. Maybe you forgot a leading forward slash. Anyway, make sure that path/to/pyscript/graph.py and path/to/file/myfile.tsv are correct.
A dummy example of how to call a Python script with hard-coded arguments from a Bash script:
$ cat dummy_script.py
import argparse
import os
import time
parser = argparse.ArgumentParser()
parser.add_argument("-i", "--input-file-path")
args = parser.parse_args()
if os.path.isfile(args.input_file_path):
    print(args.input_file_path, "is a file")
print("sleeping a second")
time.sleep(1)
$ cat time_python_script.sh
time python3 dummy_script.py --input-file-path=/etc/passwd
$ /bin/bash time_python_script.sh
/etc/passwd is a file
sleeping a second
real 0m1.047s
user 0m0.028s
sys 0m0.016s
I have a Python script which I want to start using an rc(8) script in FreeBSD. The Python script uses the #!/usr/bin/env python2 pattern for portability, since different *nixes put interpreter binaries in different locations on the filesystem.
The FreeBSD rc scripts will not work with this.
Here is a script that sets up a test scenario that demonstrates this:
#!/bin/sh
# Create dummy python script which uses env for shebang.
cat << EOF > /usr/local/bin/foo
#!/usr/bin/env python2.7
print("Hello foo")
EOF
# turn on executable bit
chmod +x /usr/local/bin/foo
# create FreeBSD rc script with command_interpreter specified.
cat << EOF > /usr/local/etc/rc.d/foo
#!/bin/sh
#
# PROVIDE: foo
. /etc/rc.subr
name="foo"
rcvar=foo_enable
command_interpreter="/usr/local/bin/python2.7"
command="/usr/local/bin/foo"
load_rc_config \$name
run_rc_command \$1
EOF
# turn on executable bit
chmod +x /usr/local/etc/rc.d/foo
# enable foo
echo "foo_enable=\"YES\"" >> /etc/rc.conf
Here follows a console log demonstrating the behaviour when executing the rc script directly. Note this works, but emits a warning.
# /usr/local/etc/rc.d/foo start
/usr/local/etc/rc.d/foo: WARNING: $command_interpreter /usr/local/bin/python2 != python2
Starting foo.
Hello foo
#
Here follows a console log demonstrating the behaviour when executing the rc script using the service(8) command. This fails completely.
# service foo start
/usr/local/etc/rc.d/foo: WARNING: $command_interpreter /usr/local/bin/python2 != python2
Starting foo.
env: python2: No such file or directory
/usr/local/etc/rc.d/foo: WARNING: failed to start foo
#
Why does service foo start fail?
Why does rc warn about the interpreter? Why does it not use the interpreter as specified in the command_interpreter variable?
I self-answered my question, but I'm hoping someone else will give a better answer for posterity.
The reason env(1) does not work here is that service(8) runs the rc script with a clean, minimal environment: PATH is reset to the base system directories, so env cannot find python2, which lives under /usr/local/bin. Running the rc script directly inherits the user's full PATH, which is why that case only warns. It seems that the popular env shebang pattern is actually an anti-pattern for rc scripts.
I do not have a cogent answer for the command_interpreter warning.
The command_interpreter warning is generated by the _find_processes() function in /usr/src/etc/rc.subr.
It cares about the interpreter because a service written in an interpreted language shows up in the ps output under the name of its interpreter, so rc.subr needs command_interpreter in order to find the process; the warning is telling you that the interpreter name it took from the script does not match $command_interpreter.
I am trying to use supervisor with perlbrew, but I cannot make it work. For perlbrew I just tried to set the environment variables, which went fine, but perhaps it is better to make a script that launches perlbrew and plackup. This is my configuration file:
[program:MahewinSimpleBlog]
command = perlbrew use perl-5.14.2 && plackup -E deployment -s Starman --workers=10 -p 4000 -a bin/app.pl -D
directory = /home/hobbestigrou/MahewinSimpleBlog
environment = PERL5LIB ='/home/hobbestigrou/MahewinBlogEngine/lib',PERLBREW_ROOT='/home/hobbestigrou/perl5/perlbrew',PATH='/home/hobbestigrou/perl5/perlbrew/bin:/home/hobbestigrou/perl5/perlbrew/perls/perl-5.14.2/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games',MANPATH='/home/hobbestigrou/perl5/perlbrew/perls/perl-5.14.2/man:',PERLBREW_VERSION='0.43',PERLBREW_PERL='perl-5.14.2',PERLBREW_MANPATH='/home/hobbestigrou/perl5/perlbrew/perls/perl-5.14.2/man',PERLBREW_SKIP_INIT='1',PERLBREW_PATH='/home/hobbestigrou/perl5/perlbrew/bin:/home/hobbestigrou/perl5/perlbrew/perls/perl-5.14.2/bin',SHLVL='2'
user = hobbestigrou
stdout_file = /home/hobbestigrou/mahewinsimpleblog.log
autostart = true
In the log I can see that it's not looking in the right place:
Error while loading bin/app.pl: Can't locate Type/Params.pm in @INC (@INC contains: /home/hobbestigrou/MahewinSimpleBlog/lib /home/hobbestigrou/MahewinBlogEngine/lib /etc/perl /usr/local/lib/perl/5.14.2 /usr/local/share/perl/5.14.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.14 /usr/share/perl/5.14 /usr/local/lib/site_perl .) at /home/hobbestigrou/MahewinBlogEngine/lib/MahewinBlogEngine/Article.pm line 5.
I do not see the problem; maybe perlbrew use does other things.
When you installed perlbrew, you added a command to your .bashrc. You're getting that message because that command wasn't run for the shell in question, since it's not an interactive shell.
Why don't you explicitly use /home/hobbestigrou/perl5/perlbrew/perls/perl-5.14.2/bin/perl instead of using perlbrew use?