Context
Suppose one has a project structure with src/projectname/__main__.py, which can be executed as a module with the following command and accompanying arguments:
python -m src.projectname -e mdsa_size3_m1 -v -x
Question
How would one run vulture whilst passing CLI arguments to the script on which vulture runs?
Approach I
When I run vulture with the script's arguments appended, e.g.:
python -m vulture src.projectname -e mdsa_size3_m1 -v -x
it throws the following error:
usage: vulture [options] [PATH ...]
vulture: error: unrecognized arguments: -e mdsa_size3_m1 -x
because vulture tries to parse the arguments intended for the script that is being run.
Notes
I am aware that one would normally run vulture on the script in its entirety, without narrowing down the scope with arguments. However, in this case the arguments are required to specify the number of runs/duration of the code execution.
One can hack around this issue by temporarily hardcoding the args manually, for example:
args = parse_cli_args()
args.experiment_settings_name = "mdsa_size3_m1"
args.export_images = True
process_args(args)
assuming one has such an args object. However, I thought perhaps this functionality could be realised from the CLI, without temporarily modifying the code.
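A related workaround, shown here only as a sketch (parse_cli_args and process_args are the project's own helpers from the snippet above), is to temporarily patch sys.argv inside __main__.py instead of hardcoding the parsed attributes:
import sys
# Temporarily inject the desired CLI arguments before parsing, so the module
# behaves as if they had been passed on the command line.
sys.argv = ["projectname", "-e", "mdsa_size3_m1", "-v", "-x"]
args = parse_cli_args()
process_args(args)
This still modifies the code temporarily, but it keeps the change to a single line and reuses the existing parser unchanged.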
Related
I have a shell script that calls various Python scripts and passes an argument it received on to those Python scripts. However, the Python scripts are currently not getting the argument: when I inspect sys.argv it instead shows something completely different. I have the code and the sys.argv output below:
Shell script (called run_everything.sh):
#!/bin/bash
source venv/bin/activate
python3 eval_otare_on_otare.py $1
Then I have a line in eval_otare_on_otare.py that prints the arguments passed:
print(sys.argv)
And I get the following list:
['eval_care_on_otare.py', 'run_everything.sh']
What can I do? sys.argv is clearly not getting what I want; I want it to return
['eval_care_on_otare.py', $1]
where $1 is the argument passed.
If your activate script messes up the value of $1, save it first.
#!/bin/bash
arg=$1
source ./venv/bin/activate
python3 eval_otare_on_otare.py "$arg"
Tangentially, note also: When to wrap quotes around a shell variable?
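If the wrapper ever needs to forward more than one argument, the same idea extends to saving all positional parameters first; a minimal sketch, reusing the script name from the question:
#!/bin/bash
# Save every positional parameter before sourcing anything that might change them.
args=("$@")
source ./venv/bin/activate
python3 eval_otare_on_otare.py "${args[@]}"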
I am trying to run Python code that I pull directly from a GitHub raw URL using the Python interpreter. The goal is to never have to keep the code on the file system and to run it directly from GitHub.
So far I am able to get the raw code from GitHub using the curl command, but since it is multi-line code, I get an error saying Python cannot find the file.
python 'curl https://github.url/raw/path-to-code'
python: can't open file 'curl https://github.url/raw/path-to-code': [Errno 2] No such file or directory
How do I pass a multi-line code block to the Python interpreter without having to write another .py file (which would defeat the purpose of this exercise)?
You need to pipe the code you get from cURL to the Python interpreter, something like:
curl https://github.url/raw/path-to-code | python -
UPDATE: cURL prints download stats to STDERR; if you want them silenced, you can use the -s flag when calling it:
curl -s https://github.url/raw/path-to-code | python -
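If the downloaded script itself expects command-line arguments, they can be appended after the - placeholder; per python --help, everything that follows the script designator is passed to the program in sys.argv (the flag and value below are just placeholders):
curl -s https://github.url/raw/path-to-code | python - --some-flag value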
There is no way to do this via the Python interpreter alone without first retrieving the script and then passing it to the interpreter.
The current Python command-line arguments can be listed with the --help argument:
usage: python [option] ... [-c cmd | -m mod | file | -] [arg] ...
Options and arguments (and corresponding environment variables):
-b : issue warnings about str(bytes_instance), str(bytearray_instance)
and comparing bytes/bytearray with str. (-bb: issue errors)
-B : don't write .pyc files on import; also PYTHONDONTWRITEBYTECODE=x
-c cmd : program passed in as string (terminates option list)
-d : debug output from parser; also PYTHONDEBUG=x
-E : ignore PYTHON* environment variables (such as PYTHONPATH)
-h : print this help message and exit (also --help)
-i : inspect interactively after running script; forces a prompt even
if stdin does not appear to be a terminal; also PYTHONINSPECT=x
-I : isolate Python from the user's environment (implies -E and -s)
-m mod : run library module as a script (terminates option list)
-O : optimize generated bytecode slightly; also PYTHONOPTIMIZE=x
-OO : remove doc-strings in addition to the -O optimizations
-q : don't print version and copyright messages on interactive startup
-s : don't add user site directory to sys.path; also PYTHONNOUSERSITE
-S : don't imply 'import site' on initialization
-u : force the binary I/O layers of stdout and stderr to be unbuffered;
stdin is always buffered; text I/O layer will be line-buffered;
also PYTHONUNBUFFERED=x
-v : verbose (trace import statements); also PYTHONVERBOSE=x
can be supplied multiple times to increase verbosity
-V : print the Python version number and exit (also --version)
when given twice, print more information about the build
-W arg : warning control; arg is action:message:category:module:lineno
also PYTHONWARNINGS=arg
-x : skip first line of source, allowing use of non-Unix forms of #!cmd
-X opt : set implementation-specific option
file : program read from script file
- : program read from stdin (default; interactive mode if a tty)
arg ...: arguments passed to program in sys.argv[1:]
If you want it all on one line, chain the two commands with && so the download finishes before Python runs:
curl https://github.url/raw/path-to-code --output some.file && python some.file
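Alternatively, the retrieval step can be done from within Python itself, so no separate curl call or temporary file is needed; a sketch using only the standard library, with the placeholder URL from the question:
import urllib.request

url = "https://github.url/raw/path-to-code"  # placeholder raw URL
with urllib.request.urlopen(url) as response:
    source = response.read().decode("utf-8")

# Compile and execute the downloaded source in the current process.
exec(compile(source, url, "exec"))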
I have an ECS task set up which, when run with a command override of ls, produces the expected result in my CloudWatch log stream: test.py. My script test.py takes one parameter. I am wondering how I can execute this script with python3 (which exists in my container) using the command override. Essentially, I want to execute the command:
python3 test.py hello
How can I do this?
Here's how I did something similar:
In your Dockerfile, make the command you want to run the last instruction. In your case:
CMD python3 test.py hello
To make it more extensible, use environment variables. For instance, do something like:
CMD ["python3", "test.py"]
But make the parameter come from an environment variable you pass into the container definition in your task.
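A minimal sketch of what test.py could look like under that approach, assuming a hypothetical environment variable named SCRIPT_ARG is set in the task's container definition:
import os
import sys

# Prefer the environment variable supplied by the container definition,
# falling back to a positional CLI argument if one was given.
arg = os.environ.get("SCRIPT_ARG") or (sys.argv[1] if len(sys.argv) > 1 else "hello")
print(f"received argument: {arg}")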
Deployment using Python throws an error:
I used Python code (it's your deploy.py) to deploy our proxy (our company proxy) onto the Apigee platform. I read http://apigee.com/docs/api-services/content/deploying-proxies-command-line
but it throws an error when I run:
python api-platform-samples-master/tools/deploy.py -n apikey -u "yusuf.karatoprak#mobgen.com:Welcome#2014" -o yusufkaratoprak123 -e test -p / -d sample-proxies
I would like to solve this situation. I added to the Python code but it is not working; it throws me the error: name 'ZipFile' is not defined.
The -d flag value needs to point to the directory that contains the /apiproxy directory for the sample you want to deploy. (In your command above, it appears that you are pointing at /sample-proxies rather than, for example, /sample-proxies/apikey.)
Try using the deploy scripts. There is one in each sample proxy directory. There's also a script, /setup/deploy_all.sh, if you want to deploy all sample proxies.
Make sure you update /setup/setenv.sh before running the deploy scripts.
The error is in how you are calling it from the command line. You have a space in one of the parameters you pass in, which needs to be put inside of quotes. Turn -u yusuf karatoprak:123 into -u "yusuf karatoprak:123"
Fixed command line call:
python api-platform-samples-master/tools/deploy.py -n weatherapi -u "yusuf karatoprak:123" -o yk123 -e test -p / -d simpleProxy
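Regarding the name 'ZipFile' is not defined error mentioned in the question: in Python that NameError usually just means the copy of deploy.py being run lacks the corresponding import, e.g.:
from zipfile import ZipFile  # must appear before any ZipFile(...) call in deploy.py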
In my Python script myscript.py I use argparse to parse command-line arguments. When I want to display the help information about the input arguments, I just do:
$ python myscript.py --help
If instead I want to use IPython to run my script, the help message won't be displayed. IPython will display its own help information:
$ ipython -- myscript.py -h
=========
IPython
=========
Tools for Interactive Computing in Python
=========================================
A Python shell with automatic history (input and output), dynamic object
introspection, easier configuration, command completion, access to the
system shell and more. IPython can also be embedded in running programs.
Usage
ipython [subcommand] [options] [files]
It's not so annoying, but is there a way around it?
You need to run your .py script inside IPython, something like this:
%run script.py -h
This is an IPython bug, corrected in https://github.com/ipython/ipython/pull/2663.
My 0.13 has this error; it is corrected in 0.13.2. The fix is in IPython/config/application.py, in Application.parse_command_line. This function looks for help and version flags (-h, -V) in sys.argv before passing things on to parse_known_args (hence the custom help formatting). In the corrected release, it checks sys.argv only up to the first --; before, it looked in the whole array.
For earlier releases, a fix is to define an alternate help flag in the script:
simple.py script:
import argparse, sys
print(sys.argv)
p = argparse.ArgumentParser(add_help=False) # turn off the regular -h
p.add_argument('-t')
p.add_argument('-a','--ayuda',action=argparse._HelpAction,help='alternate help')
print(p.parse_args())
Invoke with:
$ ./ipython3 -- simple.py -a
['/home/paul/mypy/argdev/simple.py', '-a']
usage: simple.py [-t T] [-a]
optional arguments:
-t T
-a, --ayuda alternate help
$ ./ipython3 -- simple.py -t test
['/home/paul/mypy/argdev/simple.py', '-t', 'test']
Namespace(t='test')