I have used both Python and C for a while. C is good in that I can use the Windows cmd (or anything like it) to compile files and easily read command-line arguments. However, the only thing I know of that runs Python is IDLE, which is like an interpreter, doesn't take command-line arguments, and is hard to work with. Is there anything like C's cmd and a compiler for Python 3.x?
Thanks
However, the only thing I know of that runs Python is IDLE, which is like an interpreter
You can still call python helloworld.py from a command line
and doesn't take command-line arguments
It's possible to read command-line arguments: if you run python helloworld.py Alex, you can read "Alex" using:
import sys
name = sys.argv[1] # Gives "Alex", argv[0] would be "helloworld.py"
a compiler for Python 3.x
py2exe supports Python 3
And finally, if you're looking to call commands from your Python code, there is a module called subprocess.
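For instance, a minimal sketch (it uses the running interpreter as the command so it works on Windows too; subprocess.run with capture_output needs Python 3.7+):

import subprocess
import sys

# Run a command and capture its output as text; check=True raises if it fails.
result = subprocess.run([sys.executable, "--version"],
                        capture_output=True, text=True, check=True)
print(result.stdout.strip())  # e.g. "Python 3.8.10"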
If I understand your question, you can do this in Python by importing the cmd and os modules. For example:
import os
import cmd
import readline  # imported for its side effect: line editing and history for input

class Console(cmd.Cmd):
    def __init__(self):
        cmd.Cmd.__init__(self)
        self.prompt = "=>> "
        self.intro = "Welcome to console!"  ## defaults to None

    ## Command definitions ##
    def do_hist(self, args):
        """Print a list of commands that have been entered"""
        print(self._hist)

    def do_exit(self, args):
        """Exits from the console"""
        return -1

    ## Command definitions to support Cmd object functionality ##
    def do_EOF(self, args):
        """Exit on system end of file character"""
        return self.do_exit(args)

    def do_shell(self, args):
        """Pass command to a system shell when line begins with '!'"""
        os.system(args)

    def do_help(self, args):
        """Get help on commands
        'help' or '?' with no arguments prints a list of commands for which help is available
        'help <command>' or '? <command>' gives help on <command>
        """
        ## The only reason to define this method is for the help text in the doc string
        cmd.Cmd.do_help(self, args)

    ## Override methods in Cmd object ##
    def preloop(self):
        """Initialization before prompting user for commands.
        Despite the claims in the Cmd documentation, Cmd.preloop() is not a stub.
        """
        cmd.Cmd.preloop(self)  ## sets up command completion
        self._hist = []        ## No history yet
        self._locals = {}      ## Initialize execution namespace for user
        self._globals = {}

    def postloop(self):
        """Take care of any unfinished business.
        Despite the claims in the Cmd documentation, Cmd.postloop() is not a stub.
        """
        cmd.Cmd.postloop(self)  ## Clean up command completion
        print("Exiting...")

    def precmd(self, line):
        """This method is called after the line has been input but before
        it has been interpreted. If you want to modify the input line
        before execution (for example, variable substitution), do it here.
        """
        self._hist += [line.strip()]
        return line

    def postcmd(self, stop, line):
        """If you want to stop the console, return something that evaluates to true.
        If you want to do some post command processing, do it here.
        """
        return stop

    def emptyline(self):
        """Do nothing on empty input line"""
        pass

    def default(self, line):
        """Called on an input line when the command prefix is not recognized.
        In that case we execute the line as Python code.
        """
        try:
            exec(line, self._locals, self._globals)
        except Exception as e:
            print(e.__class__, ":", e)

if __name__ == '__main__':
    console = Console()
    console.cmdloop()
This example is for working with command lines inside Python. However, you can also write your Python code and run the .py file from cmd with this command:
python <file_name>.py
Search for more examples, and also see the official doc: cmd — Support for line-oriented command interpreters.
You can use the Python interpreter itself to run your Python programs. Say you have a test.py file which you want to run; then you can use python test.py to run it. To be precise, you are not actually compiling the file; you are executing it line by line (well, call it interpreting).
For command-line arguments you can use sys.argv, as already mentioned in the answers above.
Depending on how you have it installed, you can probably just run Python scripts as-is by typing the script file name, for example:
C:\> test.py
If you have a relatively recent python installation, this will be associated with the python launcher (py.exe) and be equivalent to running
C:\> py test.py
If you only have one version of Python installed, this will run it with that version, but the Python launcher supports multiple ways to customize how it behaves when multiple versions of Python are installed.
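For example, if you had several versions installed, you could select one explicitly (the version numbers here are illustrative):

C:\> py -2 test.py
C:\> py -3.8 test.py

Without a version flag, py falls back to a configurable default.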
Additionally, as stated above, you can run the script with just the python command as well. The main difference is that running it with the python command lets you specify exactly which installation gets run; using the script name alone (or the py.exe version) lets the system select which installation gets run.
Related
I would like to simplify running Python scripts from within the Python shell. In Python 2, you could just use execfile(path). But in Python 3 it's harder to remember:
exec(open(path).read())
So I want a function to run a script, as simple as run(path). I can do this from the Python shell:
def run(filename):
    source = open(filename).read()
    code = compile(source, filename, 'exec')
    exec(code)
Then I can just type in run(path). This works great, and now I want to simplify things by defining the run function every time I launch Python 3.
I'd like to configure my ~/.zshenv with a zsh alias or function (say, py) that launches Python and tells it to define the run function. So that's where I'm stumped. What would such a zsh command look like? I've tried and failed with things like:
py () {
    python -c "\
    def run(filename): \
        source = open(filename).read() \
        code = compile(source, filename, 'exec') \
        exec(code)" \
}
But that fails miserably:
% py
File "<string>", line 1
def run(filename): source = open(filename).read() code = compile(source, filename, 'exec') exec(code)
IndentationError: unexpected indent
%
And even if it were to work, it would drop back out of the Python shell once the function was defined. Obviously I don't know what I'm doing here. Any pointers?
Also… please don't assume I have asked the right question. Usually on StackOverflow we try to avoid second-guessing posters' assumptions. But go ahead and second-guess mine if there's a better way to get Python to always define a run function when it is launched.
If you need this function only for interactive shells, you can write it in a file and then run python -i file_with_function.py. The -i option tells the interpreter to drop into an interactive session after whatever is in file_with_function.py runs.
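For example (the filename is just an illustration), put the run function from the question into file_with_function.py:

# file_with_function.py
def run(filename):
    source = open(filename).read()
    code = compile(source, filename, 'exec')
    exec(code)

Then python3 -i file_with_function.py leaves you at a >>> prompt with run already defined, and the zsh side could be as simple as alias py='python3 -i ~/file_with_function.py'.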
If you want it for any .py file that you will run non-interactively, you can do one of the following:
Create a package that contains your run function and install it for your interpreter. There is a detailed guide in the Python docs (https://packaging.python.org/tutorials/packaging-projects/).
Add the directory that contains a .py file with your function to the PYTHONPATH environment variable and import it from there.
In the command you are passing to Python (using python -c), you start the function definition with a couple of spaces. Spaces at the start of a line are significant in Python. You would get the same error if you opened a Python shell and wrote
def foo:
with several spaces in front: Python responds with IndentationError: unexpected indent.
In addition, your use of backslash characters makes all the linefeeds disappear, with the effect that you define the complete function on a single line. This is also invalid in Python, so even if you fixed the initial spaces, you would still get SyntaxError: invalid syntax.
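One way around both problems, as a sketch (assuming zsh and a python3 on your PATH), is to pass a plain multi-line single-quoted string, which preserves the linefeeds, and add the -i flag so the interpreter stays interactive after the definition runs:

py () {
    python3 -i -c '
def run(filename):
    source = open(filename).read()
    code = compile(source, filename, "exec")
    exec(code)
'
}

The def line starts at column zero inside the string, so there is no stray indentation, and -i keeps the session open with run already defined.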
Note that it is Python's -i option that loads initial definitions and then stays in the interactive interpreter. You can run
python -h
to get a list of the valid command line options.
Right up front, to be clear: I am not fluent in programming or Python, but I can generally accomplish what I need to with some research. Please excuse any bad formatting or structure, as this is my first post to a board like this.
I recently updated my laptop from Ubuntu 18.04 to 20.04. I had created a full system backup with Dejadup which, due to a missing file, could not be restored. Research brought me to a post on here from 2019 about manually restoring these files. The process called for two scripts, one to unpack and a second to reconstruct the files, both created by Hamish Downer.
The first,
"for f in duplicity-full.*.difftar.gz; do echo "$f"; tar xf "$f"; done"
seemed to work well and did unpack the files.
The second,
#!/usr/bin/env python3
import argparse
from pathlib import Path
import shutil
import sys"
is the start of a reconstructor script. Using a terminal from within the directory I am trying to rebuild, I enter the first line and hit return.
When I enter the second line of code, the terminal just "hangs" with no activity, and will only come back to the prompt if I double-click the cursor. I receive no errors or warnings. When I enter the third line of code
"from pathlib import Path"
and hit return, I then get an error:
from: can't read /var/mail/pathlib
The problem seems to originate with the import argparse command, and I assume it is due to a symlink.
argparse is located in /usr/local/lib/python3.8/dist-packages (1.4.0)
python3 is located in /usr/bin/
Python came with the Ubuntu 20.04 distribution package.
Any help with reconstructing these files would be greatly appreciated, especially in a batch (as this script is meant to do) versus trying to do them one file at a time.
Update: I have tried adding the "re-constructor" part of this script without success. This is a link to the script I want to use:
https://askubuntu.com/questions/1123058/extract-unencrypted-duplicity-backup-when-all-sigtar-and-most-manifest-files-are
Re-constructor script:
class FileReconstructor():

    def __init__(self, unpacked_dir, restore_dir):
        self.unpacked_path = Path(unpacked_dir).resolve()
        self.restore_path = Path(restore_dir).resolve()

    def reconstruct_files(self):
        for leaf_dir in self.walk_unpacked_leaf_dirs():
            target_path = self.target_path(leaf_dir)
            target_path.parent.mkdir(parents=True, exist_ok=True)
            with target_path.open('wb') as target_file:
                self.copy_file_parts_to(target_file, leaf_dir)

    def copy_file_parts_to(self, target_file, leaf_dir):
        file_parts = sorted(leaf_dir.iterdir(), key=lambda x: int(x.name))
        for file_part in file_parts:
            with file_part.open('rb') as source_file:
                shutil.copyfileobj(source_file, target_file)

    def walk_unpacked_leaf_dirs(self):
        """
        based on the assumption that all leaf files are named as numbers
        """
        seen_dirs = set()
        for path in self.unpacked_path.rglob('*'):
            if path.is_file():
                if path.parent not in seen_dirs:
                    seen_dirs.add(path.parent)
                    yield path.parent

    def target_path(self, leaf_dir_path):
        return self.restore_path / leaf_dir_path.relative_to(self.unpacked_path)

def parse_args(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument(
        'unpacked_dir',
        help='The directory with the unpacked tar files',
    )
    parser.add_argument(
        'restore_dir',
        help='The directory to restore files into',
    )
    return parser.parse_args(argv)

def main(argv):
    args = parse_args(argv)
    reconstuctor = FileReconstructor(args.media/jerry/ubuntu, args.media/jerry/Restored)
    return reconstuctor.reconstruct_files()

if __name__ == '__main__':
    sys.exit(main(sys.argv[1:]))
I think you are typing the commands into the shell instead of the Python interpreter. Check your prompt: the Python interpreter (started with python3) shows >>>.
Linux has an import command (part of ImageMagick) and understands import argparse, but it does something completely different:
import - saves any visible window on an X server and outputs it as an
image file. You can capture a single window, the entire screen, or any
rectangular portion of the screen.
This matches the described behaviour: import waits for a mouse click and then creates a large output file. Check whether there is a new file named argparse.
An executable script contains instructions to be processed by an interpreter, and there are many possible interpreters: several shells (bash and alternatives), languages like Perl, Python, etc., and also some very specialized ones like nft for firewall rules.
If you execute a script from the command line, the shell reads its first line. If it starts with the #! characters (called a "shebang"), the shell uses the program listed on that line. (Note: /usr/bin/env there is just a helper to find the exact location of a program.)
But if you want to use an interpreter interactively, you need to start it explicitly. The shebang line has special meaning only as the very first line of a script; anywhere else it is just a comment and is ignored.
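As a small illustration (the filename is made up), consider a two-line script:

#!/usr/bin/env python3
print("hello from the script")

After chmod +x hello.py, running ./hello.py makes the shell read the shebang and hand the whole file to Python. Typing the same lines one at a time at a bash prompt hands them to bash instead, which is exactly what produced the from: can't read /var/mail/pathlib error above.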
I have downloaded the MovieLens dataset from the hyperlink ml-100k.zip (it is a movie and user information dataset, found in the older-datasets section), and I have written simple MapReduce code like the below:
from mrjob.job import MRJob  # note: the mrjob base class is spelled MRJob

class MoviesByUserCounter(MRJob):
    def mapper(self, key, line):
        (userID, movieID, rating, timestamp) = line.split('\t')
        yield userID, movieID

    def reducer(self, user, movies):
        numMovies = 0
        for movie in movies:
            numMovies = numMovies + 1
        yield user, numMovies

if __name__ == '__main__':
    MoviesByUserCounter.run()
I use Python 3.5.3 and the PyCharm Community Edition as my Python IDE. I have tried, on the command line:
python my_code.py
but it doesn't work as I expected. Actually, it runs but just waits with no response; it has been running for a while and is still going. It writes on the command line only:
Running step 1 of 1...
reading from STDIN
How can I successfully pass the data (u.data, the data file in ml-100k.zip) to my Python program on the command line? If there are any other solutions, that would be great too.
Thanks in advance.
If I am not mistaken, you want to give your data as a command line argument.
You would want to do this using sys.argv. Barring that, look at a CLI (Command Line Interface) library.
Example:
import sys

def main(arg1, arg2, *args):
    # do something with the arguments
    pass

if __name__ == "__main__":
    # there are not enough args
    if len(sys.argv) < 3:
        raise SyntaxError("Too few arguments.")
    if len(sys.argv) != 3:
        # there are extra arguments beyond the first two
        main(sys.argv[1], sys.argv[2], *sys.argv[3:])
    else:
        # no extra args
        main(sys.argv[1], sys.argv[2])
In this way, you can pass arguments that are position-dependent, like normal Python positional arguments, for the first two, plus any extra arguments (which your code could parse as key=value pairs, e.g. a=1).
Example use:
Passing the data file as the first argument and a parameter as the second:
python my_code.py data.zip 0.1
If you will be using more than a few command-line parameters, it is worth spending time with a CLI library so that they are no longer position-dependent.
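For instance, a minimal argparse sketch (the option names are made up for illustration):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("data_file", help="path to the input data file")
parser.add_argument("--rate", type=float, default=0.1, help="an example option")
args = parser.parse_args()
print(args.data_file, args.rate)

Now python my_code.py u.data --rate 0.2 and python my_code.py --rate 0.2 u.data behave the same, because named options are position-independent.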
I have two scripts, the main is in Python 3, and the second one is written in Python 2 (it also uses a Python 2 library).
There is one method in the Python 2 script I want to call from the Python 3 script, but I don't know how to bridge the gap.
Calling different Python versions from each other can be done very elegantly using execnet. The following function does the trick:
import execnet

def call_python_version(Version, Module, Function, ArgumentList):
    gw = execnet.makegateway("popen//python=python%s" % Version)
    channel = gw.remote_exec("""
        from %s import %s as the_function
        channel.send(the_function(*channel.receive()))
    """ % (Module, Function))
    channel.send(ArgumentList)
    return channel.receive()
Example: A my_module.py written in Python 2.7:
def my_function(X, Y):
    return "Hello %s %s!" % (X, Y)
Then the following function calls
result = call_python_version("2.7", "my_module", "my_function",
                             ["Mr", "Bear"])
print(result)

result = call_python_version("2.7", "my_module", "my_function",
                             ["Mrs", "Wolf"])
print(result)
result in
Hello Mr Bear!
Hello Mrs Wolf!
What happened is that a 'gateway' was instantiated, waiting for an argument list with channel.receive(). Once the list came in, it was translated and passed to my_function. my_function returned the string it generated, and channel.send(...) sent the string back. On the other side of the gateway, channel.receive() catches that result and returns it to the caller. The caller finally prints the string produced by my_function in the Python 3 module.
You could run python2 using subprocess (a Python module), doing the following from Python 3:
#!/usr/bin/env python3
import subprocess
python3_command = "py2file.py arg1 arg2" # launch your python2 script
process = subprocess.Popen(python3_command.split(), stdout=subprocess.PIPE)
output, error = process.communicate() # receive output from the python2 script
where output stores whatever the python2 script wrote to its standard output.
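Note that output arrives as a bytes object; a small follow-on sketch (assuming the script writes UTF-8):

text = output.decode("utf-8")  # bytes -> str
print(text)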
Maybe too late, but there is one more simple option for calling python2.7 scripts:
script = ["python2.7", "script.py", "arg1"]
process = subprocess.Popen(" ".join(script),
shell=True,
env={"PYTHONPATH": "."})
I am running my Python code with Python 3, but I need a tool (ocropus) that is written in Python 2.7. I spent a long time trying all these options with subprocess and kept having errors, and the script would never complete. From the command line, it ran just fine. So I finally tried something simple that worked, but that I had not found in my searches online. I put the ocropus command inside a bash script:
#!/bin/bash
/usr/local/bin/ocropus-gpageseg $1
I call the bash script with subprocess.
command = [ocropus_gpageseg_path, current_path]
process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, error = process.communicate()
print('output', output, 'error', error)
This really gives the ocropus script its own little world, which it seems to need. I am posting this in the hope that it will save someone else some time.
It works for me if I call the Python 2 executable directly from a Python 3 environment:
import subprocess

python2_command = r'C:\Python27\python.exe python2_script.py arg1'
process = subprocess.Popen(python2_command.split(), stdout=subprocess.PIPE)
output, error = process.communicate()

python3_command = 'python python3_script.py arg1'
process = subprocess.Popen(python3_command.split(), stdout=subprocess.PIPE)
output, error = process.communicate()
I ended up creating a new function in the python3 script which wraps the python2.7 code. It correctly formats error messages created by the python2.7 code, extends mikelsr's answer, and uses run() as recommended by the subprocess docs.
in bar.py (python2.7 code):
def foo27(input):
    return input * 2
in your python3 file:
import ast
import subprocess
def foo3(parameter):
    try:
        return ast.literal_eval(subprocess.run(
            [
                "C:/path/to/python2.7/python.exe", "-c",  # run python2.7 in command mode
                "from bar import foo27;" +
                "print(foo27({}))".format(parameter)  # print the output
            ],
            capture_output=True,
            check=True
        ).stdout.decode("utf-8"))  # evaluate the printed output
    except subprocess.CalledProcessError as e:
        print(e.stdout)
        raise Exception("foo27 errored with message below:\n\n{}"
                        .format(e.stderr.decode("utf-8")))
print(foo3(21))
# 42
This works when passing in simple Python objects, like dicts, as the parameter, but it does not work for objects created by classes, e.g. numpy arrays. These have to be serialized and re-instantiated on the other side of the barrier.
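One workaround, sketched here under the same assumptions as above (bar.py and the hard-coded interpreter path), is to pass text in a format both versions understand, such as JSON; the helper name foo27_json is made up:

# Added to bar.py (runs under python2.7); a hypothetical JSON wrapper.
import json

def foo27_json(json_arg):
    data = json.loads(json_arg)               # re-create the object from text
    return json.dumps([x * 2 for x in data])  # serialize the result back

On the Python 3 side you would then pass json.dumps(parameter) in and call json.loads on the printed output instead of ast.literal_eval.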
Note: this was happening when running my Python 2.x software in the LiClipse IDE; when I ran it from a bash script on the command line, it didn't have the problem.
Here is a problem and solution I had when mixing Python 2.x and 3.x scripts.
I was running a Python 2.6 process and needed to call/execute a Python 3.6 script.
The PYTHONPATH environment variable was set to point to the 2.6 Python software, so the script was choking on the following:
File "/usr/lib64/python2.6/encodings/__init__.py", line 123
raise CodecRegistryError,\
This caused the 3.6 Python script to fail.
So instead of calling the 3.6 program directly, I created a bash script which cleared the PYTHONPATH environment variable:
#!/bin/bash
export PYTHONPATH=
## Now call the 3.6 python script
./36psrc/rpiapi/RPiAPI.py $1
Instead of calling them from Python 3, you could run them in a conda environment via a batch file, created as below:
call C:\ProgramData\AnacondaNew\Scripts\activate.bat
C:\Python27\python.exe "script27.py"
C:\ProgramData\AnacondaNew\python.exe "script3.py"
call conda deactivate
pause
I recommend converting the Python 2 files to Python 3:
https://pythonconverter.com/
I'm using a similar approach to call a Python function from my shell script:
python -c 'import foo; print foo.hello()'
But I don't know how to pass arguments to the Python script in this case, and whether it is possible to call a function with parameters from the command line.
python -c 'import foo, sys; print foo.hello(); print(sys.argv[1])' "This is a test"
or
echo "Wham" | python -c 'print(raw_input(""));'
There's also argparse (py3 link) that could be used to capture arguments; note that when running via -c, sys.argv[0] is set to '-c'.
A second library exists but is discouraged: getopt.
You don't want to do that in a shell script. Try this: create a file named "hello.py" and put the following code in it (assuming you are on a Unix system):
#!/usr/bin/env python
print "Hello World"
and in your shell script, write something like this:
#!/bin/sh
python hello.py
and you should see Hello World in the terminal.
That's how you should invoke a script in shell/bash.
To the main question: how do you pass arguments?
Take this simple example:
#!/usr/bin/env python
import sys
def hello(name):
    print("Hello, " + name)

if __name__ == "__main__":
    if len(sys.argv) > 1:
        hello(sys.argv[1])
    else:
        raise SystemExit("usage: python hello.py <name>")
We expect the length of sys.argv to be at least two. As in shell programming, the first element (index 0) is always the file name.
Now modify the shell script to pass in a second argument (the name) and see what happens.
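For example, a guess at the modified shell script:

#!/bin/sh
python hello.py World

which should print Hello, World.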
I haven't tested my code yet, but conceptually that's how you should go about it.
Edit: if you just have a line or two of simple Python code, sure, -c works fine and is neat. But if you need more complex logic, put the code into a module (.py file).
You need to create a .py file. Then you call it this way:
python file.py argv1 argv2
And then, inside your file, you have the sys.argv list, which gives you the list of arguments.
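A minimal sketch of such a file (names are illustrative):

# file.py
import sys

print(sys.argv)  # "python file.py argv1 argv2" prints ['file.py', 'argv1', 'argv2']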