Changing dir to execute program in linux - python

I am trying to execute a program from a specific directory:
import os
os.chdir("/home/user/a/b")
with cd("/home/user/a/b"):
    run("./program")
I get "cd is not defined"... any help appreciated, cheers.

I'm not sure what instructions you're following to get what you showed. There is no built-in function called cd or run in Python.
You can call a program in a specific directory using the subprocess module:
import subprocess
subprocess.call("./program", cwd="/home/user/a/b")
The cwd argument causes the call function to automatically switch to that directory before starting the program named in the first argument.
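On Python 3.5+, subprocess.run is a slightly more convenient equivalent; a minimal sketch using the paths from the question:
import subprocess

# Run ./program with its working directory set to /home/user/a/b;
# check=True raises CalledProcessError if the program exits with a non-zero status.
subprocess.run("./program", cwd="/home/user/a/b", check=True)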

It looks like you are trying to use functionality from Fabric. Make sure Fabric is installed, and that cd and run are imported from it. Something like:
from fabric.context_managers import cd
from fabric.operations import run
import os

os.chdir("/home/user/a/b")
with cd("/home/user/a/b"):
    run("./program")
Save your file as fabfile.py and, from the same directory, run it with:
fab -H localhost
For more information, check out the Fabric documentation.
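For reference, a minimal standalone fabfile along those lines could look like the sketch below (assuming Fabric 1.x, where cd and run are also re-exported from fabric.api; the task name deploy is made up for illustration):
from fabric.api import cd, run

def deploy():
    # Fabric's cd() only affects commands issued through run()/sudo()
    # inside the with block; it does not change the local working directory.
    with cd("/home/user/a/b"):
        run("./program")
You would then invoke the task with fab -H localhost deploy.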

Related

How do I create terminal commands in my python script?

I want to create terminal commands in my Python script. For example:
$ cd my_project_folder
$ --help (I use a command in this python folder)
outputs
$ this is the help command. You've activated it using --help. This is only possible because you cd'd to the folder and inputted this command.
I am looking for user-defined commands (ones that I've already defined in my Python functions), not built-in commands like 'ls' and 'pwd'.
You can use os.system, as below:
import os
os.system('your command')
Use the os module if you want to execute a command that is specific to bash or whatever shell you use:
import os
os.system("string command")
As Leemosh suggested, you can use the click module to make your own commands related to your scripts.
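A minimal sketch of such a click command (the command and argument names here are made up for illustration):
import click

@click.command()
@click.argument("name")
def greet(name):
    """Example command; click generates a --help option for it automatically."""
    click.echo("Hello, " + name)

if __name__ == "__main__":
    greet()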
You can also use the sys module to read arguments that you pass after your script name, for example:
$ python3 name_of_script.py arg_n
import sys

if sys.argv[1] == "command":
    # do something
    pass
What you are looking for is creating a setup.py file and defining some entry points. I'd recommend watching https://www.youtube.com/watch?v=GIF3LaRqgXo&ab_channel=CodingTech or https://www.youtube.com/watch?v=4fzAMdLKC5ks&ab_channel=NextDayVideo to better understand how to deal with setup.py.
from setuptools import setup

setup(
    name="Name_of_your_app",
    version="0.0.1",
    description="some description",
    py_modules=["app"],  # I think you don't need that
    package_dir={"": "src"},  # and probably you don't need this as well
    entry_points={"console_scripts": ["THIS_IS_THE_DEFINED_COMMAND=app:main"]},
)
app is the name of the .py file and main is the function you want to call.
app.py
def main():
    print("hello")

running bash script from python file

I have a bash script which changes the directory on my command line. This one:
#!/usr/bin/env python
cd /mnt/vvc/username/deployment/
I have a Python script which I wish to run after the directory changes to the desired path. The script:
#!/usr/bin/env python
import subprocess
import os
subprocess.call(['/home/username/new_file.sh'])
for folder in os.listdir(''):
    print('deploy_predict' + ' ' + folder)
I get this
File "/home/username/new_file.sh", line 2
cd /mnt/vvc/username/deployment/
^
SyntaxError: invalid syntax
Any suggestions on how I can fix this? Thanks in advance.
You need to explicitly tell subprocess which shell to run the sh file with. Probably one of the following:
subprocess.call(['sh', '/home/username/new_file.sh'])
subprocess.call(['bash', '/home/username/new_file.sh'])
However, this will not change the python program's working directory as the command is run in a separate context.
You want to do this to change the python program's working directory as it runs:
os.chdir('/mnt/vvc/username/deployment/')
But that's not really great practice. Probably better to just pass the path into os.listdir, and not change working directories:
os.listdir('/mnt/vvc/username/deployment/')
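Putting the two suggestions together, the calling script might end up looking something like this (a sketch, using the paths from the question):
import subprocess
import os

# Run the shell script with an explicit interpreter rather than relying on its shebang.
subprocess.call(['bash', '/home/username/new_file.sh'])

# List the deployment directory directly instead of changing the working directory.
for folder in os.listdir('/mnt/vvc/username/deployment/'):
    print('deploy_predict ' + folder)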

executing standalone fabric script by calling it by its name, without the .py extension

I have a fabric script called fwp.py that I run without calling it through fab, by using:
if __name__ == '__main__':
    # imports for standalone mode only
    import sys
    import fabric.main
    fabric.main.main(fabfile_locations=[__file__])
The thing is, I then have to call the script as fwp.py. I'd like to rename it to fwp so I can call it simply as fwp, but doing that results in:
Fatal error: Couldn't find any fabfiles!
Is there a way to make Python/Fabric import this file, despite the lack of a ".py" extension?
To reiterate and clarify:
I'm not using the "fab" utility (e.g. as fab task task:parameter); just calling my script as fwp.py task task:parameter, and would like to be able to call it as fwp task task:parameter.
Update
It's not a duplicate of this question. The question is not "How to run a stand-alone fabric script?", but "How to do so, while having a script without a .py" extension.
EDIT: Original answer corrected
The fabric.main.main() function automatically adds .py to the end of supplied fabfile locations (see https://github.com/fabric/fabric/blob/master/fabric/main.py#L93). Unfortunately that function also uses Python's import machinery to load the file so it has to look like a module or package. Without reimplementing much of the fabric.main module I don't think it will be possible. You could try monkey-patching both fabric.main.find_fabfiles and fabric.main.load_fabfiles to make it work.
Original answer (wrong)
I can get this to work unaltered on a freshly installed fabric package. The following executes with the filename fwp and executable permissions on Fabric 1.10.1, Python 2.7. I would just try upgrading Fabric.
#!/usr/bin/env python
from fabric.api import *
import fabric.main

def do():
    local('echo "Hello World"')

if __name__ == '__main__':
    fabric.main.main(fabfile_locations=[__file__])
Output:
$ ./fwp do
Hello World
Done

Python scripts for writing UNIX commands on terminal

I wish to write a Python script that allows me to navigate to and git pull multiple repositories. Basically the script should run the following on the command line:
cd
cd ~/Desktop/Git_Repo
git pull Git_Repo
I am not sure if there is a python library already out there that can perform such a task.
Use subprocess, os, and shlex. This should work, although you might require some minor tweaking:
import subprocess
import shlex
import os
# relative dir seems to work for me, no /'s or ~'s in front though
dir = 'Desktop/Git_Repo'
# I did get fetch (but not pull) to work
cmd = shlex.split('git pull Git_Repo')
# you need to give it a path to find git, this lets you do that.
env = os.environ
subprocess.Popen(cmd, cwd=dir, env=env)
Also, you'll need your login preconfigured.
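Since the goal is to pull multiple repositories, one way to extend this is to loop over a list of repository paths (a sketch; the directory names are placeholders):
import os
import subprocess

# Hypothetical list of repository directories to update.
repos = [
    os.path.expanduser('~/Desktop/Git_Repo'),
    os.path.expanduser('~/Desktop/Another_Repo'),
]

for repo in repos:
    # Run `git pull` inside each repository; check=True surfaces failures.
    subprocess.run(['git', 'pull'], cwd=repo, check=True)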

python scripts issue (no module named ...) when starting in rc.local

I'm facing a strange issue, and after a couple of hours of research I'm looking for help / an explanation.
It's quite simple: I wrote a CGI server in Python and I'm working with some libs, including pynetlinux for instance.
When I start the script from a terminal as any user, it works fine: no bug, no dependency issue. But when I try to start it from a script in rc.local, the following code produces an error.
import sys, cgi, pynetlinux, logging
It produces the following error:
Traceback (most recent call last):
File "/var/simkiosk/cgi-bin/load_config.py", line 3, in
import cgi, sys, json, pynetlinux, loggin
ImportError: No module named pynetlinux
Other dependencies produce similar issues. I suspect a few things, like the user executing the script in rc.local (normally root), and I've tried some fixes found on the web without success.
Can somebody help me?
Thanks in advance.
Regards,
Ollie314
First of all, you need to make sure the module you want to import is installed properly. You can check whether its name appears in pip list.
Then, in a python shell, check what the paths are where Python is looking for modules:
import sys
sys.path
In my case, the output is:
['', '/usr/lib/python3.4', '/usr/lib/python3.4/plat-x86_64-linux-gnu', '/usr/lib/python3.4/lib-dynload', '/usr/local/lib/python3.4/dist-packages', '/usr/lib/python3/dist-packages']
Finally, export those paths via the PYTHONPATH variable in /etc/rc.local (PATH only controls where executables are looked up; PYTHONPATH is what gets added to Python's sys.path). Here is an example of my rc.local:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing
export PYTHONPATH="$PYTHONPATH:/usr/lib/python3.4:/usr/lib/python3.4/plat-x86_64-linux-gnu:/usr/lib/python3.4/lib-dynload:/usr/local/lib/python3.4/dist-packages:/usr/lib/python3/dist-packages"
# Do stuff
exit 0
The path where your modules are installed is probably normally sourced by .bashrc or something similar, and .bashrc doesn't get sourced when the shell is not interactive. /etc/profile is one place you can put system-wide path changes. Depending on the Linux version/distro, it may use /etc/profile.d/, in which case /etc/profile runs all the scripts in /etc/profile.d; add a new shell script there with execute permissions and a .sh extension.
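As an alternative to editing rc.local or the shell profiles, you can extend the module search path at the top of the script itself; a sketch (the directory is an assumption taken from the sys.path output above, adjust it to wherever pynetlinux is actually installed):
import sys

# Assumed install location of the missing modules; adjust to your own path.
sys.path.append("/usr/local/lib/python3.4/dist-packages")

import cgi, logging, pynetlinux  # now resolvable even when launched from rc.local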

Categories

Resources