I'm very new to coding and software, so please bear with me. I am trying to execute a command in my Raspberry Pi terminal via a Python script, and I want to be able to run that script from the Desktop. The command to execute is:
(rpi-deep-pantilt-env) pi@raspberrypi:~/rpi-deep-pantilt $ rpi-deep-pantilt detect
So, as you can see, I need to cd into rpi-deep-pantilt, then activate my virtual environment, then run the command, all via the script.
A simple shell script to do what you ask:
#!/bin/sh
cd "$HOME"/rpi-deep-pantilt
. ./rpi-deep-pantilt-env/bin/activate
rpi-deep-pantilt detect "$@"
Most or all of this is probably unnecessary. I guess you could run
#!/bin/sh
d="$HOME"/rpi-deep-pantilt
exec "$d"/rpi-deep-pantilt-env/bin/rpi-deep-pantilt detect "$@"
though if your Python script has hardcoded file paths which require it to run in a specific directory, that's a bug which will prevent this from working.
The "$@" says to pass on any command-line arguments, so if you saved this script as pant, running pant blind mice will pass on the arguments blind and mice to your Python script. (Of course, if it doesn't accept additional command-line arguments after detect, this is unimportant, but I'd still pass them on so it can generate an error message, rather than have them silently ignored as if they were not there.)
I need to start a python program when the system boots. It must run in the background (forever) such that opening a terminal session and closing it does not affect the program.
I have demonstrated that by using tmux this can be done manually from a terminal session. Can the equivalent be done from a script that is run at bootup?
Then, where does one put that script so that it will be run on bootup?
Create a startup script that runs on boot and launches the desired Python program in the background.
Here are the steps:
Create a shell script that launches the Python program in the background:
#!/bin/sh
python /path/to/your/python/program.py &
Make the shell script executable:
chmod +x /path/to/your/script.sh
Add the script to the startup applications:
On Ubuntu, this can be done by going to the Startup Applications program and adding the script.
On other systems, you may need to add the script to the appropriate startup folder, such as /etc/rc.d/ or /etc/init.d/.
After these steps, the Python program should start automatically on boot and run in the background.
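Since the requirement is that the program survive closing the terminal, a slightly more robust launcher uses nohup and a log file. A minimal sketch, with sleep 30 standing in for the real program and /tmp paths chosen just for illustration:

```shell
#!/bin/sh
# nohup detaches the command from the controlling terminal, so closing
# the session no longer kills it; stdout/stderr are captured in a log.
nohup sleep 30 > /tmp/myprogram.log 2>&1 &
# Remember the PID so the program can be stopped later with `kill`.
echo $! > /tmp/myprogram.pid
```

Replace the sleep command with python /path/to/your/python/program.py to launch the real program.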
It appears that, in addition to putting a script that starts the program in /etc/init.d, one also has to put a link in /etc/rc2.d. From within /etc/rc2.d:
sudo ln -s /etc/init.d/scriptname.sh
sudo mv scriptname.sh S01scriptname.sh
The S01 prefix was just copied from all the other files in /etc/rc2.d.
I want to set up a cron job to run a Python script each day within a virtual environment, but it does not seem to execute.
If I were to run the program from terminal normally, I would type:
source ig/venv/bin/activate (to activate my virtual environment)
cd ig/mybot/src/ (to navigate to my directory)
python ultimate.py (to run my program)
So far this is my cron job. I've set it to run every minute just so I can see that it is working, but nothing happens.
* * * * * source ig/venv/bin/activate && cd ig/mybot/src/ && python ultimate.py
Edit: I have updated my program so no command-line prompts are required; I just need to run these three simple commands.
You can wrap this up in another Python script and have cron run that instead.
Have a look at the subprocess module.
Note that subprocess.call(['source', 'ig/venv/bin/activate']) will not work, because source is a shell builtin, not an executable; instead, have the wrapper invoke the virtual environment's interpreter directly, e.g. subprocess.call(['ig/venv/bin/python', 'ultimate.py'], cwd='ig/mybot/src'), which replaces both the activate and the cd steps.
Also, input("Enter the value") will prompt for user input, should your script still need any.
With the above, your problem can be solved pythonically.
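A minimal sketch of such a wrapper, assuming the question's ig/... paths live under the home directory (adjust them to your layout):

```python
import os
import subprocess

# Hypothetical paths based on the question; adjust to your setup.
venv_python = os.path.expanduser("~/ig/venv/bin/python")
script_dir = os.path.expanduser("~/ig/mybot/src")

# Running the venv's own interpreter is equivalent to activating the
# environment first, so no `source` step is needed, and cwd= replaces
# the `cd` step. The guard keeps this sketch from crashing on machines
# where these paths don't exist.
if os.path.exists(venv_python):
    subprocess.call([venv_python, "ultimate.py"], cwd=script_dir)
```

Point the crontab entry at this wrapper with an absolute path, to be safe.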
I'm not sure if it's a good idea, but you could do a script like this.
#!/usr/bin/env bash
PYTHON_PROJECT_DIR=/path/to/python/project/dir
pushd "${PYTHON_PROJECT_DIR}"
VALUES="first line of stream\nsecond line of stream\n"
pipenv run python /path/to/your/script.py < <(echo -e "$VALUES")
popd
pushd and popd move you around the directory stack: pushd adds a directory to the top of the stack and moves you there, and popd takes you back to your initial position.
Using pipenv lets you run the script inside the virtual environment (it's not that hard to configure). That way you'll pick up the environment variables from the project's .env file, and use only that project's (Python) dependencies.
If you pass the values like this, whenever the Python script requests a value from stdin it will read the values you echoed, line by line: the first line is the first input, and so on.
This could be a way.
Personally, whenever I set up cron jobs I like to run bash scripts directly, because I can add extra logging, so having the wrapper script doesn't seem that unreasonable.
Another thing you could do is get the path of the virtual environment's Python executable and use it as the interpreter, by replacing #!/usr/bin/env python with #!/path/to/pythons/virtual/env/interpreter.
But you won't get the .env variables that way (though there may be a way to load them).
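That swap looks like this on a tiny script; the interpreter path below is the placeholder from above, not a real path:

```python
#!/path/to/pythons/virtual/env/interpreter
# With the virtual env's interpreter on the shebang line, cron (or
# anything else) can run this file directly: the env's site-packages
# end up on sys.path with no `activate` step.
import sys

def interpreter_in_use():
    # When run directly (./ultimate.py), this is the shebang interpreter.
    return sys.executable

if __name__ == "__main__":
    print(interpreter_in_use())
```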
When I run my python script in the shell terminal, it works
sudo myscript.py --version=22 --base=252 --hosts="{'hostA':[1],'hostB':[22]}"
But when I run in Hudson and Jenkins, using Execute Shell step, somehow, the string --hosts="{'hostA':[1],'hostB':[22]}" is interpreted as
sudo myscript.py --version=22 --base=252 '--hosts="{'hostA':[1],'hostB':[22]}"'
How do we overcome this so that our script runs in Jenkins and Hudson?
It looks like you're encountering a battle-of-the-quoted-strings type situation due to your use of quotes directly and the fact that Jenkins is shelling out from a generated temp shell script.
I find the best thing to do with Jenkins is to create a bash script that wraps the commands you want to run (and you can also have it do any other environment-related setup you may want to have it do, such as source a config bash script that sets up other env vars).
You can have it accept arguments that may vary from the command line, which can be passed to it from the Jenkins config. Any interpolation then happens within the script -- you're just passing strings. (In particular, in this case, you'll have the hosts arg be "{'hostA':[1],'hostB':[22]}", which will be passed to the shell script and then interpolated, with the double quotes re-included.)
So, to that end, say you have a jenkins_run.sh script that runs a command like this:
myscript.py --version=$VERSION --base=$BASE --hosts="$HOSTS"
where the variables are passed in as arguments and assigned prior to that (you could use the positional parameters $1, $2 et al directly if you want).
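A sketch of what jenkins_run.sh could look like, wrapped in a function here so the quoting is easy to see; echo stands in for the real myscript.py invocation, and the names are assumptions:

```shell
#!/bin/bash
# Hypothetical wrapper; in Jenkins, pass the three values as arguments
# to this script from the Execute Shell step.
run_myscript() {
    local version="$1" base="$2" hosts="$3"
    # Double-quoting "$hosts" keeps {'hostA':[1],...} as ONE argument,
    # so no extra layer of quoting reaches the Python script.
    echo myscript.py --version="$version" --base="$base" --hosts="$hosts"
}

run_myscript "22" "252" "{'hostA':[1],'hostB':[22]}"
```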
I would also be cautious using sudo in conjunction with a Jenkins run, since that could end up prompting for input. I would instead recommend setting the permissions on the script such that the user under which Jenkins runs can simply execute it.
I am trying to write what should be a super simple bash script: basically, activate a virtual env and then change to the working directory. It's a task I do a lot, and condensing it to one command just made sense.
Basically ...
#!/bin/bash
source /usr/local/turbogears/pps_beta/bin/activate
cd /usr/local/turbogears/pps_beta/src
However, when it runs it just dumps me back to the shell: I am still in the directory I ran the script from, and the environment isn't activated.
All you need to do is to run your script with the source command. This is because the cd command is local to the shell that runs it. When you run a script directly, a new shell is executed which terminates when it reaches the script's end of file. By using the source command you tell the shell to directly execute the script's instructions.
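The difference is easy to demonstrate with a throwaway script that only does a cd, run both ways:

```shell
#!/bin/bash
start_dir=$(pwd)

# A throwaway script whose only job is to change directory.
cat > /tmp/goto_tmp.sh <<'EOF'
cd /tmp
EOF

bash /tmp/goto_tmp.sh      # runs in a child shell: its cd dies with it
after_run=$(pwd)

. /tmp/goto_tmp.sh         # sourced into THIS shell: the cd sticks
after_source=$(pwd)

echo "run: $after_run  source: $after_source"
```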
The effect of cd is local to the current script's shell, which exits when execution falls off the end of the file.
What you are trying to do is not "super simple" because you want to override this behavior.
Look at exec for replacing the current process with the process of your choice.
For feeding commands into an interactive Bash, look at the --rcfile option.
I imagine you wish your script to be dynamic; however, as a quick fix when working on a new system, I create an alias.
For example, say the env is called py1, located at ~/envs/py1/, with the repository at ~/proj/py1/:
alias py1='source ~/envs/py1/bin/activate; cd ~/proj/py1/'
You can now access your project and virtualenv by typing py1 from anywhere in the CLI.
I know that this is nowhere near ideal and violates DRY, among other programming principles; it is just a quick and dirty way of making your env and project accessible without having to set up the variables.
I know that I'm late to the game here, but may I suggest using virtualenvwrapper? It provides a nice bash hook that appears to do exactly what you want.
Check out this tutorial: http://blog.fruiapps.com/2012/06/An-introductory-tutorial-to-python-virtualenv-and-virtualenvwrapper
I can't seem to figure out how to get this bash script working.
#!/bin/bash
export WORKON_HOME=~/.envs
source /usr/local/bin/virtualenvwrapper.sh
workon staging_env
It uses virtualenv and virtualenvwrapper in order to work in a Python virtual environment.
Typing these commands in the shell works perfectly fine; running them as a bash script does not, though.
Any ideas?
When you run a script, it creates its own instance of the shell (bash, in this case). Because of this, the changes are lost when the script ends and the script's shell is closed.
To make the changes stick, you'll have to source the script instead of running it.