I am new to using Node-RED and the Raspberry Pi. I have a Python script that I would like to run from Node-RED, receiving the output as msg.payload. I cannot figure out the correct command in the daemon node to start the Python script. Any help is appreciated.
Current Python script:
import time
import board
import busio
import adafruit_mprls
try:
    import RPi.GPIO as GPIO
except RuntimeError:
    print("Error importing RPi.GPIO! This is probably because you need "
          "superuser privileges.")
i2c = busio.I2C(board.SCL, board.SDA)
mpr = adafruit_mprls.MPRLS(i2c, psi_min=0, psi_max=25)
"""
import digitalio
reset = digitalio.DigitalInOut(board.D5)
eoc = digitalio.DigitalInOut(board.D6)
mpr = adafruit_mprls.MPRLS(i2c, eoc_pin=eoc, reset_pin=reset,
psi_min=0, psi_max=25)
"""
while True:
    print((mpr.pressure,))
    time.sleep(1)
The Python script is stored at /home/pi/Documents/pressure.py.
I am not sure what the command and arguments should be in the daemon node of Node-RED. I have tried:
command: usr/bin/python
arguments: home/pi/Documents/pressure.py
Firstly, paths need to start with a leading /.
So you need to put /usr/bin/python into the command and /home/pi/Documents/pressure.py into the arguments.
The only problem is that the script implies it needs to be run as root. You should NOT run Node-RED as root unless you REALLY know what you are doing. The other option would be to run with sudo, in which case you would put /usr/bin/sudo in the command and /usr/bin/python /home/pi/Documents/pressure.py in the arguments. This will only work on a Raspberry Pi, because the pi user is normally allowed to use sudo without a password.
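For example, with the corrected paths the daemon node settings would look something like this (the -u flag is an addition that runs Python unbuffered, so each print reaches Node-RED as msg.payload immediately instead of sitting in the stdout buffer):
Command: /usr/bin/sudo
Arguments: /usr/bin/python -u /home/pi/Documents/pressure.py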
If you want to run a program/script/command from Node-RED, I recommend you check out the Exec node:
Runs a system command and returns its output.
The node can be configured to either wait until the command completes, or to send its output as the command generates it.
The command that is run can be configured in the node or provided by the received message.
For more information, check the info tab of the node in Node-RED.
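For example, assuming the same script as above, the Exec node's command field could be set to the single line below, with its first (stdout) output wired to wherever you want the readings; each line the script prints then arrives as msg.payload:
/usr/bin/sudo /usr/bin/python -u /home/pi/Documents/pressure.py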
Related
I'm trying to write a Python script that starts a subprocess to run an Azure CLI command once the file is executed.
When I run locally, I run:
az pipelines create --name pipeline-from-cli --repository https://github.com/<org>/<project> --yml-path <path to pipeline>.yaml --folder-path _poc-area
I get prompted for an input which looks like:
Which service connection do you want to use to communicate with GitHub?
[1] Create new GitHub service connection
[2] <my connection name>
[3] <org name>
Please enter a choice [Default choice(1)]:
I can type in 2 and press Enter, and my pipeline is successfully created in Azure DevOps. I would like to run this command with the choice entered dynamically when prompted.
So far I have tried:
import subprocess
cmd = 'az pipelines create --name pipeline-from-cli --repository https://github.com/<org>/<project> --yml-path <path to pipeline>.yaml --folder-path _poc-area'
cmd = cmd.split()
subprocess.run(cmd, shell=True)
This runs in exactly the same way as when I run it locally.
Trying to follow the answers from here, I have also tried:
p = subprocess.run(cmd, input="1", capture_output=True, text=True, shell=True)
print(p)
Which gives me an error saying raise NoTTYException(error_msg)\nknack.prompting.NoTTYException.
Is there a way I can execute this Python script so that it runs the Azure CLI command and enters 2 when prompted, without any manual intervention?
You are trying to solve the wrong problem. az pipelines create takes a --service-connection parameter. You don't need to respond to the prompt; you can provide the service connection value on the command line and skip the prompt entirely.
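For illustration, a minimal sketch of that approach; the service connection ID is a placeholder you would look up first (for example with az devops service-endpoint list):

import subprocess

# all values below are placeholders; substitute your own org, project, and connection ID
cmd = [
    'az', 'pipelines', 'create',
    '--name', 'pipeline-from-cli',
    '--repository', 'https://github.com/<org>/<project>',
    '--yml-path', '<path to pipeline>.yaml',
    '--folder-path', '_poc-area',
    '--service-connection', '<service-connection-id>',
]
subprocess.run(cmd, check=True)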
IMHO, Daniel is right, you're not supposed to deal with stdin in your program.
Nevertheless, if you really need to, you should use the pexpect package, which basically opens a process, waits for given output, and then sends input to the process's stdin.
Here's a basic example:
import pexpect
from pexpect.popen_spawn import PopenSpawn

cmd = 'az pipelines create --name pipeline-from-cli --repository https://github.com/<org>/<project> --yml-path <path to pipeline>.yaml --folder-path _poc-area'
child = PopenSpawn(cmd, timeout=60)  # pass the cmd variable, not the literal string 'cmd'; a generous timeout, since az can take a while
child.expect('.*Please enter a choice.*')
child.sendline('2')
# child.interact()  # Give control of the child to the user.
Have a look at the pexpect documentation for more details. MS Windows support has been available since v4.0.
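If you also want to capture whatever the command prints after the answer is sent, a small addition to the sketch above (PopenSpawn returns bytes by default):

# wait for the process to finish, then print everything it wrote after our answer
child.expect(pexpect.EOF)
print(child.before.decode())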
Another, more archaic solution would be to use subprocess in the following way, basically emulating what expect would do:
import subprocess
from subprocess import PIPE, STDOUT
from time import sleep

# azure_command is the az CLI command from above, as a list of arguments
p = subprocess.Popen(azure_command, stdout=PIPE, stdin=PIPE, stderr=STDOUT)
sleep(.5)
stdout = p.communicate(input=b'2\n')[0]
print(stdout.decode())
Still, the best solution is to use the non-interactive mode that most CLI programs provide.
I want to start a Python script with paramiko which connects to my Raspberry Pi, which acts as a server. After connecting to the Raspberry Pi, it starts a script like this (to send data to an Arduino from another PC):
import tty
import sys
import termios
import serial
import os
arduino = serial.Serial('/dev/ttyUSB0' , 9600)
x = "./mjpg_streamer -i \"./input_uvc.so -d /dev/video0 -y\" -o \"./output_http.so -w ./www\""
os.system(x)
orig_settings = termios.tcgetattr(sys.stdin)
tty.setraw(sys.stdin)
x = 0
while x != chr(27):  # ESC
    x = sys.stdin.read(1)[0]
    arduino.write(x)
termios.tcsetattr(sys.stdin, termios.TCSADRAIN, orig_settings)
This code works okay; it is essentially a raw_input, simplified.
I want to connect to the Raspberry Pi automatically over SSH and start a Python script that will ask for an input (which in the code above is a constant).
I thought of something like opening a new shell with the script above already started, or something like that...
Quick answer: there is no option that lets you insert a password into the ssh command. You have to set up a shared key pair to use ssh without a password prompt. Searching the internet will give you a ton of answers, for example: http://www.thegeekstuff.com/2008/11/3-steps-to-perform-ssh-login-without-password-using-ssh-keygen-ssh-copy-id
So first, set up the key pair. Then use normal ssh to check whether it was successful. Then finally, in your Python script, add some code to deal with ssh.
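Once the key pair is in place, a minimal paramiko sketch might look like this (hostname, username, key path, and script path are placeholders):

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # accept hosts not yet in known_hosts
ssh.connect('raspberrypi.local', username='pi', key_filename='/home/you/.ssh/id_rsa')
stdin, stdout, stderr = ssh.exec_command('python /home/pi/yourscript.py')
print(stdout.read())
ssh.close()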
I am attempting to launch a Python script from within another Python script, but in a minimized console, then return control to the original shell.
I am able to open the required script in a new shell below, but it's not minimized:
#!/usr/bin/env python
import os
import sys
import subprocess
pyTivoPath = r"c:\pyTivo\pyTivo.py"  # raw string avoids backslash-escape surprises
print "Testing: Open New Console"
subprocess.Popen([sys.executable, pyTivoPath], creationflags = subprocess.CREATE_NEW_CONSOLE)
print
raw_input("Press Enter to continue...")
Further, I will need to be able to later remotely KILL this shell from the original script, so I suspect I'll need to be explicit in naming the new process. Correct?
Looking for pointers, please. Thanks!
Note: python27 is mandatory for this application. Eventually will also need to work on Mac and Linux.
Do you need to have the other console open? If you know the commands to be sent, then I'd recommend using Popen.communicate(input="Shell commands"), which will automate the process for you.
So you could write something along the lines of:
import subprocess
from subprocess import PIPE

# Commands to pass into the subprocess (each command is separated by a newline)
commands = (
    "command1\n"
    "command2\n"
)

# Your process
py_process = subprocess.Popen(yourprocess_here, stdin=PIPE, shell=True)  # replace yourprocess_here with your command

# Feed the process the needed input
py_process.communicate(input=commands)

# Terminate when finished
py_process.terminate()
The code above will execute the process you specify and even send commands but it won't open a new console.
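If the new console really does need to start minimized, one possible approach on Windows (an untested sketch, in Python 2.7 style to match the question) is to pass a STARTUPINFO asking for a minimized window:

import subprocess
import sys

pyTivoPath = r"c:\pyTivo\pyTivo.py"

si = subprocess.STARTUPINFO()
si.dwFlags |= subprocess.STARTF_USESHOWWINDOW
si.wShowWindow = 6  # SW_MINIMIZE; subprocess only names SW_HIDE, so the raw value is used

proc = subprocess.Popen([sys.executable, pyTivoPath],
                        creationflags=subprocess.CREATE_NEW_CONSOLE,
                        startupinfo=si)
# keep proc.pid around if you later need to kill the process from the original script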
I am trying to write a Python script which, when executed, will open a Maya file on another computer and create its playblast there. Is this possible? I would also like to add that the systems I use are all Windows. Thanks
Yes, it is possible; I do this all the time on several computers. First you need to access the computer (this has been answered). Then call Maya from within your shell as follows:
maya -command myblast -file filetoblast.ma
You will need myblast.mel somewhere in your script path.
myblast.mel:
global proc myblast(){
    playblast -widthHeight 1920 1080 -percent 100
              -fmt "movie" -v 0 -f (`file -q -sn`+".avi");
    evalDeferred("quit -f");
}
Configure what you need in this file, such as shading options. Please note that calling the Maya GUI uses up one license, and playblast needs that GUI (you could shave off some seconds by not loading the default GUI).
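If you want to trigger that launch from Python rather than a shell, a minimal sketch (assuming maya is on the PATH and myblast.mel is on the script path):

import subprocess

# same invocation as the shell command above
subprocess.call(['maya', '-command', 'myblast', '-file', 'filetoblast.ma'])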
In order to execute something on a remote computer, you've got to have some sort of service running there.
If it is a Linux machine, you can simply connect via ssh and run the commands. In Python you can do that using paramiko:
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # needed for hosts not in known_hosts
ssh.connect('127.0.0.1', username='foo', password='bar')
stdin, stdout, stderr = ssh.exec_command("echo hello")
Otherwise, you can use a python service, but you'll have to run it beforehand.
You can use Celery, as previously mentioned, or ZeroMQ, or more simply RPyC:
Simply run the rpyc_classic.py script on the target machine, and then you can run Python on it:
import rpyc

conn = rpyc.classic.connect("my_remote_server")
conn.modules.os.system('echo foo')
Alternatively, you can create a custom RPyC service (see documentation).
A final option is using an HTTP server like previously suggested. This may be easiest if you don't want to start installing everything. You can use Bottle which is a simple HTTP framework in python:
Server-side:
from bottle import route, run
@route('/run_maya')
def index():
    # Do whatever
    return 'kay'
run(host='localhost', port=8080)
Client-side:
import requests
requests.get('http://remote_server:8080/run_maya')
One last option for cheap RPC is to run maya.standalone from a Maya Python interpreter ("mayapy", usually installed next to the maya binary). The standalone runs inside a regular Python script, so it can use any of the remote-procedure tricks in KimiNewt's answer.
You can also create your own mini-server using basic Python. The server could use the Maya command port, or a WSGI server using the built-in wsgiref module. Here is an example which uses wsgiref running inside a standalone to control Maya remotely via HTTP.
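For reference, a minimal maya.standalone sketch (the scene path is a placeholder, and this must be run with mayapy rather than a regular Python interpreter):

import maya.standalone
maya.standalone.initialize(name='python')

import maya.cmds as cmds
cmds.file('/path/to/scene.ma', open=True, force=True)
# ...playblast or other batch work here...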
We've been dealing with the same issue at work. We're using Celery as the task manager and have code like this inside of the Celery task for playblasting on the worker machines. This is done on Windows and uses Python.
import os
import subprocess
import tempfile
import textwrap
MAYA_EXE = r"C:\Program Files\Autodesk\Maya2016\bin\maya.exe"
temp_dir = tempfile.gettempdir()  # directory for the temporary script

def function_name():
    # the python code you want to execute in Maya
    pycmd = textwrap.dedent('''
        import time
        import pymel.core as pm
        # Your code here to load your scene and playblast

        # new scene to remove quicktimeShim which sometimes fails to quit
        # with Maya and prevents the subprocess from exiting
        pm.newFile(force=True)
        # wait a second to make sure quicktimeShim is gone
        time.sleep(1)
        pm.evalDeferred("pm.mel.quit('-f')")
    ''')

    # write the code into a temporary file
    tempscript = tempfile.NamedTemporaryFile(delete=False, dir=temp_dir)
    tempscript.write(pycmd)
    tempscript.close()

    # build a subprocess command
    melcmd = 'python "execfile(\'%s\')";' % tempscript.name.replace('\\', '/')
    cmd = [MAYA_EXE, '-command', melcmd]

    # launch the subprocess
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    proc.wait()

    # when the process is done, remove the temporary script
    try:
        os.remove(tempscript.name)
    except WindowsError:
        pass
I am using the CherryPy framework to run my Python code on a server, but the process stops working when the load increases.
Every time this happens I have to go and restart the Python code manually. Is there any way I can use Gunicorn with CherryPy so that Gunicorn can restart the code automatically when it stops working?
Any other solution will also work in this case. I just want to make sure that the Python program does not stop working.
I use a cron job that checks the memory load every few minutes and resets CherryPy when the memory exceeds 500 MB, so that the web host doesn't complain to me with emails. Something on my server doesn't release memory when a function ends as it should, so this is a pragmatic workaround.
This hack may seem weird because I reset it using an HTTP request, but that's because I spent hours trying to figure out how to do this within Bash and gave up. It works.
CRON PART
*/2 * * * * /usr/local/bin/python2.7 /home/{mypath}/cron_reset_cp.py > $HOME/cron.log 2>&1
And code inside cron_reset_cp.py...
# cron for resetting cherrypy /cp/ when 500+ MB
import os

# assuming this starts in /home/my_username/
os.chdir('/home/my_username/cp/')
import mem

C = mem.MemoryMonitor('my_username')  # this function adds up all the memory
memory = int(float(C.usage()))
if memory > 500:  # MB
    # Tried: pid = os.getpid() (current process = cron job) -- this approach did not work for me.
    import urllib2
    cp = urllib2.urlopen('http://myserver.com/cp?reset={password}')
Then I added this function to reset CherryPy via cron, or after a GitHub update, from any browser (assuming only I know the {password}).
The reset URL would be http://myserver.com/cp?reset={password}:
def index(self, **kw):
    if kw.get('reset') == '{password}':
        cherrypy.engine.restart()
        ip = cherrypy.request.headers["X-Forwarded-For"]  # get client ip
        return 'CherryPy RESETTING for duty, sir! requested by ' + str(ip)
The MemoryMonitor part is from here:
How to get current CPU and RAM usage in Python?
Python uses many error-handling strategies to control flow. A simple try/except statement can catch an exception raised when, say, your memory overflows, the load increases, or any number of issues makes your code stall (hard to tell without the actual code).
In the except clause, you could free any memory you allocated and restart your processes again.
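A minimal sketch of that idea (run_app is a hypothetical stand-in for whatever starts your CherryPy server):

import os
import sys

try:
    run_app()  # hypothetical: your CherryPy entry point
except Exception:
    # replace the current process with a fresh copy of itself
    os.execv(sys.executable, [sys.executable] + sys.argv)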
Depending on your OS, try the following logic:
Write os.getpid() to /path/pid.file
Create a service script that connects to your web port
Try to fetch data
If no data was received, kill the PID found in /path/pid.file
Restart the script
Your main script:
import os

with open('/path/to/pidfile.pid', 'w') as fh:
    fh.write(str(os.getpid()))
... Execute your code as per normal ...
service.py script:
from socket import *
from os import kill
from signal import SIGTERM

s = socket()
try:
    s.connect(('127.0.0.1', 80))
    s.send('GET / HTTP/1.1\r\n\r\n')
    length = len(s.recv(8192))  # don't shadow the built-in len
    s.close()
except:
    length = 0

if length <= 0:
    with open('/path/to/pidfile.pid', 'rb') as fh:
        kill(int(fh.read()), SIGTERM)  # os.kill requires a signal argument
And have a cronjob (execute in a console):
export EDITOR=nano; crontab -e
Now you're in the text editor editing your cronjobs; write the following two lines at the bottom:
*/2 * * * * cd /path/to/ && python service.py
*/5 * * * * cd /path/to/ && python script.py
Press Ctrl+X and when asked to save changes, write Y and enjoy.
Also, instead of restarting your script from within the service.py script, I'd suggest that service.py just kills the PID located in /path/pid.file and lets your OS handle starting up your script if the PID is missing; Linux at least has very nifty features for this.
Best practice on Ubuntu
It's considered best practice to use the system's service scripts (service apache2 status, for instance); the service scripts let you reload, stop, start, and query job states, and whatnot.
Check out: http://upstart.ubuntu.com/cookbook/#service-job
And check the service scripts for all the other applications; don't just use them as skeletons, but make sure your application follows the same logic.
Perhaps you need supervisord to monitor your gunicorn process and restart it when it's necessary:
http://www.onurguzel.com/managing-gunicorn-processes-with-supervisor/
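A minimal supervisord program section might look something like this (program name and command are placeholders for your own app):

[program:myapp]
command=/usr/local/bin/gunicorn -w 4 myapp:app
autostart=true
autorestart=true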