How to 'pipe' password to remote.update() with gitPython - python

I am trying to write a script (probably Python) that needs to fetch a remote repo (hosted in Stash, via git) and check out a specific commit (based on its hash). The problem is that this needs to happen 'blindly' to the user, but it pauses for a password. I need to figure out a way to pipe the password (via proc.communicate() or something similar) to the repo.fetch() or origin.update() call.
Currently, I have code that's something like this:
remoteUrl = 'http://uname@build:7990'
commitId = '9a5af460615'
pw = 'MyPassword'
repo = git.Repo('C:\\Myfolder\\MyRepo')
proc = subprocess.Popen('echo pw | repo.git.fetch()', shell=True, stdin = PIPE)
but it doesn't seem to work.
If I drop the echo/pipe and instead follow repo.git.fetch() with proc.communicate(pw), I get this error:
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
File "C:\Program Files (x86)\Python34\lib\subprocess.py", line 941, in communicate
self.stdin.write(input)
TypeError: 'str' does not support the buffer interface
Finally, I've also tried adding:
o = repo.remotes.origin
proc = subprocess.Popen('o.update()', shell=True, stdin = PIPE)
proc.communicate(pw)
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
File "C:\Program Files (x86)\Python34\lib\subprocess.py", line 941, in communicate
self.stdin.write(input)
TypeError: 'str' does not support the buffer interface
But to no avail, as you can see from the error.
I think I'm over-complicating this; it seems GitPython probably has a good way to send the password to o.update() or repo.git.fetch() without using subprocess?
EDIT:
I'm hoping for code something along these lines:
remoteUrl = 'http://uname@build:7990'
commitId = '9a5af460615'
pw = 'MyPassword'
repo = git.Repo('C:\\Myfolder\\MyRepo')
repo.fetch(remoteUrl)
repo.checkout(commitId) # or maybe repo.pull, or something?
That is more pseudocode than anything else, but it may help you see what I'm hoping for. Also, I want to force through any 'hiccups': I don't want detached-HEAD warnings or anything; I want to completely replace the local working copy with the remote at the specified commit.

Instead of using HTTP, use SSH.
Once you set up your SSH keys, authentication is done "silently" with the keys and there is no need to ever enter a password again.
Generating SSH keys:
ssh-keygen -t rsa -C 'email@address'
Once you have created your keys, add them to the central machine (or to your central git software) and update your fetch URL to use SSH instead of HTTP.
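If switching to SSH isn't an option, git itself offers an escape hatch: the GIT_ASKPASS environment variable points git at a helper program that prints the credential, so git calls the helper instead of prompting. A minimal sketch (the throwaway helper script is illustrative; a real setup would use a credential helper rather than a plaintext password):

```python
import os
import stat
import subprocess
import tempfile

def askpass_env(password):
    """Build an environment in which git reads the password from a
    helper script instead of prompting on the terminal."""
    fd, path = tempfile.mkstemp(suffix='.sh')
    with os.fdopen(fd, 'w') as f:
        f.write('#!/bin/sh\necho "%s"\n' % password)
    os.chmod(path, stat.S_IRWXU)  # make the helper executable
    env = dict(os.environ)
    env['GIT_ASKPASS'] = path          # git runs this to obtain credentials
    env['GIT_TERMINAL_PROMPT'] = '0'   # never fall back to interactive prompt
    return env

env = askpass_env('MyPassword')
# git invokes the helper roughly like this when it needs the password:
out = subprocess.check_output([env['GIT_ASKPASS'], 'Password: '], env=env)
print(out.decode().strip())  # MyPassword
```

With GitPython you would export these variables into git's environment (repo.git.update_environment(**env)) before calling origin.fetch(), so the fetch completes without pausing.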


Extended ping on Cisco Device

I'm a newbie in Python. I'm trying to make a script to perform an automatic "extended ping".
The manual Cisco flow is:
switch#**ping**
Protocol [ip]:
Target IP address: **X.X.X.X**
Repeat count [5]: **1000**
Datagram size [100]: **1500**
Timeout in seconds [2]:
Extended commands [n]:
Sweep range of sizes [n]:
####################Command Start####################
I tried to use the "net_connect.send_command" command from Netmiko, and it doesn't work.
Ping_Extended = [ 'ping','\n','X.X.X.X','1000','1500','\n','\n','\n' ]
Ping_TASA = net_connect.send_command(Ping_Extended)
Error: Traceback (most recent call last):
File "VLAN1.py", line 124, in <module>
Ping_Extended = Ping_Extended.rstrip()
AttributeError: 'list' object has no attribute 'rstrip'
Can someone help me? If another method exists, please share it with me.
Thanks a lot!
I haven't used that library, so I'm not sure how it works; I use paramiko or telnetlib, depending on the service available on the device.
My ping command on Cisco looks something like this:
def ping(dest, count=5, size=None, interval=None, timeout=None, source=None):
    # ignore "interval": it is there only so the signature matches the
    # other devices; Cisco's ping does not accept an interval parameter
    cmd = "ping %s repeat %s" % (dest, count)
    for x in [("size", size), ("timeout", timeout), ("source", source)]:
        if x[1]:
            cmd += " %s %s" % x
    cmd += " validate"
    # "validate" seemed to be required in order to get the extra statistics
    # run the command, get the output, parse it
For example, calling ping("8.8.8.8", 3, 128, 1, 2, "86.68.86.68") will end up running ping 8.8.8.8 repeat 3 size 128 timeout 2 source 86.68.86.68 validate on the device.
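Having the helper return the assembled string makes that claim easy to verify without a device; in this sketch the actual send/parse step is elided:

```python
def build_ping_cmd(dest, count=5, size=None, interval=None,
                   timeout=None, source=None):
    """Assemble the Cisco extended-ping command line.
    'interval' is accepted only for signature compatibility; Cisco's
    ping has no interval option."""
    cmd = "ping %s repeat %s" % (dest, count)
    for name, value in [("size", size), ("timeout", timeout),
                        ("source", source)]:
        if value:
            cmd += " %s %s" % (name, value)
    return cmd + " validate"

print(build_ping_cmd("8.8.8.8", 3, 128, 1, 2, "86.68.86.68"))
# ping 8.8.8.8 repeat 3 size 128 timeout 2 source 86.68.86.68 validate
```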
Side note: instead of calling ping with no arguments and waiting for the prompts, try adding "?" at the end of the line (ping ?) to discover the available options, much as bash completion works with Tab. From what I've seen on the devices I've worked with, you don't have to follow the interactive flow; you should be able to execute the ping in a single command invocation.
I've taken a look at the library you are using and noticed that send_command accepts an expect_string argument; you could use it to detect the new/different prompt. I think your code should be something like this:
cmds = ['ping', 'ip', 'X.X.X.X', '1000', '1500', '2', 'n', 'n']
for cmd in cmds[:-1]:
    net_connect.send_command(cmd, expect_string='\] ?: ?')
output = net_connect.send_command(cmds[-1])
I've added all the defaults to the list of commands to send. If you don't want to send them, replace them with "" (empty strings).
I solved the issue, and I'll share the solution with you:
output = [net_connect.send_command("ping", expect_string='#?'),
net_connect.send_command("ip", expect_string=':?'),
net_connect.send_command("192.168.1.254", expect_string=':?'),
net_connect.send_command("1000", expect_string=':?'),
net_connect.send_command("1500", expect_string=':?'),
net_connect.send_command("2", expect_string=':?'),
net_connect.send_command("n", expect_string=':?'),
net_connect.send_command("n", expect_string=':?', delay_factor=140)]
print output[-1]
Best regards

How do I pass applescript arguments from a python script?

I have an applescript that takes in two parameters on execution.
on run {targetBuddyPhone, targetMessage}
    tell application "Messages"
        set targetService to 1st service whose service type = iMessage
        set targetBuddy to buddy targetBuddyPhone of targetService
        send targetMessage to targetBuddy
    end tell
end run
I then want this script to execute from within a Python script. I know how to execute an AppleScript from Python, but how do I also give it arguments? Here is the Python script that I currently have written out.
#!/usr/bin/env python3
import subprocess

def run_applescript(script, *args):
    p = subprocess.Popen(['arch', '-i386', 'osascript', '-e', script] +
                         [unicode(arg).encode('utf8') for arg in args],
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    err = p.wait()
    if err:
        raise RuntimeError(err, p.stderr.read()[:-1].decode('utf8'))
    return p.stdout.read()[:-1].decode('utf8')
The error I receive after trying to execute this code in the terminal is:
Traceback (most recent call last):
File "messageExecuter.py", line 14, in <module>
run_applescript("sendMessage.scpt",1111111111,"hello")
File "messageExecuter.py", line 11, in run_applescript
raise RuntimeError(err, p.stderr.read()[:-1].decode('utf8'))
RuntimeError: (1, u'arch: posix_spawnp: osascript: Bad CPU type in executable')
The clue is in the error message. Delete 'arch', '-i386' from the arguments list, as osascript is 64-bit only.
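A sketch of the corrected invocation. As an aside, osascript's -e flag expects inline source text; to run a saved script file with arguments, pass the path directly and the arguments arrive in the script's on run handler:

```python
import subprocess

def osascript_argv(script_path, *args):
    """Build the argv for osascript: no 'arch -i386' (osascript is
    64-bit only) and no '-e' (we run a script file, not inline source)."""
    return ['osascript', script_path] + [str(a) for a in args]

argv = osascript_argv('sendMessage.scpt', 1111111111, 'hello')
print(argv)  # ['osascript', 'sendMessage.scpt', '1111111111', 'hello']
# on macOS: subprocess.check_output(argv) would then run the script
```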

printing out a text file with default printer [duplicate]

This question already has answers here:
subprocess.Popen stdin read file
(3 answers)
Closed 6 years ago.
For a little project of my own, I am trying to write a program that prints the contents of a file on the computer's default printer.
I know there are a lot of similar questions around, but none of them works on my PC (Linux Mint 17.3).
Here is the one I tried that got closest to what I needed:
from subprocess import Popen
from cStringIO import StringIO
# place the output in a file like object
sio = StringIO("test.txt")
# call the system's lpr command
p = Popen(["lpr"], stdin=sio, shell=True)
output = p.communicate()[0]
this gives me the following error:
Traceback (most recent call last):
File "/home/vandeventer/x.py", line 8, in <module>
p = Popen(["lpr"], stdin=sio, shell=True)
File "/usr/lib/python2.7/subprocess.py", line 702, in __init__
errread, errwrite), to_close = self._get_handles(stdin, stdout, stderr)
File "/usr/lib/python2.7/subprocess.py", line 1117, in _get_handles
p2cread = stdin.fileno()
AttributeError: 'cStringIO.StringI' object has no attribute 'fileno'
Does anyone out there know how one could implement this in Python? It really does not have to work on Windows.
Regards
Cid-El
You don't have to use StringIO for this. Just use the pipe feature of subprocess and write your data to p.stdin:
from subprocess import Popen, PIPE
# call the system's lpr command
p = Popen(["lpr"], stdin=PIPE, shell=True)  # not sure you need shell=True for a simple command
p.stdin.write(b"test.txt")  # bytes, so this also works on Python 3
output = p.communicate()[0]
as a bonus, this is Python 3 compliant (StringIO has been renamed since :))
BUT: that would just print a big white page with one line: test.txt. lpr reads standard input and prints it (that's still an interesting piece of code :))
To print the contents of your file you have to read it, and in that case it's even simpler, since pipes and files work together right away:
from subprocess import Popen
with open("test.txt") as f:
    # call the system's lpr command
    p = Popen(["lpr"], stdin=f, shell=True)  # not sure you need shell=True for a simple command
    output = p.communicate()[0]
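Since lpr just consumes standard input, the same pattern can be tested locally by substituting a visible filter such as cat for lpr:

```python
import subprocess

# create a sample file to "print"
with open("test.txt", "w") as f:
    f.write("hello printer\n")

# stream the file into the command's stdin, exactly as with lpr;
# cat stands in for lpr so the result is observable
with open("test.txt") as f:
    p = subprocess.Popen(["cat"], stdin=f, stdout=subprocess.PIPE)
    output = p.communicate()[0]

print(output.decode())  # hello printer
```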

how to prevent failure of subprocess stopping the main process in python

I wrote a Python script to run a command called "gtdownload" on a bunch of files with multiprocessing. The function "download" is where I am having trouble.
#!/usr/bin/env python
import os, sys, subprocess
from multiprocessing import Pool

def cghub_dnld_file(file1, file2, file3, np):
    <open files>
    <read in lines>
    p = Pool(int(np))
    map_args = [(line.rstrip(), name_lines[i].rstrip(), bar_lines[i].rstrip())
                for i, line in enumerate(id_lines)]
    p.map(download_wrapper, map_args)

def download(id, name, bar):
    <check if file has been downloaded, if not download>
    <.....>
    link = "https://cghub.ucsc.edu/cghub/data/analysis/download/" + id
    dnld_cmd = "gtdownload -c ~/.cghub.key --max-children 4 -vv -d " + link + " > gt.out 2>gt.err"
    subprocess.call(dnld_cmd, shell=True)

def download_wrapper(args):
    return download(*args)

def main():
    <read in arguments>
    <...>
    cghub_dnld_file(file1, file2, file3, threads)

if __name__ == "__main__":
    main()
If a file does not exist in the database, gtdownload quits, which also kills my Python job with the following error:
Traceback (most recent call last):
File "/rsrch1/rists/djiao/bin/cghub_dnld.py", line 102, in <module>
main()
File "/rsrch1/rists/djiao/bin/cghub_dnld.py", line 98, in main
cghub_dnld_file(file1,file2,file3,threads)
File "/rsrch1/rists/djiao/bin/cghub_dnld.py", line 22, in cghub_dnld_file
p.map(download_wrapper,map_args)
File "/rsrch1/rists/apps/x86_64-rhel6/anaconda/lib/python2.7/multiprocessing/pool.py", line 250, in map
return self.map_async(func, iterable, chunksize).get()
File "/rsrch1/rists/apps/x86_64-rhel6/anaconda/lib/python2.7/multiprocessing/pool.py", line 554, in get
raise self._value
OSError: [Errno 2] No such file or directory
The actual error message from gtdownload :
Welcome to gtdownload-3.8.5a.
Ready to download
Communicating with GT Executive ...
Headers received from the client: 'HTTP/1.1 100 Continue
HTTP/1.1 404 Not Found
Date: Tue, 29 Jul 2014 18:49:57 GMT
Server: Apache/2.2.15 (Red Hat and CGHub)
Strict-Transport-Security: max-age=31536000
X-Powered-By: PHP/5.3.3
Content-Length: 669
Connection: close
Content-Type: text/xml
'
Error: You have requested to download a uuid which either does not exist within the system, or has not yet reached the 'live' state. The requested action will not be performed. Please double check the supplied uuid or contact thelpdesk for further assistance.
I would like the script to skip the ones that do not exist and start gtdownload on the next one. I tried piping the stderr of subprocess.call and checking for the "error" keyword, but the script seems to stop at the exact subprocess.call command. The same thing happens with os.system.
I made a minimal test case without the multiprocessing, and subprocess did not kill the main process at all. It looks like multiprocessing messes things up, although I ran it with 1 thread just for testing.
#!/usr/bin/env python
import subprocess
# This is the id that gtdownload had problems with
id = "df1e073f-4485-4d5f-8659-cd8eaac17329"
link = "https://cghub.ucsc.edu/cghub/data/analysis/download/" + id
dlnd_cmd = "gtdownload -c ~/.cghub.key --max-children 4 -vv -d " + link + " > gt.out 2>gt.err"
print dlnd_cmd
subprocess.call(dlnd_cmd,shell=True)
print "done"
Clearly multiprocessing conflicts with subprocess.call, but it is not clear to me why.
What is the best way to prevent a subprocess failure from killing the main process?
Handle the exception in some appropriate way and move on, of course.
try:
    subprocess.call(dlnd_cmd)
except OSError as e:
    print 'failed to download: {!r}'.format(e)
However, this may not be appropriate here. The kinds of exceptions that subprocess.call raises are usually not transient things that you can just log and work around; if it's not working now, it will continue to not work forever until you fix the underlying problem (a bug in your script, or gtdownload not being installed right, or whatever).
For example, if the code you showed us is your actual code:
dlnd_cmd = "gtdownload -c ~/.cghub.key --max-children 4 -vv -d " + link + " > gt.out 2>gt.err"
subprocess.call(dlnd_cmd)
… then this is guaranteed to raise an OSError for the reason explained in dano's answer: call (without shell=True) will try to take that entire string—spaces, shell-redirection, etc.—as the name of an executable program to find on your $PATH. And there is no such program. So it will raise an OSError(errno.ENOENT). (Which is exactly what you're seeing.) Just logging that doesn't do you any good; it's a good thing that your entire process is exiting, so you can debug that problem.
subprocess.call should not kill the main process. Something else must be wrong with your script, or your conclusions about the script's behaviour are wrong. Did you try printing some trace output after the subprocess call?
You have to use shell=True to use subprocess.call with a string for an argument (and with shell redirection):
subprocess.call(dlnd_cmd, shell=True)
Without shell=True, subprocess tries to treat your entire command string like a single executable name, which of course doesn't exist, and leads to the No such file or directory exception.
See this answer for more info on when to use a string vs. when to use a sequence with subprocess.
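The difference is easy to see with a harmless stand-in command: with shell=True the string is handed to /bin/sh, redirection works, and a missing program becomes a non-zero return code instead of an OSError:

```python
import subprocess

# handed to the shell: redirection works, and the call itself succeeds
rc_ok = subprocess.call("echo hello > /tmp/gt.out 2> /tmp/gt.err", shell=True)

# a missing program does not raise: the shell reports exit status 127
rc_missing = subprocess.call("no-such-gtdownload --whatever 2> /dev/null",
                             shell=True)

print(rc_ok, rc_missing)  # 0 127
```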

How to check results of wget/urllib2 in Python?

I am brand new to Python and am trying to write a monitor to determine whether a Java web app (WAR file) running on localhost (hosted by Apache Tomcat) is running or not. I had earlier devised a script that ran:
ps -aef | grep myWebApp
And inspected the results of the grep to see if any process IDs came back in those results.
But it turns out that the host OS only sees the Tomcat process, not any web apps Tomcat is hosting. I next tried to see if Tomcat came with any kind of CLI that I could hit from the terminal, and it looks like the answer is no.
Now, I'm thinking of using wget or maybe even urllib2 to determine if my web app is running by having them hit http://localhost:8080/myWebApp and checking the results. Here is my best attempt with wget:
wgetCmd = "wget http://localhost:8080/myWebApp"
wgetResults = subprocess.check_output([wgetCmd], shell=True, stderr=subprocess.STDOUT)
for line in wgetResults.strip().split('\n'):
if 'failed' in line:
print "\nError: myWebApp is not running."
sys.exit()
My thinking here is that, if the web app isn't running, wget's output should always contain the word "failed" (at least, in my experience). Unfortunately, when I run this, I get the following error:
Traceback (most recent call last):
File "/home/myUser/mywebapp-mon.py", line 52, in <module>
main()
File "/home/myUser/mywebapp-mon.py", line 21, in main
wgetResults = subprocess.check_output([wgetCmd], shell=True, stderr=subprocess.STDOUT)
File "/usr/lib/python2.7/subprocess.py", line 544, in check_output
raise CalledProcessError(retcode, cmd, output=output)
subprocess.CalledProcessError: Command '['wget http://localhost:8080/myWebApp']' returned non-zero exit status 4
Any thoughts as to what's going on here (what the error is)? Also, and more importantly, am I going about this the wrong way? Thanks in advance!
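What's going on: check_output raises CalledProcessError the moment wget exits non-zero (status 4 is wget's network-failure code), so the loop that scans for "failed" is never reached. Catching the exception preserves both the status and the output; a sketch with a stand-in command in place of wget:

```python
import subprocess

def run_and_check(cmd):
    """Run a shell command; return (exit_status, output) instead of raising."""
    try:
        return 0, subprocess.check_output(cmd, shell=True,
                                          stderr=subprocess.STDOUT)
    except subprocess.CalledProcessError as e:
        return e.returncode, e.output  # both survive the failure

# stand-in for the failing wget: prints something, exits with status 4
status, out = run_and_check("echo 'Resolving myWebApp... failed.'; exit 4")
print(status, b'failed' in out)  # 4 True
```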
I suggest you try the Requests module. It is much more user-friendly than wget or urllib. Try something like this:
>>> import requests
>>> r = requests.get('http://localhost:8080/myWebApp')
>>> r.status_code
200
>>> r.text
u'some text of your webapp'
EDIT: installation instructions: http://docs.python-requests.org/en/latest/user/install/
To check url:
import urllib2

def check(url):
    try:
        urllib2.urlopen(url).read()
    except EnvironmentError:
        return False
    else:
        return True
To investigate what kind of an error occurred you could look at the exception instance.
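For example (a Python 3 sketch; urllib2's pieces now live in urllib.request and urllib.error), the exception type already distinguishes "Tomcat answered with an error" from "nothing is listening at all":

```python
from urllib.error import HTTPError, URLError
from urllib.request import urlopen

def check(url):
    """Return (ok, detail); ok is True only when the app answered cleanly."""
    try:
        urlopen(url).read()
    except HTTPError as e:       # the server answered, but with 4xx/5xx
        return False, 'HTTP %d' % e.code
    except URLError as e:        # nothing answered: Tomcat down, refused, DNS
        return False, str(e.reason)
    return True, 'OK'

# port 1 is essentially never open, so this reports the failure detail
print(check('http://localhost:1/'))
```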
You can also build on urllib2/requests and interact with Tomcat's manager service, if it's installed. Using its list method you can receive the following information:
OK - Listed applications for virtual host localhost
/webdav:running:0
/examples:running:0
/manager:running:0
/:running:0
