I have a Fabric script with many tasks. I'd like to add a simple "yes/no" confirmation at the very beginning that includes the hosts to be run on and the task requested.
from fabric.api import env, abort
from fabric.contrib.console import confirm

hosts = env.hosts
task_name = ?  # <-- this is the part I cannot find
if not confirm('Run "%s" on %s?' % (task_name, hosts)):
    abort('Aborting per user request.')
So, when I run fab -H user@1.2.3.4 deploy, the confirmation will be Run "deploy" on user@1.2.3.4?
Unlike the well-documented env.hosts, I cannot find an env.task_name variable to achieve this.
I created a Python script that performs several REST API calls to fetch a cookie, which is then used as a password for the openconnect application by piping echo "my cookie value" into the openconnect command. The purpose is basically to connect to a corporate VPN.
It used to work fine, except that now there is another input to be fed: it appears there are two gateway servers, and one of them needs to be picked manually and passed to the prompt.
I purposefully only included the part of the Python script that performs the call to openconnect below, considering that the REST API calls that fetch the cookie (which needs to be passed to the openconnect prompt) already work:
import subprocess
import sys

# [...] lots of REST API calls to fetch the portal_userauthcookie;
# for the sake of simplicity let's say it has a dummy value
portal_userauthcookie = "blahblah"
print("portal-userauthcookie = \"{}\"".format(portal_userauthcookie))
# Call openconnect with the cookie / password we got above
cmd = "echo \"{}\"".format(portal_userauthcookie)
cmd += " |"
cmd += " openconnect"
cmd += " --protocol=gp"
cmd += " --usergroup portal:portal-userauthcookie"
cmd += " --user={}".format(username)
cmd += " {}".format(vpn_portal_url)
process = subprocess.Popen(cmd, shell=True, stdin=sys.stdin, stdout=sys.stdout, stderr=sys.stderr)
out, err = process.communicate()
When the script is run:
$ sudo ./bin/python3 ./main.py
[sudo] password for perret:
My username: my-username
My password:
My non-expired smartphone app code: my-code
portal-userauthcookie = "blahblah"
POST another-corporate.url.com&clientVer=4100&clientos=Linux
Connected to [corporate-ip]:443
SSL negotiation with [corporate-url]
Connected to HTTPS on [corporate-url]
SAML login is required via POST to this URL:
<html>
<!-- corporate html -->
</html>
Enter login credentials
portal-userauthcookie:
POST corporate.url.com
2 gateway servers available:
gateway1 (blahblah-url-1)
gateway2 (blahblah-url-2)
Please select GlobalProtect gateway.
GATEWAY: [gateway1|gateway2]:fgets (stdin): Resource temporarily unavailable
How can I automatically feed, say, gateway1 to the prompt of the Popen command?
I tried to add another echo, but it seems only one can work (the one I am already using to pass the cookie that acts as a password).
You can try the expect tool to automate user input; just check out man expect. It's a fully-fledged solution for making interactive command-line programs scriptable.
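If you want to stay in Python, the pexpect library (a Python counterpart of expect) can answer both prompts. A minimal sketch, using the prompt strings from the output above and a hypothetical choice of gateway1 (openconnect still needs to run as root, as in the question):

import pexpect

# Dummy values standing in for what the REST calls and user input produce.
portal_userauthcookie = "blahblah"
username = "my-username"
vpn_portal_url = "vpn.example.com"  # placeholder for the real portal URL

child = pexpect.spawn(
    "openconnect --protocol=gp --usergroup portal:portal-userauthcookie "
    "--user={} {}".format(username, vpn_portal_url),
    encoding="utf-8",
    timeout=120,
)
# First prompt: the cookie acting as a password.
child.expect("portal-userauthcookie:")
child.sendline(portal_userauthcookie)
# Second prompt: the gateway selection.
child.expect("GATEWAY:")
child.sendline("gateway1")
# Hand the session back to the terminal so the VPN stays up interactively.
child.interact()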
However, according to the openconnect man page, it is possible to specify the cookie via the --cookie option instead of using stdin. Then you can keep sending the gateway via stdin.
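A minimal sketch of that combination, reusing the command-building style from the question; the --cookie flag and whether the gateway prompt still appears in that mode are assumptions to verify against your openconnect version:

import subprocess

# Dummy values standing in for what the script collects earlier.
portal_userauthcookie = "blahblah"
username = "my-username"
vpn_portal_url = "vpn.example.com"   # placeholder for the real portal URL
gateway = "gateway1"                 # the gateway to pick at the prompt

cmd = (
    "echo \"{gw}\" |"
    " openconnect"
    " --protocol=gp"
    " --usergroup portal:portal-userauthcookie"
    " --user={user}"
    " --cookie=\"{cookie}\""
    " {url}"
).format(gw=gateway, user=username, cookie=portal_userauthcookie, url=vpn_portal_url)

# stdin now only has to answer the gateway prompt, since the cookie is
# supplied via --cookie instead of being piped in.
process = subprocess.Popen(cmd, shell=True)
process.communicate()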
I have a simple request: I want to connect to an already existing Google Compute Engine instance, run a command, and close the connection.
I have used the great sample code here for instance creation and deletion.
Additionally, I have a startup script running which works perfectly.
Now I am reading this article to use paramiko to connect to my instance. This may or may not be the best thing to do, so please correct me if I am going down the wrong path.
I have the following sample code:
import paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('35.***.***.**', username='user', password='pass')
stdin, stdout, stderr = ssh.exec_command("sudo su -")
stdin, stdout, stderr = ssh.exec_command("ls -l")
stdout.readlines()
Now - I am not sure which username or password I am supposed to use.
When I run this code, I do not get the list of files and directories in my root as I want, but I do get a list of files and directories in the default user account's home - so it is connecting.
My goal is to connect to a gce instance, run a command, and that is it! For some reason it is trickier than I anticipated. Am I doing something wrong here?
If you are facing a similar use case, you can explore gcloud compute ssh. It worked for me, but I cannot say whether this is best practice or not.
My solution was something like the following:
import subprocess

def check_for_completion(instance_name=""):
    # Run a command on the instance over SSH via the gcloud CLI.
    cmd = ("gcloud compute ssh %s --zone=us-east1-b "
           "--command=\"sudo -S -i -u root -p '' ls /root/temp/ \"" % instance_name)
    try:
        res = subprocess.check_output(cmd, shell=True)
        items = res.decode().split('\n')
        return {'response': items, 'complete': False}
    except subprocess.CalledProcessError:
        # A non-zero exit from gcloud is treated as completion.
        return {'response': None, 'complete': True}
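For example, the check can then be polled until it flips (the instance name and polling interval here are just placeholders):

import time

# Keep polling until the gcloud command starts failing,
# which check_for_completion() reports as complete=True.
while not check_for_completion(instance_name="my-instance")['complete']:
    time.sleep(30)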
Essentially, I need to access a computer, say machine A, which is only accessible via the internal network of my company. I used to be able to set up tcprelay port forwarding to accomplish this but that pipeline has been disabled due to some potential security flaws.
Let’s say my company general network is at
company@10.0.0.1
and the specific machine i want to work with is at
machine@10.0.0.3
Both accounts have password ‘password’
Via the terminal and shell commands, I can just hop there using a single command, as described in https://askubuntu.com/a/311457,
or, in steps, it would be:
[on my account] ssh company@10.0.0.1
[on my account] enter password
[on company network] ssh machine@10.0.0.3
[on company network] enter password again
And I’d be logged into the machine I need to communicate with.
However, after hacking away all afternoon I could not get this working with Paramiko. I tried setting up the connection then issuing a client.exec_command() but just cannot get a handle for the specific machine. The rest of my scripts relies on having a paramiko client that can receive commands and return responses, so it would be a very heavy overhead for me to go propagate all changes were I to switch to say fabric or subprocess.
The closest I got to was:
ssh.connect('10.0.0.1', username='company', password='password')
chan = ssh.get_transport().open_session()
chan.get_pty()
chan.exec_command('ssh machine@10.0.0.3')
print chan.recv(1024)
which returned the 'enter password' prompt, but running chan.send('password') just ends with a hang.
I’m pulling my hair out at this point and am just reading through the documentation hoping to find what concept I’m missing.
If anyone can give some advice I’d really appreciate it.
Thanks!
An alternative is to avoid entering a password when logging in to the other machine.
This can be done by using ssh-keygen.
Log in to the first machine (A) as user 'first':
$ ssh-keygen -t rsa
--> Don't enter any passphrase when requested
--> Note down the line "Your public key has been saved in /home/first/.ssh/"
--> This file is the public key of machine 'A'
Now log in to the second machine (B) using ssh.
Then check for the ~/.ssh folder. If there is no such folder, create one.
Create a file named 'authorized_keys' under ~/.ssh/.
Copy the content of the 'id_rsa.pub' file from the 'first' user's login (under /home/first/.ssh/id_rsa.pub) into that 'authorized_keys' file.
Now you can log in to the second machine from the first, without entering a password, through your script.
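Applied to the question's two-hop setup, a minimal paramiko sketch, assuming the key pair was generated on the company host and its public key copied to the target machine (the trailing hostname is just a sample remote command):

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# First hop still authenticates with the password, as in the question.
ssh.connect('10.0.0.1', username='company', password='password')
# With the public key copied into the machine's authorized_keys, the inner ssh
# no longer prompts, so a plain exec_command reaches the second machine.
stdin, stdout, stderr = ssh.exec_command('ssh machine@10.0.0.3 hostname')
print(stdout.read().decode())
ssh.close()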
I worked on a project where a script had to log in using a username/password over SSH and then do the same thing again to another host. I had no control over network ACLs, and SSH keys were not allowed for some reason. You'll need to add paramiko_expect. Here's how I got it to work:
import paramiko
from paramiko_expect import SSHClientInteraction
user1 = 'admin'
pass1 = 'admin'
user2 = 'root'
pass2 = 'root'
# not needed for this example, but included for reference
user_prompt = '.*\$ '
# will match root user prompt
root_prompt = '.*# '
# will match Password: or password:
pass_prompt = '.*assword: '
# SSH to host1
ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(hostname='host1', username=user1, password=pass1)

# Interact with SSH client
with SSHClientInteraction(ssh_client, display=True) as interact:
    # Send the command to SSH as root to the final host
    interact.send('ssh {}@host2'.format(user2))
    # Expect the password prompt
    interact.expect(pass_prompt)
    # Send the root password
    interact.send(pass2)
    # Expect the root prompt
    interact.expect(root_prompt)
ssh_client.close()
One caveat: if host1 has never connected to host2 over SSH, it will get a host key confirmation prompt and the expect will time out. You can change the configuration on host1, or just SSH to host1, then from host1 SSH to host2, type yes, and press enter.
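If changing the configuration on host1 is not practical, one alternative (verify it is acceptable under your security policy) is to relax host key checking only for that inner hop:

# Variant of the send() above: skip the interactive host key confirmation
# for the second hop so the expect() does not time out on first connect.
interact.send('ssh -o StrictHostKeyChecking=no {}@host2'.format(user2))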
Is it possible to have local and remote tasks execute from within the same task method?
e.g., I want to do something like the following:
@fabric.api.task
def Deploy():
    PrepareDeploy()
    PushDeploy()
    execute(Extract())
    execute(Start())
Where PrepareDeploy and PushDeploy are local tasks (executing only locally, via the fabric.api.local() method):
@fabric.api.task
@fabric.decorators.runs_once
def PrepareDeploy():

@fabric.api.task
@fabric.decorators.runs_once
def PushDeploy():
And Extract/Start are methods that should be run on the remote hosts themselves:
@fabric.api.task
def Extract():

@fabric.api.task
def Start():
However, when I try to do fab Deploy, I get something like:
[remote1.serv.com] Executing task 'Deploy'
[localhost] local: find . -name "*.java" > sources.txt
...
The first line seems wrong to me (and in fact, causes errors).
You can spawn a new task and define which hosts it should run on; for an example, see how to create a RabbitMQ cluster when all hosts are provisioned with Puppet with the same Erlang cookie.
See around line 114, where the tasks are executed on specific hosts:
https://gist.github.com/nvtkaszpir/17d2e2180771abd93c46
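Applied to the layout in the question, a minimal sketch of that idea (Fabric 1.x assumed; the remote commands are placeholders of my own):

from fabric.api import task, local, run, execute, runs_once

@task
@runs_once
def Deploy():
    # Local preparation runs once, on the machine invoking fab.
    local('find . -name "*.java" > sources.txt')
    # Remote steps are spawned explicitly on the target hosts.
    execute(Extract, hosts=['remote1.serv.com'])
    execute(Start, hosts=['remote1.serv.com'])

@task
def Extract():
    run('tar xf /tmp/deploy.tar')  # placeholder remote command

@task
def Start():
    run('service myapp start')  # placeholder remote command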
I hope this helps.
I want to make a task use a different set of hosts (role) depending on which network I'm currently in. If I'm in the same network as my servers, I don't need to go through the gateway.
Here's a snippet from my fabfile.py:
env.use_ssh_config = True
env.roledefs = {
    'rack_machines': ['rack4', 'rack5', 'rack6', 'rack7'],
    'external_rack_machines': ['erack4', 'erack5', 'erack6', 'erack7']
}
@roles('rack_machines')
def host_type():
    run('uname -s')
So, for my task host_type(), I'd like its role to be rack_machines if I'm in the same network as rack4, rack5, etc. Otherwise, I'd like its role to be external_rack_machines, therefore going through the gateway to access those same machines.
Maybe there's a way to do this with ssh config alone. Here's a snippet of my ssh_config file as well:
Host erack4
    HostName company-gw.foo.bar.com
    Port 2261
    User my_user

Host rack4
    HostName 10.43.21.61
    Port 22
    User my_user
Role definitions are taken into account after the fabfile module has been imported. So you can place some code in your fabfile which executes on import, detects the network, and sets the appropriate roledefs.
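A minimal sketch of that first option; the reachability probe against rack4's internal address (10.43.21.61, taken from the ssh_config above) is just one possible check and an assumption on my part:

import socket

from fabric.api import env

def _inside_company_network(probe_host='10.43.21.61', port=22, timeout=1):
    """Return True if a rack's internal address is directly reachable."""
    try:
        socket.create_connection((probe_host, port), timeout=timeout).close()
        return True
    except socket.error:
        return False

# This runs at import time, i.e. whenever fab loads the fabfile,
# so tasks decorated with @roles('rack_machines') pick up the right host set.
if _inside_company_network():
    env.roledefs['rack_machines'] = ['rack4', 'rack5', 'rack6', 'rack7']
else:
    env.roledefs['rack_machines'] = ['erack4', 'erack5', 'erack6', 'erack7']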
A second way to achieve the goal is to use a "flag task": a task which does nothing but set the appropriate roledefs. I.e.:
hosts = {
    "rack": ["rack1", "rack2"],
    "external_rack": ["external_rack1", "external_rack2"]
}
env.roledefs = {"rack_machines": hosts["rack"]}

@task
def set_hosts(hostset="rack"):
    if hostset in hosts:
        env.roledefs["rack_machines"] = hosts[hostset]
    else:
        print "Invalid hostset"

@roles("rack_machines")
def business():
    pass
And invoke it like this: fab set_hosts:external_rack business