I'm trying to use fabric to deploy a Django project and I get this error when I run hg pull:
[myusername.webfactional.com] run: hg pull
[myusername.webfactional.com] out: remote: Warning: Permanently added the RSA host key for IP address '207.223.240.181' to the list of known hosts.
[myusername.webfactional.com] out: remote: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
[myusername.webfactional.com] err: abort: no suitable response from remote hg!
Fatal error: run() encountered an error (return code 255) while executing 'hg pull'
I can run other Mercurial commands like hg status and hg log just fine from my fabfile.
I have generated an SSH key on the server and added it to my Bitbucket account. This works: I can SSH in and run hg pull fine; it only fails when run through Fabric.
This is my fabfile:
from __future__ import with_statement
from fabric.api import *
env.hosts = ['myusername.webfactional.com']
env.user = "myusername"
def development():
    # Update files
    local("hg push")
    with cd("~/webapps/mysite/mysite"):
        run("hg pull")
    # Update database
    with cd("~/webapps/mysite/mysite"):
        run("python2.6 manage.py syncdb")
        run("python2.6 manage.py migrate")
    # Reload apache
    run("~/webapps/mysite/apache2/bin/restart")
Any ideas?
EDIT:
Got this working using HTTPS, so instead of
hg pull
I'm using
hg pull https://myusername@bitbucket.org/myusername/mysite
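In the fabfile that change is just the URL passed to run() (a sketch using the placeholder URL above):
with cd("~/webapps/mysite/mysite"):
    run("hg pull https://myusername@bitbucket.org/myusername/mysite")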
Can't reproduce.
zada$ fab development
[ostars.com] Executing task 'development'
[ostars.com] run: hg pull
[ostars.com] out: pulling from ssh://hg@bitbucket.org/Zada/b
[ostars.com] out: no changes found
Done.
Disconnecting from ostars.com... done.
zada$ hg --version
Mercurial Distributed SCM (version 1.6.3)
zada$ ssh ostars.com "hg --version"
Mercurial Distributed SCM (version 1.6)
zada$ fab --version
Fabric 0.9.2
Possible reasons: a version mismatch, or just a glitch on Bitbucket :)
Try running hg pull with more verbose output (for example hg pull --debug) to see what it is actually doing.
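In the fabfile that would look roughly like this (same path as in the task above; --debug is a standard Mercurial global option):
with cd("~/webapps/mysite/mysite"):
    run("hg pull --debug")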
To use SSH to clone, pull, or push a repository on Bitbucket, follow these instructions (the document covers Mercurial on Mac OS X or Linux):
https://confluence.atlassian.com/pages/viewpage.action?pageId=270827678
If you want to set up other SSH keys to work with Bitbucket, here is the full documentation:
https://confluence.atlassian.com/display/BITBUCKET/How+to+install+a+public+key+on+your+Bitbucket+account
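Once the key is in place on the server, one quick check is to run the Bitbucket SSH test from inside a Fabric task, so you can see whether the key works under Fabric's non-interactive shell (a sketch; hg@bitbucket.org is the SSH user shown in the pull output above):
def check_bitbucket_ssh():
    # The test may exit non-zero even when authentication works, so don't abort.
    with settings(warn_only=True):
        run("ssh -T hg@bitbucket.org")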
I'm new to Jenkins and am trying to set up a server to run selenium tests from a GitHub repo. I'm sure that I'm doing something wrong, likely several things, but haven't been able to figure it out.
I have configured the selenium plugin to use the default Selenium hub port 4444.
Project GitHub Configuration
Can't figure out why I'm getting this error. The credentials match the created username and ssh key. I can even access the repo by clicking on GitHub in the project dashboard.
Project Shell Execution Steps
The before-build execution steps. These are the commands I use in the terminal to run the tests locally.
When I build the job it gives the following log:
Started by <user>
Building in workspace /Users/Shared/Jenkins/Home/workspace/Tutorial
> git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url https://github.com/<repo address>.git # timeout=10
Fetching upstream changes from https://github.com/<repo address>.git
> git --version # timeout=10
using GIT_SSH to set credentials
> git fetch --tags --progress https://github.com/<repo address>.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from https://github.com/<repo address>.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:809)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1076)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1107)
at hudson.scm.SCM.checkout(SCM.java:496)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1281)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:604)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1728)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:405)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags --progress https://github.com/<repo address>.git +refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout:
stderr: remote: Invalid username or password.
fatal: Authentication failed for 'https://github.com/<repo address>.git/'
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1877)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1596)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$300(CliGitAPIImpl.java:71)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:348)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:807)
... 11 more
ERROR: null
Finished: FAILURE
If you are using an SSH key for Jenkins to authenticate, try using the SSH URL, e.g. git@github.com:FOO/BAR.git, instead of the HTTPS one.
Finally I migrated my development env from runserver to gunicorn/nginx.
It'd be convenient to replicate the autoreload feature of runserver in gunicorn, so the server restarts automatically when the source changes. Otherwise I have to restart the server manually with kill -HUP.
Any way to avoid the manual restart?
While this is an old question, note that ever since version 19.0 gunicorn has had the --reload option.
So no third-party tools are needed any more.
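You can pass --reload on the command line or set it in a gunicorn config file, which is plain Python (a minimal sketch; the bind address, worker count, and WSGI module below are placeholders):
# gunicorn.conf.py -- development settings
bind = "127.0.0.1:8000"   # placeholder address
workers = 1
reload = True             # restart workers when application code changes
Start the server with something like gunicorn -c gunicorn.conf.py mysite.wsgi:application.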
One option would be to use --max-requests to limit each spawned process to serving only one request, by adding --max-requests 1 to the startup options. Every newly spawned process then sees your code changes, and in a development environment the extra startup time per request should be negligible.
Bryan Helmig came up with this and I modified it to use run_gunicorn instead of launching gunicorn directly, to make it possible to just cut and paste these three commands into a shell in your Django project root folder (with your virtualenv activated):
pip install watchdog -U
watchmedo shell-command --patterns="*.py;*.html;*.css;*.js" --recursive --command='echo "${watch_src_path}" && kill -HUP `cat gunicorn.pid`' . &
python manage.py run_gunicorn 127.0.0.1:80 --pid=gunicorn.pid
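If you'd rather drive this from Python than from the watchmedo one-liner, a rough equivalent using watchdog's API looks like this (a sketch; it assumes gunicorn writes its pid to gunicorn.pid in the current directory, as in the command above):
import os
import signal
import time

from watchdog.events import PatternMatchingEventHandler
from watchdog.observers import Observer

class ReloadHandler(PatternMatchingEventHandler):
    def on_any_event(self, event):
        # Signal the gunicorn master to gracefully reload its workers.
        with open("gunicorn.pid") as f:
            os.kill(int(f.read().strip()), signal.SIGHUP)

observer = Observer()
observer.schedule(
    ReloadHandler(patterns=["*.py", "*.html", "*.css", "*.js"]),
    ".", recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()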
I use git push to deploy to production and set up git hooks to run a script. The advantage of this approach is that you can also do your migration and package installation at the same time. https://mikeeverhart.net/2013/01/using-git-to-deploy-code/
mkdir -p /home/git/project_name.git
cd /home/git/project_name.git
git init --bare
Then create a script /home/git/project_name.git/hooks/post-receive.
#!/bin/bash
GIT_WORK_TREE=/path/to/project git checkout -f
source /path/to/virtualenv/activate
pip install -r /path/to/project/requirements.txt
python /path/to/project/manage.py migrate
sudo supervisorctl restart project_name
Make sure to chmod u+x post-receive, and add the user to sudoers so it is allowed to run sudo supervisorctl without a password. https://www.cyberciti.biz/faq/linux-unix-running-sudo-command-without-a-password/
From my local/development machine, I set up a git remote that lets me push to the production server:
git remote add production ssh://user_name@production-server/home/git/project_name.git
# initial push
git push production +master:refs/heads/master
# subsequent push
git push production master
As a bonus, you will get to see all the prompts as the script is running. So you will see if there is any issue with the migration/package installation/supervisor restart.
I am trying to set up Salt Stack for local development, but in masterless mode.
I have copied my states (top.sls, mystate.sls) to /srv/salt.
I have followed the instructions on the local development page and the salt masterless quickstart page, but when I run
$ sudo /home/vagrant/.virtualenvs/myenv/bin/salt-call -c /home/vagrant/.virtualenvs/myenv/etc/salt --local salt.highstate -l debug
All I get is
[DEBUG ] Could not LazyLoad salt.highstate
'salt.highstate' is not available.
I'm running salt in a vagrant ubuntu/trusty64 virtualbox virtual machine on a Mac.
It seems like other modules load (I see them in the debug listing) but for some reason highstate (highstate.py?) is not being loaded.
What am I doing wrong? Is there something additional I have to do for masterless development?
I got help on the #salt IRC channel from whytewolf: the problem was that the command should be state.highstate (not salt.highstate):
$ sudo /home/vagrant/.virtualenvs/myenv/bin/salt-call -c /home/vagrant/.virtualenvs/myenv/etc/salt --local state.highstate -l debug
Problem solved!
I'm trying to understand the GitHub ssh configuration with Ansible (I'm working on the Ansible: Up & Running book). I'm running into two issues.
Permission denied (publickey) -
When I first ran the ansible-playbook mezzanine.yml playbook, I got a permission denied:
failed: [web] => {"cmd": "/usr/bin/git ls-remote '' -h refs/heads/HEAD", "failed": true, "rc": 128}
stderr: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
msg: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
FATAL: all hosts have already failed -- aborting
Ok, fair enough, I see several people have had this problem. So I jumped to appendix A on running Git with SSH, which said to run the ssh-agent and add the id_rsa key:
eval `ssh-agent -s`
ssh-add ~/.ssh/id_rsa
Output: Identity added. I ran ssh-add -l to check and got the long string: 2048 e3:fb:... But I still got the same error. So I checked the GitHub docs on SSH key generation and troubleshooting, which recommended updating the SSH config file on my host machine:
Host github.com
User git
Port 22
Hostname github.com
IdentityFile ~/.ssh/id_rsa
TCPKeepAlive yes
IdentitiesOnly yes
But this still produces the same error. So at this point, I start thinking it's my rsa file, which leads me to my second problem.
Key Generation Issues - I tried to generate an additional key to use, because the GitHub test threw another "Permission denied (publickey)" error.
Warning: Permanently added the RSA host key for IP address '192.30.252.131' to the list of known hosts.
Permission denied (publickey).
I followed the Github instructions from scratch and generated a new key with a different name.
ssh-keygen -t rsa -b 4096 -C "me@example.com"
I didn't enter a passphrase and saved it to the .ssh folder with the name git_rsa.pub. I ran the same test and got the following:
$ ssh -i ~/.ssh/git_rsa.pub -T git@github.com
###########################################################
# WARNING: UNPROTECTED PRIVATE KEY FILE! #
###########################################################
Permissions 0644 for '/Users/antonioalaniz1/.ssh/git_rsa.pub' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: ~/.ssh/github_rsa.pub
Permission denied (publickey).
I checked the permissions and did a chmod 700 on the file, and I still get Permission denied (publickey). I even attempted to enter the key into my GitHub account, but first got a message that the key file needs to start with ssh-rsa. So I started researching and hacking, starting with just entering the long string in the file (it started with --BEGIN PRIVATE KEY--, but I omitted that part after it failed); however, GitHub won't accept it, saying it's invalid.
This is my Ansible command in the YAML file:
- name: check out the repository on the host
  git: repo={{ repo_url }} dest={{ proj_path }} accept_hostkey=yes
  vars:
    repo_url: git@github.com:lorin/mezzanine-example.git
This is my ansible.cfg file with ForwardAgent configured:
[defaults]
hostfile = hosts
remote_user = vagrant
private_key_file = .vagrant/machines/default/virtualbox/private_key
host_key_checking = False
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o ForwardAgent=yes
The box is an Ubuntu Trusty64 VM running via Vagrant on Mac OS. If anyone could clue me in on the file permissions and/or GitHub key generation, I would appreciate it.
I suspect the key permissions issue is because you are passing the public key instead of the private key as the argument to "ssh -i". Try this instead:
ssh -i ~/.ssh/git_rsa -T git@github.com
(Note that it's git_rsa and not git_rsa.pub).
If that works, then make sure it's in your ssh-agent. To add:
ssh-add ~/.ssh/git_rsa
To verify:
ssh-add -l
Then check that Ansible respects agent forwarding by doing:
ansible web -a "ssh-add -l"
Finally, check that you can reach GitHub via ssh by doing:
ansible web -a "ssh -T git#github.com"
You should see something like:
web | FAILED | rc=1 >>
Hi lorin! You've successfully authenticated, but GitHub does not provide shell access.
I had the same problem; it took me some time, but I found the solution.
The problem is that the URL is incorrect.
Just try to change it to:
repo_url: git://github.com/lorin/mezzanine-example.git
I ran into this issue and discovered it by turning verbosity up on the ansible commands (very useful for debugging).
Unfortunately, SSH often throws error messages that don't quite lead you in the right direction ("permission denied" is very generic, though to be fair it is often thrown when there is an actual file permission issue). Anyway, running the Ansible test command with verbosity on helps recreate the issue as well as verify when it is solved.
ansible -vvv all -a "ssh -T git@github.com"
Again, the setup I use (and a typical one) is to load your ssh key into the agent on the control machine and enable forwarding.
The steps are found in GitHub's helpful SSH docs.
It also stood out to me that when I SSH'd to the box itself via the vagrant command and ran the test, it succeeded, so I had narrowed it down to how Ansible was forwarding the connection. For me what eventually worked was setting
[paramiko_connection]
record_host_keys = False
In addition to the other config setting that controls host key verification,
host_key_checking = False
which essentially adds
-o StrictHostKeyChecking=no
to the ssh args for you, and
-o UserKnownHostsFile=/dev/null
was added to the ssh args as well
found here:
Ansible issue 9442
Again, this was on Vagrant VMs; more careful consideration around host key verification should be given on actual servers.
Hope this helps
I'm trying to run a Python script I've uploaded as part of my AWS Elastic Beanstalk application from my development machine, but can't figure out how to. I believe I've located the script correctly, but when I attempt to run it under SSH, I get an import error.
For example, I have a Flask-Migrate migration script as part of my application (pretty much the same as the example in the documentation), but after successfully SSHing to my EB instance with
> eb ssh
and locating the script with
$ sudo find / -name migrate.py
when I run in the directory (/opt/python/current) where I located it with
$ python migrate.py db upgrade
at the SSH prompt I get
Traceback (most recent call last):
File "db_migrate.py", line 15, in <module>
from flask.ext.script import Manager
ImportError: No module named flask.ext.script
even though my requirements.txt (present along with the rest of my files in the same directory) has flask-script==2.0.5.
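For reference, the script follows the standard pattern from the Flask-Migrate docs, roughly like this (the application module name is a placeholder, not my actual one):
from flask.ext.script import Manager
from flask.ext.migrate import Migrate, MigrateCommand
from myapp import app, db  # placeholder application module

migrate = Migrate(app, db)
manager = Manager(app)
manager.add_command('db', MigrateCommand)

if __name__ == '__main__':
    manager.run()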
On Heroku I can accomplish all of this in two steps with
> heroku run bash
$ python migrate.py db upgrade
Is there equivalent functionality on AWS? How do I run a Python script that is part of an application I uploaded in an AWS SSH session? Perhaps I'm missing a step to set up the environment in which the code runs?
To migrate your database, the best approach is to use container_commands; they are commands that will run every time you deploy your application. There is a good example in the Elastic Beanstalk documentation (Step 6):
container_commands:
  01_syncdb:
    command: "django-admin.py syncdb --noinput"
    leader_only: true
The reason you're getting an ImportError is that Elastic Beanstalk installs your packages in a virtualenv. Before running arbitrary scripts from your application over SSH, first change to the directory containing your (latest) code with
cd /opt/python/current
and then activate the virtualenv
source /opt/python/run/venv/bin/activate
and set the environment variables (that your script probably expects)
source /opt/python/current/env
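After that, the python migrate.py db upgrade command from the question should be able to import flask.ext.script, since it now runs inside the same virtualenv the application uses.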