Installing Jupyter in AWS EC2 - python

I have been following the steps below to install Jupyter in AWS EC2:
https://chrisalbon.com/aws/basics/run_project_jupyter_on_amazon_ec2/
I set 8888 as the port.
I then launched jupyter notebook.
Then I went to my instance URL:
https://ec2-XX-XX-XX-XXX.eu-west-3.compute.amazonaws.com:8888/
I have a public IP so I also tried https://XX-XX-XX-XXX:8888/
But neither URL loads anything.
I made sure that 8888 port is authorized in security groups on my EC2 instance.
Any idea how I can dig into where the issue is?
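(One way to narrow this down, before involving the browser, is to test the raw TCP connection from your local machine; a minimal sketch in Python, keeping the placeholder hostname from above:)
# Plain TCP reachability check for the notebook port; replace the placeholder host
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(5)
result = sock.connect_ex(("ec2-XX-XX-XX-XXX.eu-west-3.compute.amazonaws.com", 8888))
print("port reachable" if result == 0 else "connect failed, errno %d" % result)
sock.close()
If this fails, the problem is networking (security group, instance firewall, or Jupyter not listening externally); if it succeeds, the problem is on the HTTPS/browser side.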
[EDIT 1]:
I followed these steps:
c = get_config()
# Kernel config
c.IPKernelApp.pylab = 'inline' # if you want plotting support always in your notebook
# Notebook config
c.NotebookApp.certfile = u'/home/ec2-user/Notebooks/certs/Mycert_file.pem' # location of your certificate file
c.NotebookApp.ip = '*'
c.NotebookApp.open_browser = False # so that the notebook does not open a browser by default
c.NotebookApp.password = u'sha1:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' # the hashed password we generated above
# Set the port to 8888, the port we set up in the AWS EC2 set-up
c.NotebookApp.port = 8888
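(For reference, the hashed password above can be generated in Python; a minimal sketch, assuming the classic notebook package that the tutorial uses:)
# Prompts for a password twice and prints the 'sha1:...' hash for the config file
from notebook.auth import passwd
print(passwd())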
[EDIT 2]:
Prior to these steps I did this:
sudo openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout Mycert_file.pem -out Mycert_file.pem
Now, to locate my .pem file I ran: find /home -name "*.pem"
I found the location of my .pem file which is /home/ec2-user/Notebooks/certs/Mycert_file.pem
[EDIT 3]:
I will also add that I am already running an RStudio session on this instance, on port 8787. I assume this is not impacting my Jupyter install, but I wanted to point it out just in case.

So I found the issue with the config file. The tutorial said to press the Esc key to save the config file, but that was not saving the file for me, so I simply used :wq! and it saved.
But I still can't make it work.
So, as advised, I used jupyter notebook --debug
Here are the logs:

Related

Nessus SSL problems [duplicate]

import requests
data = {'foo':'bar'}
url = 'https://foo.com/bar'
r = requests.post(url, data=data)
If the URL uses a self signed certificate, this fails with
requests.exceptions.SSLError: [Errno 1] _ssl.c:507: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
I know that I can pass False to the verify parameter, like this:
r = requests.post(url, data=data, verify=False)
However, what I would like to do is point requests to a copy of the public key on disk and tell it to trust that certificate.
Something like this:
r = requests.post(url, data=data, verify='/path/to/public_key.pem')
With the verify parameter you can provide a custom certificate authority bundle:
requests.get(url, verify=path_to_bundle_file)
From the docs:
You can pass verify the path to a CA_BUNDLE file with certificates of
trusted CAs. This list of trusted CAs can also be specified through
the REQUESTS_CA_BUNDLE environment variable.
The easiest is to export the variable REQUESTS_CA_BUNDLE that points to your private certificate authority, or a specific certificate bundle. On the command line you can do that as follows:
export REQUESTS_CA_BUNDLE=/path/to/your/certificate.pem
python script.py
If you have your own certificate authority and you don't want to type the export each time, you can add REQUESTS_CA_BUNDLE to your ~/.bash_profile as follows:
echo "export REQUESTS_CA_BUNDLE=/path/to/your/certificate.pem" >> ~/.bash_profile ; source ~/.bash_profile
A case where multiple certificates are needed can be handled as follows:
Concatenate the multiple root PEM files, myCert-A-Root.pem and myCert-B-Root.pem, into one file, then set REQUESTS_CA_BUNDLE to that file in ~/.bash_profile:
$ cp myCert-A-Root.pem ca_roots.pem
$ cat myCert-B-Root.pem >> ca_roots.pem
$ echo "export REQUESTS_CA_BUNDLE=~/PATH_TO/CA_CHAIN/ca_roots.pem" >> ~/.bash_profile ; source ~/.bash_profile
All of the answers to this question point to the same path: get the PEM file, but they don't tell you how to get it from the website itself.
Getting the PEM file from the website itself is a valid option if you trust the site, such as an internal corporate server. Why bother, if you already trust the site? Because it helps protect you and others from inadvertently re-using your code against a site that isn't safe.
Here is how you can get the PEM file.
Click on the lock next to the url.
Navigate to where you can see the certificates and open the certificates.
Download the PEM CERT chain.
Put the .pem file somewhere your script can access it and try verify=r"path\to\pem_chain.pem" within your requests call.
r = requests.get(url, verify=r"path\to\pem_chain.pem")
Setting export SSL_CERT_FILE=/path/file.crt should do the job.
If you're behind a corporate network firewall like I was, ask your network admin where your corporate certificates are, then:
import os
os.environ["REQUESTS_CA_BUNDLE"] = 'path/to/corporate/cert.pem'
os.environ["SSL_CERT_FILE"] = 'path/to/corporate/cert.pem'
This fixed issues I had with requests and openssl.
In a dev environment, using Poetry as virtual env provider on a Mac with Python 3.8 I used this answer https://stackoverflow.com/a/42982144/15484549 as base and appended the content of my self-signed root certificate to the certifi cacert.pem file.
The steps in detail:
cd project_folder
poetry add requests
# or if you use something else, make sure certifi is among the dependencies
poetry shell
python
>>> import certifi
>>> certifi.where()
/path/to/the/certifi/cacert.pem
>>> exit()
cat /path/to/self-signed-root-cert.pem >> /path/to/the/certifi/cacert.pem
python the_script_you_want_to_run.py
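(The append step can also be done from Python itself; a minimal sketch, using the same hypothetical paths as above:)
# Append a self-signed root cert to certifi's CA bundle; paths are placeholders
import shutil
import certifi

with open("/path/to/self-signed-root-cert.pem", "rb") as src:
    with open(certifi.where(), "ab") as bundle:
        shutil.copyfileobj(src, bundle)
Note that this edit is lost whenever the certifi package is upgraded or reinstalled.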
I know it is an old thread, but I ran into this issue recently. My Python requests code did not accept the self-signed certificate, but curl did. It turns out requests is very strict about self-signed certificates: the certificate needs to be a root CA certificate. In other words:
Basic Constraints: CA:TRUE
Key Usage: Digital Signature, Non Repudiation, Key Encipherment, Certificate Sign
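(A quick way to verify this on your own certificate, assuming the cryptography package is available; the path is a placeholder:)
# Inspect the Basic Constraints extension of a PEM certificate
# (raises ExtensionNotFound if the certificate has no such extension)
from cryptography import x509

with open("/path/to/self-signed-root-cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

basic = cert.extensions.get_extension_for_class(x509.BasicConstraints).value
print("CA:", basic.ca)  # should print True for a usable root CA cert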
In case anyone happens to land here (like I did) looking to add a CA (in my case Charles Proxy) for httplib2, it looks like you can append it to the cacerts.txt file included with the Python package.
For example:
cat ~/Desktop/charles-ssl-proxying-certificate.pem >> /usr/local/google-cloud-sdk/lib/third_party/httplib2/cacerts.txt
The environment variables referenced in other solutions appear to be requests-specific and were not picked up by httplib2 in my testing.
You may try:
settings = s.merge_environment_settings(prepped.url, {}, None, None, None)
You can read more here: http://docs.python-requests.org/en/master/user/advanced/
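(That call comes from the prepared-request workflow described in the advanced docs; a fuller sketch, with the URL as a placeholder, so the environment's REQUESTS_CA_BUNDLE is honored:)
import requests

s = requests.Session()
req = requests.Request("GET", "https://foo.com/bar")
prepped = s.prepare_request(req)

# Merge environment settings (proxies, REQUESTS_CA_BUNDLE, etc.) into this request
settings = s.merge_environment_settings(prepped.url, {}, None, None, None)
resp = s.send(prepped, **settings)
print(resp.status_code)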

Run localhost server in Google Colab notebook

I am trying to implement Tacotron speech synthesis with TensorFlow in Google Colab, using this code from a repo on GitHub. Below is my code, which works fine until the step that needs a localhost server. How can I run a localhost server in a notebook in Google Colab?
My code:
!pip install tensorflow==1.3.0
import tensorflow as tf
print("You are using Tensorflow",tf.__version__)
!git clone https://github.com/keithito/tacotron.git
%cd tacotron
!pip install -r requirements.txt
!curl https://data.keithito.com/data/speech/tacotron-20180906.tar.gz | tar xzC /tmp
!python demo_server.py --checkpoint /tmp/tacotron-20180906/model.ckpt #requires localhost
Unfortunately, running in local mode from Google Colab will not help me, because to do this I would need to download the data to my machine, and it is too large.
Below is my last output; at this point I am supposed to open localhost:8888 to complete the work. So, as mentioned above, is there any way to run a localhost server in Google Colaboratory?
You can do this by using tools like ngrok or remote.it.
They give you a URL that you can access from any browser to reach your web server running on port 8888.
Example 1: tunneling TensorBoard running on port 6006:
!wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
!unzip ngrok-stable-linux-amd64.zip
get_ipython().system_raw('tensorboard --logdir /content/trainingdata/objectdetection/ckpt_output/trainingImatges/ --host 0.0.0.0 --port 6006 &')
get_ipython().system_raw('./ngrok http 6006 &')
! curl -s http://localhost:4040/api/tunnels | python3 -c \
"import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
Running this installs ngrok on Colab and prints a link like http://c11e1b53.ngrok.io/
Documentation for ngrok
Another way of running a publicly accessible server using ngrok:
!pip install pyngrok --quiet
from pyngrok import ngrok
# Terminate open tunnels if exist
ngrok.kill()
# Setting the authtoken (optional)
# Get your authtoken from https://dashboard.ngrok.com/auth
NGROK_AUTH_TOKEN = ""
ngrok.set_auth_token(NGROK_AUTH_TOKEN)
# Open an HTTPS tunnel on port 5000 for http://localhost:5000
public_url = ngrok.connect(port="5000", proto="http", options={"bind_tls": True})
print("Tracking URL:", public_url)
You can use localtunnel to expose the port to the public internet.
Install localtunnel:
!npm install -g localtunnel
Start localtunnel:
!lt --port 8888
Navigate to the url it returns to access your web UI.

PuTTy AWS no such file or directory

1 - created the ec2 instance at AWS ubuntu
2 - downloaded the key (.pem file)
3 - since I'm using Windows, I downloaded PuTTY
4 - generated a PuTTY private key (.ppk) file with PuTTYgen
5 - I'm logged in with PuTTY (login as: ubuntu
Authenticating with public key "imported-openssh-key"
)
6 - Now need to run:
cd path/to/my/dev/folder/
chmod 400 JupyterKey.pem
ssh -i JupyterKey.pem ubuntu@11-111-111
# Doesn't work!!
So I'm connected via PuTTY, and now I'm trying to use the key (automation.pem) to connect to the AWS server and start building my Jupyter notebooks.
# First attempt
[ec2-user@ip-111-11-11-111 ~]$ cd \Users\pb\Desktop\pYTHON\AWS\server
-bash: cd: UserspbDesktoppYTHONAWSserver: No such file or directory
# Second attempt
[ec2-user@ip-111-11-11-111 ~]$ ssh -i "imported-openssh-key" ubuntu@ec2-54-67-50-191.us-west-1.compute.amazonaws.com
Warning: Identity file imported-openssh-key not accessible: No such file or directory.
The authenticity of host 'ec2-ip-111-11-11-111.us-west-1.compute.amazonaws.com (ip-111-11-11-111)' can't be established.
ECDSA key fingerprint is 11111111111111111111111111111111111111111111111111111.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ec2-11-111-111.us-west-1.compute.amazonaws.com,11-111-1191' (ECDSA) to the list of known hosts.
Permission denied (publickey).
[ec2-user@ip-172-31-28-150 ~]$
Your cd command did not work because in Linux-like file systems the directory separator is a / and not a \. A \ indicates a special character, such as \n for newline or \r for carriage return. Also, Linux-like file systems are case sensitive.
I say Linux-like because this applies to just about everything except Windows, including the Windows Subsystem for Linux, Macs, and any Unix flavor (Linux, BSD, etc.).
In your second attempt there is no file named imported-openssh-key in your current directory. You need to have the file with the key in the directory you are trying to use ssh with the -i option.
The more typical way to use ssh is from your home directory (you can get to it with cd ~ on most Linux-like systems): you create a directory called .ssh, store your keys in there, and configure a file so ssh knows how to use them (see the example entry below).
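(For example, a hypothetical ~/.ssh/config entry reusing the host and key names from this thread; the alias my-ec2 is made up:)
Host my-ec2
    HostName ec2-54-67-50-191.us-west-1.compute.amazonaws.com
    User ubuntu
    IdentityFile ~/.ssh/JupyterKey.pem
After that, running ssh my-ec2 from your own machine is enough.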
Also I believe there is now native SSH support in Windows, so you probably don't need to jump through the putty hoops anymore.
If the key file isn't on the server you will need to copy it to the Ubuntu server using scp
Hope this helps

Can't connect MongoDb on AWS EC2 using python

I have installed Mongodb 3.0 using this tutorial -
https://docs.mongodb.com/v3.0/tutorial/install-mongodb-on-amazon/
It installed fine. I have also given 'ec2-user' permissions on all the data and log folders, i.e. /var/lib/mongo and /var/log/mongodb, and have set up the conf file as well.
Now the thing is that the mongodb server always fails to start with the command
sudo service mongod start
it just says failed, nothing else.
While if I run command -
mongod --dbpath var/lib/mongo
it starts the mongodb server correctly (though I have specified the same dbpath in the .conf file as well).
What is it I am doing wrong here?
When you run sudo mongod it does not load a config file at all; it literally starts with the compiled-in defaults (port 27017, database path /data/db, etc.), which is why you got the error about not being able to find that folder. The "Ubuntu default" is only used when you point it at the config file (if you start it using the service command, this is done for you behind the scenes).
Next you ran it like this:
sudo mongod -f /etc/mongodb.conf
If there weren't problems before, there will be now: you have run the process, with your normal config (pointing at your usual dbpath and log), as the root user. That means there are now going to be a number of files in that normal MongoDB folder with the user:group of root:root.
This will cause errors when you try to start it as a normal service again, because the mongodb user (which the service will attempt to run as) will not have permission to access those root:root files, and most notably, it will probably not be able to write to the log file to give you any information.
Therefore, to run it as a normal service, we need to fix those permissions. First, make sure MongoDB is not currently running as root, then:
cd /var/log/mongodb
sudo chown -R mongodb:mongodb .
cd /var/lib/mongodb
sudo chown -R mongodb:mongodb .
That should fix it up (assuming the user:group is mongodb:mongodb), though it's probably best to verify with an ls -al or similar to be sure. Once this is done you should be able to get the service to start successfully again.
If you’re starting mongod as a service using:
sudo service mongod start
Make sure the directories defined for logpath, dbpath, and pidfilepath in your mongod.conf exist and are owned by mongod:mongod.
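(For reference, a hypothetical excerpt of the YAML-style mongod.conf that MongoDB 3.x uses, with the usual Amazon Linux package defaults; adjust the paths to match your install:)
systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log
storage:
  dbPath: /var/lib/mongo
processManagement:
  fork: true
  pidFilePath: /var/run/mongodb/mongod.pid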

Ansible with Github: Permission denied (Publickey)

I'm trying to understand the GitHub ssh configuration with Ansible (I'm working on the Ansible: Up & Running book). I'm running into two issues.
Permission denied (publickey) -
When I first ran the ansible-playbook mezzanine.yml playbook, I got a permission denied:
failed: [web] => {"cmd": "/usr/bin/git ls-remote '' -h refs/heads/HEAD", "failed": true, "rc": 128}
stderr: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
msg: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
FATAL: all hosts have already failed -- aborting
Ok, fair enough, I see several people have had this problem. So I jumped to Appendix A on running Git with SSH, which said to run ssh-agent and add the id_rsa key:
eval `ssh-agent -s`
ssh-add ~/.ssh/id_rsa
Output: Identity added. I ran ssh-add -l to check and got the long string: 2048 e3:fb:... But I got the same error. So I checked the GitHub docs on ssh key generation and troubleshooting, which recommended updating the ssh config file on my host machine:
Host github.com
User git
Port 22
Hostname github.com
IdentityFile ~/.ssh/id_rsa
TCPKeepAlive yes
IdentitiesOnly yes
But this still provides the same error. So at this point, I start thinking it's my rsa file, which leads me to my second problem.
Key Generation Issues - I tried to generate an additional key to use, because the GitHub test threw another "Permission denied (publickey)" error.
Warning: Permanently added the RSA host key for IP address '192.30.252.131' to the list of known hosts.
Permission denied (publickey).
I followed the Github instructions from scratch and generated a new key with a different name.
ssh-keygen -t rsa -b 4096 -C "me@example.com"
I didn't enter a passphrase and saved it to the .ssh folder with the name git_rsa.pub. I ran the same test and got the following:
$ ssh -i ~/.ssh/git_rsa.pub -T git@github.com
###########################################################
# WARNING: UNPROTECTED PRIVATE KEY FILE! #
###########################################################
Permissions 0644 for '/Users/antonioalaniz1/.ssh/git_rsa.pub' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: ~/.ssh/github_rsa.pub
Permission denied (publickey).
I checked the permissions and did a chmod 700 on the file, and I still get Permission denied (publickey). I even attempted to enter the key into my GitHub account, but first got a message that the key file needs to start with ssh-rsa. So I started researching and hacking. I started by just entering the long string in the file (it started with --BEGIN PRIVATE KEY--, but I omitted that part after it failed); however, GitHub is not accepting it, saying it's invalid.
This is my Ansible command in the YAML file:
- name: check out the repository on the host
  git: repo={{ repo_url }} dest={{ proj_path }} accept_hostkey=yes
  vars:
    repo_url: git@github.com:lorin/mezzanine-example.git
This is my ansible.cfg file with ForwardAgent configured:
[defaults]
hostfile = hosts
remote_user = vagrant
private_key_file = .vagrant/machines/default/virtualbox/private_key
host_key_checking = False
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o ForwardAgent=yes
The box is an Ubuntu Trusty64 VM and I am on macOS. If anyone could clue me in on the file permissions and/or GitHub key generation, I would appreciate it.
I suspect the key permissions issue is because you are passing the public key instead of the private key as the argument to ssh -i. Try this instead:
ssh -i ~/.ssh/git_rsa -T git@github.com
(Note that it's git_rsa and not git_rsa.pub).
If that works, then make sure it's in your ssh-agent. To add:
ssh-add ~/.ssh/git_rsa
To verify:
ssh-add -l
Then check that Ansible respects agent forwarding by doing:
ansible web -a "ssh-add -l"
Finally, check that you can reach GitHub via ssh by doing:
ansible web -a "ssh -T git@github.com"
You should see something like:
web | FAILED | rc=1 >>
Hi lorin! You've successfully authenticated, but GitHub does not provide shell access.
I had the same problem; it took me some time, but I found the solution.
The problem is that the URL is incorrect.
Just try changing it to:
repo_url: git://github.com/lorin/mezzanine-example.git
I ran into this issue and diagnosed it by turning verbosity up on the ansible commands (very useful for debugging).
Unfortunately, ssh often throws error messages that don't quite lead you in the right direction (permission denied is very generic, though to be fair it is often thrown when there is a file permission issue, so perhaps not so generic after all). Anyway, running the ansible test command with verbosity on helps recreate the issue as well as verify when it is solved.
ansible -vvv all -a "ssh -T git@github.com"
Again, the setup I use (and a typical one) is to load your ssh key into the agent on the control machine and enable forwarding.
The steps are found in GitHub's helpful ssh docs.
It also stood out to me that when I ssh'd to the box itself via the vagrant command and ran the test, it succeeded. So I narrowed it down to how ansible was forwarding the connection. For me, what eventually worked was setting
[paramiko_connection]
record_host_keys = False
In addition to the other config that controls host key verification,
host_key_checking = False
which essentially adds
-o StrictHostKeyChecking=no
to the ssh args for you, and
-o UserKnownHostsFile=/dev/null
was added to the ssh args as well
found here:
Ansible issue 9442
Again, this was on Vagrant VMs; more careful consideration of host key verification should be given on actual servers.
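(Putting the pieces together, a hypothetical ansible.cfg for throwaway Vagrant VMs only, since it disables host key verification entirely; every setting here comes from this thread:)
[defaults]
host_key_checking = False

[paramiko_connection]
record_host_keys = False

[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o ForwardAgent=yes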
Hope this helps
