I had been using Parse as my project's backend. Now that I have migrated the data to a local server and successfully got the dashboard working, I cannot find a way to access the api.parse.com/1/ API.
I used to make REST requests from Python, which essentially establishes a socket connection to api.parse.com on port 443. Now I am trying to connect to localhost on port 1337, where the parse-server instance is running, but I have not been able to access the API the same way as before.
One thing to note is that I can successfully curl a basic JSON response with requests like
curl -X GET -H ... http://localhost:1337/parse/classes/_User
The question is: what connection now replaces api.parse.com for locally hosted parse-server instances?
Ah, I found the answer:
You make the socket connection to the address and port where parse-server is running, and then instead of doing
conn.request("GET", "/1/login?%s"%...)
you do:
conn.request("GET", "/parse/login?%s"%...)
Hope this helps.
I am using the following script to get issues from Jira.
from jira import JIRA
options = {'server': 'https://it.company.com/'}
jira = JIRA(options, basic_auth=('user', 'password'), max_retries=1)
issues = jira.search_issues('project="Web"', startAt=0, maxResults=50)
I want to replace https://it.company.com/ with https://ip:port.
I used ping to get the IP.
I used nmap to check ports, but no matter what https://ip:port combination I use, I can't get a connection. I also tried these ports.
How can I find out which IP and port JIRA() is accessing?
The https protocol uses port 443 by default. Refer to Wikipedia for details.
However, accessing a server via https://server_name/ is different from accessing it via https://server_ip_address/. This is because during TLS negotiation, server_name is passed to the server via TLS SNI (Server Name Indication). This way, multiple virtual websites may be hosted at the same server_ip_address. See Wikipedia for details.
If the script works and you just want to know what the connection looks like, I recommend letting it run and executing netstat -ano in the background.
If the script doesn't work and you just want to know where it tries to connect, I recommend installing Wireshark.
Edit: In any case, you (most likely) won't be able to replace the name with ip:port, because servers treat HTTP requests to an IP address differently from requests to a name.
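If you just want to see which address Python resolves the hostname to, a quick check (it.company.com being the server from the question; an https:// URL with no explicit port implies 443):

import socket

print(socket.gethostbyname('it.company.com'))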
Ask the Jira admin to tell you. The port is configured in conf/server.xml like any Tomcat app, or there may be a reverse proxy such as nginx configured in front of Jira.
I need to send a request to a web server of mine to start a stream. The web server is located at 0.0.0.0 (of course I can change the address).
How can I send a "GET" request to that server?
I already tried httplib, urllib2, and urllib3, and they don't seem to work with an IP address.
I know that a local DNS server could map the address to a URL, but having to set that up every time I want to run the code on a new network is not the goal.
Thanks a lot.
You could use requests:
import requests

requests.get('http://0.0.0.0')
or, even better,
s = requests.Session()
s.get('http://0.0.0.0')
r = s.get('https://httpbin.org/cookies')
to keep the connection persistent, which is probably more like what you want.
See more about requests sessions at http://docs.python-requests.org/en/master/user/advanced/
Or you could convert the IP address to a hostname using socket.gethostbyaddr(ip), and then use that hostname with urllib.
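A minimal sketch of that idea, assuming 192.168.1.10 stands in for your server's address (note that gethostbyaddr raises socket.herror when the address has no reverse DNS entry):

import socket
import urllib2

hostname = socket.gethostbyaddr('192.168.1.10')[0]  # placeholder address
print(urllib2.urlopen('http://%s/' % hostname).read())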
It doesn't work because you probably haven't included the web protocol to use (i.e. HTTP or HTTPS). Try it like this:
import urllib2
urllib2.urlopen('http://0.0.0.0')
I have a Scrapy crawler and I want to rotate the IP so my application will not be blocked. I am setting the IP in Scrapy using request.meta['proxy'] = 'http://51.161.82.60:80', but this is a VM's IP. My question is: can a VM's or machine's IP be used for Scrapy, or do I need a proxy server?
Currently I am doing this. It does not throw any error, but when I get a response from http://checkip.dyndns.org it shows my own IP, not the IP I set in meta. That is why I want to know whether I need a proxy server.
The reason you are getting your own IP is that your VM is 'transparent'. You would need to intercept your request at the VM, remove tracking headers such as X-Forwarded-For, and your server has to know who to respond to when it receives the response from the website you are crawling.
The simplest solution, though, is to install a proxy service on your VM, for example Squid, then set forwarded_for off to make it an anonymous proxy server. There may be other request options to tweak to make it truly anonymous. Remember to restrict access to your whitelisted IP addresses with acl specialIP src x.x.x.x and http_access allow specialIP in /etc/squid/squid.conf, as sketched below. The default port of Squid is 3128.
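Putting those directives together, the relevant part of /etc/squid/squid.conf might look roughly like this (x.x.x.x being your crawler's IP):

acl specialIP src x.x.x.x
http_access allow specialIP
http_access deny all
forwarded_for off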
You definitely need a proxy server. meta is only a field on the request object on your side; the server still sees the public IP that is actually connecting, at the TCP connection layer.
In my Pylons config file, I have:
[server:main1]
port = 9090
...config here...
[server:main2]
port = 9091
...config here...
Which are run using:
paster serve --server-name=main1 ...(more stuff)...
paster serve --server-name=main2 ...(more stuff)...
Now, using HAProxy and stunnel, I have all http requests going to main1 and all https requests going to main2. I would like some of my controllers to react a little differently based on whether they are being requested over http or https, but pylons.request.scheme always thinks it is http even when it is not.
Seeing as I know that main2 always handles the https requests, is there a way for the controller to determine what server name it was run under, or what its id is?
I got around this by just changing the workflow to not have to react differently based on what protocol it's under. It doesn't look like there's a way to pass a unique arbitrary identifier to each separate process that it can read.
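That said, if telling the two processes apart by the port they bind would be enough, the standard WSGI environ exposes it; a minimal sketch (not tested against this exact HAProxy/stunnel setup):

from pylons import request

# main1 binds 9090 (http traffic), main2 binds 9091 (https traffic)
is_https = request.environ.get('SERVER_PORT') == '9091'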
I'm trying to connect to a MongoDB instance through a Python socket. The URL looks like this:
username:password@host.com:port
How can I connect to this with a Python socket?
The following code gives me this error: [Errno -5] No address associated with hostname
import socket
from tornado import iostream

# this is the part that fails: socket.connect() expects a plain hostname,
# not a 'username:password@host' string
full_url = '%s:%s@%s' % (username, password, host)
s = socket.socket()
s.connect((full_url, port))
stream = iostream.IOStream(s)
EDIT: The reason I ask is that asyncmongo doesn't support this type of URL right now. I'm trying to see if I can write a patch. The asyncmongo library connects using a socket like the one in the code above.
You should use a driver to connect to MongoDB. If you are using Tornado (it looks like you intend to do so), try asyncmongo; if you are using a threaded web server/application framework (Django, Pylons, etc.) you can use PyMongo directly.
Edit: As for why this code doesn't work: the socket module doesn't accept URLs for connections, just a hostname and port. It is a low-level library. To connect to (web) URLs, consider using urllib2 or httplib.
Edit 2: Authentication in MongoDB is not handled at the transport level, it's handled at the application level. I suggest you first read Implementing Authentication in a Driver, and then take a look at how PyMongo implements authentication (in connection.py and database.py). You'll also need to port or reimplement the MongoDB connection URI parsing for asyncmongo, which is documented here.
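A rough sketch of that split for the username:password@host:port form in the question (a real implementation, per the connection-URI documentation, would also handle the mongodb:// prefix, multiple hosts, and options; parse_simple_mongo_url is a hypothetical helper):

import socket

def parse_simple_mongo_url(url):
    # split 'user:pass@host:port' into its pieces
    credentials, _, address = url.rpartition('@')
    username, _, password = credentials.partition(':')
    host, _, port = address.partition(':')
    return username, password, host, int(port or 27017)

username, password, host, port = parse_simple_mongo_url('alice:secret@db.example.com:27017')
s = socket.socket()
s.connect((host, port))  # connect with host and port only
# ...then authenticate at the application level, as described above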