Python time of remote web server

In Python, I am trying to get the exact time of a remote server so that I can adjust the timing of a request.
I know that I can use this code to get an NTP time:
import ntplib
from datetime import datetime, timezone

client = ntplib.NTPClient()
response = client.request('pool.ntp.org')
print(f"request packet sent LOCAL Client time: {datetime.fromtimestamp(response.orig_time, timezone.utc)}")
print(f"request packet received REMOTE Server time: {datetime.fromtimestamp(response.recv_time, timezone.utc)}")
This returns very helpful information showing that my server is nearly 2 seconds ahead of the NTP server:
request packet sent LOCAL Client time: 2023-02-17 18:14:46.008366+00:00
request packet received REMOTE Server time: 2023-02-17 18:14:44.369423+00:00
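(ntplib also exposes response.offset, which estimates the local clock's offset from the server while accounting for round-trip delay, in case you want that delta directly.)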
Is there a way to get the time of a web server, rather than an NTP server?

Is there a way to get the time of a web server, rather than an NTP server?
In most cases you should be able to find Date in the response headers; consider the following simple example:
import requests # if you do not have it, install it: pip install requests
r = requests.head("http://www.example.com") # ask just for headers
date_str = r.headers["Date"] # retrieve Date value
print(date_str) # e.g. Fri, 17 Feb 2023 18:43:06 GMT
If you need to parse the date string, you might use email.utils.parsedate or email.utils.parsedate_to_datetime (both part of the standard library).
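For example, a small sketch that turns the Date header into an aware datetime with the standard library (parsedate_to_datetime is the variant that returns a datetime directly):
import requests
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

r = requests.head("http://www.example.com")
server_time = parsedate_to_datetime(r.headers["Date"])  # timezone-aware datetime
print(server_time)                                      # e.g. 2023-02-17 18:43:06+00:00
print(datetime.now(timezone.utc) - server_time)         # rough local-vs-server delta
Keep in mind the Date header has one-second resolution and includes network latency, so this gives only a rough comparison.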

Related

Mitigating TCP connection resets in AWS Fargate

I am using Amazon ECS on AWS Fargate. My instances can access the internet, but the connection drops after 350 seconds. Roughly 5 out of every 100 requests fail with ConnectionResetError: [Errno 104] Connection reset by peer. I found a couple of suggestions to fix that issue in my server-side code, see here and here
Cause
If a connection that's using a NAT gateway is idle for 350 seconds or more, the connection times out.
When a connection times out, a NAT gateway returns an RST packet to any resources behind the NAT gateway that attempt to continue the connection (it does not send a FIN packet).
Solution
To prevent the connection from being dropped, you can initiate more traffic over the connection. Alternatively, you can enable TCP keepalive on the instance with a value less than 350 seconds.
Existing Code:
url = "url to call http"
params = {
    "year": year,
    "month": month
}
response = self.session.get(url, params=params)
To work around it, I am currently using band-aid retry logic with tenacity:
from requests.exceptions import HTTPError
from tenacity import retry, retry_if_not_exception_type, stop_after_attempt, wait_fixed

@retry(
    retry=retry_if_not_exception_type(
        HTTPError
    ),  # more specific: requests.exceptions.ConnectionError
    reraise=True,
    wait=wait_fixed(2),
    stop=stop_after_attempt(5),
)
def call_to_api():
    url = "url to call HTTP"
    params = {
        "year": year,
        "month": month
    }
    response = self.session.get(url, params=params)
So my basic question is: how can I use Python requests correctly to implement either of the solutions below?
Close the connection before 350 seconds of inactivity
Enable Keep-Alive for TCP connections
Concerning the "Close the connection before 350 seconds of inactivity" problem, there seems to be a read timeout parameter you can pass to the session.get() function call.
According to the doc "it’s the number of seconds that the client will wait between bytes sent from the server".
Which, to me, looks like an inactivity timeout.
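For illustration, a minimal sketch passing a (connect, read) timeout tuple to a requests session; the values are arbitrary examples, not recommendations:
import requests

session = requests.Session()
try:
    # 5 s to establish the connection, up to 300 s between bytes of the response
    response = session.get("https://example.com/api", timeout=(5, 300))
except requests.exceptions.Timeout:
    # retry or rebuild the request instead of reusing a stale connection
    raise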
Posting the solution for future users who face this issue while working with AWS Fargate + NAT.
We need to set the TCP keepalive settings to the values dictated by our server-side configuration; this PR helped me a lot in fixing my issue: https://github.com/customerio/customerio-python/pull/70/files
import socket

from urllib3.connection import HTTPConnection

HTTPConnection.default_socket_options = HTTPConnection.default_socket_options + [
    (socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1),  # enable TCP keepalive
    (socket.SOL_TCP, socket.TCP_KEEPIDLE, 300),   # seconds of idle time before the first probe
    (socket.SOL_TCP, socket.TCP_KEEPINTVL, 60),   # seconds between subsequent probes
]
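Since requests is built on urllib3, any Session created after these defaults are set opens its sockets with keepalive enabled (note that TCP_KEEPIDLE and TCP_KEEPINTVL are not available on every platform; macOS, for example, exposes a different constant). A quick usage sketch:
import requests

session = requests.Session()  # sockets opened by this session inherit the options above
response = session.get("https://example.com", timeout=(5, 60))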

How to get the IP address of a requesting machine using Python (on Sanic)

I am trying to get the IP address of the requesting computer on my server. I can successfully get the IP address from the request headers if the request came from a web browser; the code example is below. However, I cannot fetch the client IP if I send the request via curl/Postman. I checked the nginx log, and I found a log entry with my public IP when I sent a curl request. How can I achieve this?
PS: I am using the Sanic Framework.
client_ip = request.headers.get('x-real-ip')
In sanic's docs for Request Data:
ip (str) - IP address of the requester.
Therefore, just use
client_ip = request.ip
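For example, a minimal sketch of using it inside a handler (the app and route names are made up):
from sanic import Sanic
from sanic.response import json

app = Sanic("ip-demo")

@app.get("/whoami")
async def whoami(request):
    # request.ip is the address of the peer that opened the connection
    return json({"client_ip": request.ip})
Note that behind a reverse proxy such as nginx, request.ip will be the proxy's address; Sanic's forwarded-header/proxy configuration is the usual fix (see its docs).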

How to create a Node.js API that users can subscribe to for event notifications?

I am trying to create a Node.js API that users can subscribe to for event notifications.
I created the API below and was able to call it using Python; however, it's not clear to me how folks can subscribe to it.
How can folks subscribe to this API to get a notification when a new root build is released? What do I need to change?
node.js API
app.get("/api/root_event_notification", (req, res, next) => {
    console.log(req.query.params)
    var events = require('events');
    var eventEmitter = new events.EventEmitter();
    //Create an event handler:
    var myEventHandler = function () {
        console.log('new_root_announced!');
        res.status(200).json({
            message: "New root build released!",
            posts: req.query.params
        });
    }
});
python call
import requests

input_json = {'BATS': '678910', 'root_version': '12A12'}
url = 'http://localhost:3000/api/root_event_notification?params=%s' % input_json
response = requests.get(url)
print(response.text)
OUTPUT:
{"message":"New root build released!","posts":"{'root_version': '12A12', 'BATS': '678910'}"}
You can't just postpone sending an HTTP response for an arbitrary amount of time. Both client and server (and sometimes the hosting provider's infrastructure) will time out the HTTP request after some number of minutes. There are various tricks to try to keep the HTTP connection alive, but all have limitations.
Using web technologies, the usual options for clients to get updated server data are:
HTTP polling (the client regularly polls the server). There's also a long-polling adaptation of this that attempts to improve efficiency a bit.
WebSocket. The client makes a WebSocket connection to the server, which is a lasting, persistent connection. Then either client or server can send data/events over this connection at any time, allowing the server to efficiently push notifications to the client.
Server-Sent Events (SSE). This is a newer HTTP technology that allows one-way notifications from server to client over a long-lived HTTP connection.
Since a server typically cannot connect directly to a client due to firewalls and public IP address issues, the usual mechanism for a server to notify a client is either a persistent WebSocket connection from client to server, over which either side can then send packets, or the newer SSE, which allows server events to be sent to a client over a long-lasting connection.
The client can also "poll" the server repeatedly, but this is not really an event notification system (and not particularly efficient or timely) as much as it is some state that the client can check.
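To make the polling option concrete, here is a minimal client-side sketch in Python; the /api/latest_root endpoint and its JSON shape are hypothetical:
import time

import requests

last_seen = None
while True:
    # hypothetical endpoint returning the most recent root build as JSON
    resp = requests.get("http://localhost:3000/api/latest_root", timeout=10)
    build = resp.json().get("root_version")
    if build != last_seen:
        print(f"New root build released: {build}")
        last_seen = build
    time.sleep(30)  # poll every 30 seconds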

Printing URL parameters of an HTTP request using Python + Nginx + uWSGI

I have used this link and successfully run a Python script using uWSGI, although I just followed the doc line by line.
I have a GPS device which sends data to a remote server. The device's documentation says it connects to the server using TCP, which I assume means plain HTTP, as a simple device like a GPS tracker would not be able to do HTTPS (I hope I am right here). I have configured my Nginx server to forward all incoming HTTP requests to the Python script for processing via uWSGI.
What I want to do is simply print the URL or query string on the HTML page. As I don't have control over the device side (I can only configure the device to send data to an IP + port), I have no clue what form the data arrives in. Below is my access log:
[23/Jan/2016:01:50:32 +0530] "(009591810720BP05000009591810720160122A1254.6449N07738.5244E000.0202007129.7200000000L00000008)" 400 172 "-" "-" "-"
Now I have looked at this link on how to get URL parameter values, but I don't have a clue what the parameters are here.
I tried to modify my wsgi.py file as:
import requests

r = requests.get("http://localhost.com/")
# or r = requests.get("http://localhost.com/?") as I am directly routing incoming HTTP
# requests to the Python script, and the incoming request might not have any
# parameters, just data
text1 = r.status_code

def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    return ["<h1 style='color:blue'>Hello There shailendra! %s </h1>" % (text1)]
But when I restarted Nginx, I got an internal server error. Can someone help me understand what I am doing wrong here? Literally, I have no clue about the parameters of the application function. I tried to read this link, but what I got from it is that the environ argument carries many CGI environment variables.
Can someone please help me figure out what I am doing wrong, and guide me to a doc or resource.
Thanks.
Why are you using localhost ".com"?
Since you are running the webserver on the same machine, you should change the line to
r = requests.get("http://localhost/")
Also, move the lines below out of wsgi.py and put them in testServerConnection.py:
import requests

r = requests.get("http://localhost/")
# or r = requests.get("http://localhost/?") as I am directly routing incoming HTTP
# requests to the Python script, and the incoming request might not have any
# parameters, just data
text1 = r.status_code
Start NGINX
and you also might have to run (I am not sure how uWSGI is set up with Nginx):
uwsgi --socket 0.0.0.0:8080 --protocol=http -w wsgi
Run testServerConnection.py to send a test request to the localhost webserver and print the response.
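For completeness, if the device did speak plain HTTP, the query string would be available in the WSGI environ dict passed to application(); a minimal sketch:
from urllib.parse import parse_qs

def application(environ, start_response):
    # raw query string of the incoming request, e.g. "a=1&b=2"
    params = parse_qs(environ.get("QUERY_STRING", ""))
    start_response("200 OK", [("Content-Type", "text/html")])
    body = "<h1>params: %s</h1>" % (params,)
    return [body.encode("utf-8")]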
I got the answer to my question. Basically, to process a raw TCP request, you need to open a socket and accept the TCP connection on a specific port (the one you specified on the hardware):
import socketserver  # "SocketServer" in Python 2

class MyTCPHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # self.request is the TCP socket connected to the client
        self.data = self.request.recv(1024).strip()
        print("{} wrote:".format(self.client_address[0]))
        # data which was received
        print(self.data)

if __name__ == "__main__":
    # replace with your server's IP; use the port configured on the device
    HOST, PORT = "<IP of the server>", 8000
    # Create the server, binding to HOST on the given port
    server = socketserver.TCPServer((HOST, PORT), MyTCPHandler)
    # Activate the server; this will keep running until you
    # interrupt the program with Ctrl-C
    server.serve_forever()
After you get the data, you can do anything with it. As my data format was documented in the GPS datasheet, I was able to parse the string and get the latitude and longitude out of it.
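For example, a hedged sketch that extracts the latitude/longitude pair from a packet like the one in the access log above, assuming the common ddmm.mmmm / dddmm.mmmm encoding (verify against your device's datasheet):
import re

packet = "(009591810720BP05000009591810720160122A1254.6449N07738.5244E000.0202007129.7200000000L00000008)"

m = re.search(r"(\d{4}\.\d{4})([NS])(\d{5}\.\d{4})([EW])", packet)
if m:
    lat_raw, ns, lon_raw, ew = m.groups()
    # ddmm.mmmm -> decimal degrees
    lat = int(lat_raw[:2]) + float(lat_raw[2:]) / 60
    lon = int(lon_raw[:3]) + float(lon_raw[3:]) / 60
    if ns == "S":
        lat = -lat
    if ew == "W":
        lon = -lon
    print(lat, lon)  # roughly 12.9107, 77.6421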

In need of a Python script to upload a file with an IP address every few minutes

I'm using Windows Server 2008, and one of the things I need to do in order to pair with a domain name is send a file containing the computer's current IP address (it's not static) to a server via SFTP every few minutes. The problem is that I'm not sure how to do this.
I would send it via XMPP. You can set up a listener service for the server.
Send an XMPP message using a Python library.
Here are some ideas on XMPP servers to run on your IIS server (listening to receive the incoming messages from clients): http://metajack.im/2008/08/26/choosing-an-xmpp-server/
Pretzel looks nice
This Python code can be run client side to get the machine's IP addresses:
import socket

# gethostbyname_ex returns (hostname, aliaslist, ipaddrlist)
host, aliaslist, lan_ip = socket.gethostbyname_ex(socket.gethostname())
print(host)
print(aliaslist)
print(lan_ip[0])
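Note these are the addresses the machine knows about locally; if you need the address as seen from the internet (likely what pairing a domain name requires), one common approach is to ask an external service such as ipify:
import requests

# asks an external service which address your requests appear to come from
public_ip = requests.get("https://api.ipify.org", timeout=5).text
print(public_ip)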
Then you would send an XMPP message containing the IP to the server you have set up on your IIS server. Depending on what you want to do with the IP address once it gets to the server, you will handle the message server-side.
