I'm working off of the Azure Functions HTTP Python template, which has this code:
import os
import json
postreqdata = json.loads(open(os.environ['req']).read())
response = open(os.environ['res'], 'w')
response.write("hello world from "+postreqdata['name'])
response.close()
This all works great, but when I try to implement it in my Python script package, the response never gets sent back.
Here's how my code looks:
import os
import json
postreqdata = json.loads(open(os.environ['req']).read())
while True:
    mode = 0
    if mode == 0:
        response = open(os.environ['res'], 'w')
        response.write("Mode selected is 0, testing has begun.")
        response.close()
        test.testing()
As you can see, my Python script test.testing() runs in its own loop and runs successfully, but I never get the response back, even if I put the "response" code last.
I simply want to make an HTTP POST that executes the script and returns the
"Mode selected is 0, testing has begun."
message only once, then lets the test.testing() script work its magic in its loop.
I'm pretty sure I'm not getting the logic right; let me know if anyone can point me in the right direction. My current Python version is 2.7, and I don't want to upgrade to 3 for this.
Functions are supposed to be short-lived; they shouldn't run long loops, and especially not eternal loops.
The response is only sent when your function completes, which, in my understanding, never happens with the code above.
Get rid of the while loop, or add a break, and make sure the whole function execution time stays below 5 minutes (or whatever your max expected response time is).
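For illustration, a minimal sketch of the fix in the question's own Python 2 template style (test is the asker's own module, and mode is kept from the question):
import os
import json
import test  # the asker's own module

# read the POST body exactly as in the template
postreqdata = json.loads(open(os.environ['req']).read())

mode = 0
if mode == 0:
    response = open(os.environ['res'], 'w')
    response.write("Mode selected is 0, testing has begun.")
    response.close()
    # no while True around this: the response is only delivered once the
    # function returns, so test.testing() must itself finish in time
    # (or be moved elsewhere, e.g. a queue-triggered function)
    test.testing()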
As a little project for practice, I am attempting to write a Python script to essentially brute-force a coupon-code input on the website of a random restaurant. The codes take the form of two capital letters followed by 6 digits, for example XZ957483. As a starting point, I've written a script which generates an endless stream of combinations in the correct form. That looks like this:
import random
import string
i = 1
while i >= 1:
    number = random.randint(100000, 999999)
    print(random.choice(string.ascii_uppercase) + random.choice(string.ascii_uppercase) + str(number))
    i = i + 1
How do I actually make the connection to run this code against the webpage, and make it continue until it finds the correct combination? Also, is it better to run the script and generate the combinations against the webpage input directly, or should I write the combinations to a text file and use that as a dictionary to run against the webpage?
Any help is much appreciated!
A real example would be hard (and unethical) to do, as websites will block requests after too many attempts, use captchas to prevent bots, and so on.
But we can write a simple web server to implement a simple scenario, where a request to the /cupons/{cupon_id} endpoint responds with 200 OK if the cupon is valid, or 404 NOT FOUND otherwise.
To do this, we'll need some extra libraries which you can install with pip:
pip install fastapi requests uvicorn
from fastapi import FastAPI, HTTPException, status

VALID_CUPON = "XZ123456"

app = FastAPI()

@app.get("/cupons/{cupon_id}")
def get(cupon_id: str):
    if cupon_id == VALID_CUPON:
        return {"msg": f"{cupon_id} is a valid cupon!"}
    else:
        raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"{cupon_id} is NOT a valid cupon!")
Now we can run this FastAPI app with the uvicorn server. I called the file webserver.py, and the app is stored in the variable app, so the argument for uvicorn is webserver:app:
python3 -m uvicorn webserver:app --reload
The server will keep running in the console and we can send requests to it. For example, you can try it in your browser, at http://localhost:8000/cupons/XZ123456.
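You can also test it from another console, assuming you have curl installed (-i prints the status line so you can see the 200 or 404):
curl -i http://localhost:8000/cupons/XZ123456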
Now we can write a script to send requests to it and run it in a different console:
import random
import string
import requests

i = 1
while i >= 1:
    number = random.randint(100000, 999999)
    random_cupon = random.choice(string.ascii_uppercase) + random.choice(string.ascii_uppercase) + str(number)
    response = requests.get(f"http://127.0.0.1:8000/cupons/{random_cupon}")
    if response.status_code == 404:
        print(f"Failed attempt with {random_cupon}")
    if response.status_code == 200:
        print(f"Successful attempt with cupon {random_cupon}!!!")
        break
    i = i + 1
This script will send many requests to your local server and print a message announcing failure or success. On success, it also breaks from the loop and stops.
We can make some improvements:
i is not really needed, as we want the loop to run forever: we can just use while True
In your code, the number starts at 100000, so a cupon like "AX000001" would never be found. We can include all possible numbers and then use a string-formatting trick to add leading 0s.
import random
import string
import requests

while True:
    letter1 = random.choice(string.ascii_uppercase)
    letter2 = random.choice(string.ascii_uppercase)
    number = random.randint(0, 999999)
    random_cupon = f"{letter1}{letter2}{number:06}"
    response = requests.get(f"http://127.0.0.1:8000/cupons/{random_cupon}")
    if response.status_code == 404:
        print(f"Failed attempt with {random_cupon}")
    if response.status_code == 200:
        print(f"Successful attempt with cupon {random_cupon}!!!")
        break
Keep in mind this will take a long, long time to finish. With 2 letters and 6 digits we get 26*26*1,000,000 combinations, i.e. 676 million - if each request to your local server takes 1 millisecond, trying them all takes 676,000 seconds, and on average you'd need about half of that! A request to a remote webserver is likely to take much longer, e.g. at least 50 ms, likely more.
So it may be better to start with smaller values, e.g. 2 letters and 1 digit. To do this, you'll need to change the following (see the sketch after this list):
the valid cupon in the server
the random generator
the string formatting which adds leading zeroes
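For example, a minimal client-side sketch of the smaller search space (the server's VALID_CUPON would have to match this shorter format, e.g. "XZ5"):
import random
import string

letter1 = random.choice(string.ascii_uppercase)
letter2 = random.choice(string.ascii_uppercase)
number = random.randint(0, 9)                    # 1 digit instead of 6
random_cupon = f"{letter1}{letter2}{number:01}"  # width 1, so no real padding is needed
print(random_cupon)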
You could try to check if there is an API you can make POST calls to, with a different coupon code each time. However, I'm not sure this is ethical, and I doubt it will work.
This is because you're trying to guess a coupon code with roughly 676 million possibilities. No server will allow you to brute-force this, and if one did, it would most likely crash before a result is achieved.
I have a C executable that I want to exploit.
The output of that file looks like this:
$ ./vuln_nostack
Enter some text:
enteringTEXT
You entered: enteringTEXT
You enter some text, and the program spits it back.
I want to run this program (and later exploit it) with Python and pwntools.
So far, the functioning part of my pwntools program looks like this:
from pwn import *

# process and process.PTY both come from pwntools' star import
pty = process.PTY
p = process("./vuln_nostack", stdin=pty, stdout=pty)
ss = p.recv()
p.clean()
asstring = ss.decode("utf-8")
print(asstring)
This works fine: it gets the first line and then prints it.
What I want to do now is to send a message to the program and then get the final line.
I have tried something along these lines:
p.send(b"dong")
p.clean()
print(p.recv())
I'm not sure whether the send ever actually sends anything, but as soon as I add the recv call, the program just hangs and never finishes.
My guess is that the input to the executable is never delivered properly, and therefore it's still just waiting.
How do I actually get a message delivered to the executable so that it can move on and serve me the last line?
You can also use p.sendline():
p.sendline(b"payload")
This automatically appends a newline to your bytes.
Moreover, to know whether your exploit is sending/receiving messages to/from the program, you can enable the debug context by adding this assignment:
context.log_level = 'debug'
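Putting both together, a minimal sketch against the binary from the question (the prompt texts are taken from the transcript above):
from pwn import *

context.log_level = 'debug'  # hexdump every byte sent and received

p = process("./vuln_nostack")
print(p.recv().decode())     # "Enter some text:"
p.sendline(b"dong")          # appends b"\n" so the program's input call returns
print(p.recv().decode())     # "You entered: dong"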
The answer was a lot simpler than I first presumed.
I just needed a newline in the send:
p.send(b"payload\n")
I am writing a Python script to manage my account on a webpage automatically.
Code Description:
The script has a while loop, and at the end of each iteration it waits 12 hours before starting again.
Before the while loop starts, the script logs in to my account; when entering the while loop, it checks if I'm still logged in, and if not, it logs in to my account again.
Problem:
After re-entering the while loop (the first time through, everything goes fine), the script only works when print("Name is:") and print(name) are at the very beginning. I tested it several times; maybe it is just a bug/glitch and I was simply unlucky that it only failed when the print statements weren't there, but right now I'm very confused about how those print statements fixed my issue. I would like to know what is (or could be) causing the issue, and how to solve it properly.
Some side info:
The webpage keeps me logged in through session cookies with a lifetime of ~6 hours, so after re-entering the script loop I'm definitely no longer logged in. If I reduce the wait time to 30 minutes instead of 12 hours, the script also works without the print statements.
General notes:
The script is running through nohup on my Raspberry Pi 3
Python version is 3.7.3
Code related notes:
I'm using the post method from requests to log in to my account
For checking if I'm still logged in, I'm using BeautifulSoup4
The following code is abbreviated and kept in a very basic shape.
"account" is an instance of a self-made class. When instantiating, it is log in itself with arguments, if given
This is the core code:
import time
import requests
from account import Account  # custom-made class
from bs4 import BeautifulSoup

# login credentials
name = "lol"  # I replaced them with placeholders
pw = "lol"

account = Account(name, pw)  # instantiate an account; it logs itself in with the given arguments

while True:  # script loop
    print("name is:")  # Without both of these print statements,
    print(name)        # the code won't work
    if not account.stillAlive():  # if not signed in anymore ...
        account.login(name, pw)   # ... sign in again
    account.doStuff()             # do the automation stuff
    time.sleep(43200)             # wait 12 hours before re-entering the loop
This is the doStuff() method from the Account class:
def doStuff(self):
    html = requests.get("https://example.com").text  # Note: example.com is for demonstration purposes only
    crawler = BeautifulSoup(html, "lxml")
    lol = crawler.find("input", attrs={"name": "Submit2"}).get("value")
    # ...
Error message:
So, if I execute the program without the print statements, I get this error:
lol = crawler.find("input", attrs={"name": "Submit2"}).get("value")
AttributeError: 'NoneType' object has no attribute 'get'
This does not occur when executing with the print statements; with them, the code runs fine.
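(For reference, that AttributeError means crawler.find(...) returned None, i.e. the expected input element was not in the fetched page. A defensive check, sketched against the doStuff() method above, would at least make the failure explicit:)
submit = crawler.find("input", attrs={"name": "Submit2"})
if submit is None:
    # element missing: e.g. the server may have returned a login page instead
    raise RuntimeError("unexpected page received - not logged in?")
lol = submit.get("value")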
My guess
My guess is that Python's memory management is deleting the name variable. When entering the script loop the first time, I'm already logged in, so the account.login(name, pw) part is skipped. Since this is the only place where name is still used, maybe Python interprets it as dead code after too much time has passed without that line being executed, sees no reason to keep the name/pw variables, and deletes them. Still, I'm just an amateur and I don't have any expertise in this area.
Side notes:
This is the first question I'm submitting; if I forgot or did something wrong, please tell me.
I already searched for this problem but didn't find anything similar. Maybe I just searched badly, but I've been searching for a few hours now; if so, I apologize. (I had to wait 12 hours for every test, and since I tried it several times, you can tell I had some time available to search.)
I'm trying to develop a very simple proof-of-concept to retrieve and process data in a streaming manner. The server I'm requesting from will send data in chunks, which is good, but I'm having issues using httplib to iterate through the chunks.
Here's what I'm trying:
import httplib

def getData(src):
    d = src.read(1024)
    while d and len(d) > 0:
        yield d
        d = src.read(1024)

if __name__ == "__main__":
    con = httplib.HTTPSConnection('example.com', port='8443', cert_file='...', key_file='...')
    con.putrequest('GET', '/path/to/resource')
    response = con.getresponse()
    for s in getData(response):
        print s
        raw_input()  # Just to give me a moment to examine each packet
Pretty simple: just open an HTTPS connection to the server, request a resource, and grab the result, 1024 bytes at a time. I'm definitely making the HTTPS connection successfully, so that's not a problem at all.
However, what I'm finding is that the call to src.read(1024) returns the same thing every time: only ever the first 1024 bytes of the response, apparently never advancing a cursor within the stream.
So how am I supposed to receive 1024 bytes at a time? The documentation on read() is pretty sparse. I've thought about using urllib or urllib2, but neither seems to be able to make an HTTPS connection.
HTTPS is required, and I am working in a rather restricted corporate environment where packages like Requests are a bit tough to get my hands on. If possible, I'd like to find a solution within Python's standard lib.
// Big Old Fat Edit
It turns out that in my original code I had simply forgotten to update the d variable: I initialized it with a read outside the yield loop and never changed it inside the loop. Once I added that line back in, it worked perfectly.
So, in short, I'm just a big idiot.
Is your con.putrequest() actually working? Doing a request with that method requires you to also call a bunch of other methods, as you can see in the official httplib documentation:
http://docs.python.org/2/library/httplib.html
As an alternative to using the request() method described above, you can also send your request step by step, by using the four functions below.
putrequest()
putheader()
endheaders()
send()
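In other words, if you stick with the step-by-step API, the sequence would look something like this (a sketch only, reusing the placeholders from the question; the Accept header is just an example):
con = httplib.HTTPSConnection('example.com', port='8443', cert_file='...', key_file='...')
con.putrequest('GET', '/path/to/resource')
con.putheader('Accept', '*/*')  # any additional headers go here
con.endheaders()                # terminates the header block; send() is only needed for a request body
response = con.getresponse()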
Is there any reason why you're not using the default HTTPConnection.request() function?
Here's a working version for me, using request() instead:
import httplib

def getData(src, chunk_size=1024):
    d = src.read(chunk_size)
    while d:
        yield d
        d = src.read(chunk_size)

if __name__ == "__main__":
    con = httplib.HTTPSConnection('google.com')
    con.request('GET', '/')
    response = con.getresponse()
    for s in getData(response, 8):
        print s
        raw_input()  # Just to give me a moment to examine each packet
You can use the seek command to move the cursor along with your read.
This is my attempt at the problem. I apologize if I made it less Pythonic in the process.
if __name__ == "__main__":
    con = httplib.HTTPSConnection('example.com', port='8443', cert_file='...', key_file='...')
    con.putrequest('GET', '/path/to/resource')
    response = con.getresponse()
    c = 0
    while True:
        response.seek(c * 1024, 0)
        data = response.read(1024)
        c += 1
        if len(data) == 0:
            break
        print data
        raw_input()
I hope it is at least helpful.
I am looking for a Python snippet to read an internet radio stream (.asx, .pls, etc.) and save it to a file.
The final project is a cron'ed script that will record an hour or two of internet radio and then transfer it to my phone for playback during my commute. (3G is kind of spotty along my commute.)
Any snippets or pointers are welcome.
The following has worked for me, using the requests library to handle the HTTP request.
import requests

stream_url = 'http://your-stream-source.com/stream'

r = requests.get(stream_url, stream=True)
with open('stream.mp3', 'wb') as f:
    try:
        for block in r.iter_content(1024):
            f.write(block)
    except KeyboardInterrupt:
        pass
That will save the stream to the stream.mp3 file until you interrupt it with Ctrl+C.
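Since the goal in the question is a cron'ed recording of an hour or two, here is a variation of the same snippet that stops after a fixed duration instead of waiting for Ctrl+C (the stream URL is still a placeholder):
import time
import requests

stream_url = 'http://your-stream-source.com/stream'  # placeholder
deadline = time.time() + 2 * 60 * 60  # record for two hours

r = requests.get(stream_url, stream=True)
with open('stream.mp3', 'wb') as f:
    for block in r.iter_content(1024):
        f.write(block)
        if time.time() >= deadline:
            break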
So after tinkering and playing with it, I've found Streamripper to work best. This is the command I use:
streamripper http://yp.shoutcast.com/sbin/tunein-station.pls?id=1377200 -d ./streams -l 10800 -a tb$FNAME
If you find that your requests or urllib.request call in Python 3 fails to save a stream because you receive "ICY 200 OK" in return instead of an "HTTP/1.0 200 OK" header, you need to tell the underlying functions ICY 200 OK is OK!
What you can effectively do is intercept the routine that handles reading the status after opening the stream, just before processing the headers.
Simply put a routine like this above your stream opening code.
import io

def NiceToICY(self):
    class InterceptedHTTPResponse():
        pass

    line = self.fp.readline().replace(b"ICY 200 OK\r\n", b"HTTP/1.0 200 OK\r\n")
    InterceptedSelf = InterceptedHTTPResponse()
    InterceptedSelf.fp = io.BufferedReader(io.BytesIO(line))
    InterceptedSelf.debuglevel = self.debuglevel
    InterceptedSelf._close_conn = self._close_conn
    return ORIGINAL_HTTP_CLIENT_READ_STATUS(InterceptedSelf)
Then put these lines at the start of your main routine, before you open the URL.
ORIGINAL_HTTP_CLIENT_READ_STATUS = urllib.request.http.client.HTTPResponse._read_status
urllib.request.http.client.HTTPResponse._read_status = NiceToICY
They override the standard routine and run the NiceToICY function in place of the normal status check when the stream is opened. NiceToICY replaces the unrecognised status response, then copies across the relevant bits of the original response object that the 'real' _read_status function needs. Finally, the original is called, its values are passed back to the caller, and everything else continues as normal.
I have found this to be the simplest way to get round the problem of the status message causing an error. Hope it's useful for you, too.
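For completeness, here is a sketch of how the pieces could fit together, assuming NiceToICY is defined as above (the stream URL is a placeholder, and the loop runs until the stream ends or you interrupt it):
import urllib.request

ORIGINAL_HTTP_CLIENT_READ_STATUS = urllib.request.http.client.HTTPResponse._read_status
urllib.request.http.client.HTTPResponse._read_status = NiceToICY

# placeholder URL for an ICY/SHOUTcast stream endpoint
with urllib.request.urlopen('http://example.com:8000/stream') as stream:
    with open('stream.mp3', 'wb') as f:
        while True:
            chunk = stream.read(1024)
            if not chunk:
                break
            f.write(chunk)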
I am aware this is a year old, but it is still a viable question, which I have recently been fiddling with.
Most internet radio stations will give you a choice of download type; I choose the MP3 version, then read the data from a raw socket and write it to a file. The trick is figuring out how fast your download is compared to the playback rate of the song, so you can strike a balance in your read/write sizes. This would be in your buffer def.
Now that you have the file, it is fine to simply leave it on your drive (record), but most players will delete the already-played chunk from the file and clear the file off the drive and out of RAM when streaming is stopped.
I have used some code snippets from an uncompressed file-archive app to handle a lot of the file handling, playing, and buffering magic. The process flow is very similar. If you write up some pseudo-code (which I highly recommend), you can see the similarities.
I'm only familiar with how SHOUTcast streaming works (which would be the .pls file you mention):
You download the .pls file, which is just a playlist. Its format is fairly simple: it's just a text file that points to where the real stream is.
You can connect to that stream, as it's just HTTP, and it streams either MP3 or AAC. For your use case, just save every byte you get to a file and you'll get an MP3 or AAC file you can transfer to your MP3 player.
SHOUTcast has one addition that is optional: metadata. You can find out how that works here, but it is not really needed.
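For the first step, a minimal sketch of pulling the stream URL out of a .pls playlist; a .pls file is INI-style text with a [playlist] section and File1=, File2=, ... entries (the filename here is a placeholder):
import configparser

pls = configparser.ConfigParser()
pls.read('station.pls')                # placeholder filename
stream_url = pls['playlist']['File1']  # option-name lookup is case-insensitive
print(stream_url)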
If you want a sample application that does this, let me know and I'll make up something later.
In line with the answer from https://stackoverflow.com/users/1543257/dingles (https://stackoverflow.com/a/41338150), here's how you can achieve the same result with the asynchronous HTTP client library - aiohttp:
import functools
import aiohttp
from aiohttp.client_proto import ResponseHandler
from aiohttp.http_parser import HttpResponseParserPy
class ICYHttpResponseParser(HttpResponseParserPy):
    def parse_message(self, lines):
        if lines[0].startswith(b"ICY "):
            lines[0] = b"HTTP/1.0 " + lines[0][4:]
        return super().parse_message(lines)

class ICYResponseHandler(ResponseHandler):
    def set_response_params(
        self,
        *,
        timer=None,
        skip_payload=False,
        read_until_eof=False,
        auto_decompress=True,
        read_timeout=None,
        read_bufsize=2 ** 16,
        timeout_ceil_threshold=5,
    ) -> None:
        # this is a copy of the implementation from here:
        # https://github.com/aio-libs/aiohttp/blob/v3.8.1/aiohttp/client_proto.py#L137-L165
        self._skip_payload = skip_payload
        self._read_timeout = read_timeout
        self._reschedule_timeout()
        self._timeout_ceil_threshold = timeout_ceil_threshold
        self._parser = ICYHttpResponseParser(
            self,
            self._loop,
            read_bufsize,
            timer=timer,
            payload_exception=aiohttp.ClientPayloadError,
            response_with_body=not skip_payload,
            read_until_eof=read_until_eof,
            auto_decompress=auto_decompress,
        )
        if self._tail:
            data, self._tail = self._tail, b""
            self.data_received(data)

class ICYConnector(aiohttp.TCPConnector):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._factory = functools.partial(ICYResponseHandler, loop=self._loop)
This can then be used as follows:
session = aiohttp.ClientSession(connector=ICYConnector())
async with session.get("url") as resp:
    print(resp.status)
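Note that the async with block has to run inside a coroutine; a complete minimal usage sketch (the URL is a placeholder):
import asyncio

async def main():
    # ICYConnector as defined above; placeholder stream URL
    async with aiohttp.ClientSession(connector=ICYConnector()) as session:
        async with session.get("http://example.com:8000/stream") as resp:
            print(resp.status)

asyncio.run(main())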
Yes, this uses a few private classes and attributes, but it is the only solution for changing the handling of something that's part of the HTTP spec and (theoretically) should never need to be changed by the library's user...
All things considered, I would say this is still rather clean compared to monkey-patching, which would change the behavior for all requests (especially problematic with asyncio, where setting the patch before a request and resetting it afterwards does not guarantee that something else won't make a request while the request to the ICY server is in flight). This way, you can dedicate a ClientSession object specifically to requests to servers that respond with the ICY status line.
Note that this comes with a performance penalty for requests made with ICYConnector: to make this work, I use the pure-Python implementation of HttpResponseParser, which is slower than the C implementation aiohttp uses by default. This cannot really be done differently without vendoring the whole library, as the status-line parsing behavior is deeply hidden in the C code.