Python - HTTP post from stdin [closed] - python

Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
This question was caused by a typo or a problem that can no longer be reproduced. While similar questions may be on-topic here, this one was resolved in a way less likely to help future readers.
Closed 8 years ago.
I am getting data in this format every second or so from the bash command 'ibeacon_scan':
ibeacon scan -b | stdin.py
Output:
ibeacon scan...
3F234454-CFD-4A0FF-ADF2-F4911BA9FFA6 1 4 -71 -69
3F234454-CFD-4A0FF-ADF2-F4911BA9FFA6 6 2 -71 -63
3F234454-CFD-4A0FF-ADF2-F4911BA9FFA6 1 4 -71 -69
3F234454-CFD-4A0FF-ADF2-F4911BA9FFA6 5 7 -71 -64
I need to send that information by query string. Here is my code.
#!/usr/bin/python
import fileinput
import httplib
import urllib

for line in fileinput.input():
    string = line
    string2 = string.split(" ")
    parmas = string2
    parmas = urllib.urlencode({"UUID": "Major","Minor":"RSSI"})
    headers = {"Content-type": "application/x-www-form-urlencoded","Accept": "text/plain"}
    conn = httplib.HTTPConnection("67.205.14.22")
    conn.request("POST", "post.php", params, headers)
    response = conn.getresponse()
    print response.status, response.reason
    data = response.read()
    print data
    conn.close()
I'm getting this error:
Traceback (most recent call last):
File "./stdin.py", line 14, in <module>
conn.request("POST", "post.php", params, headers)
NameError: name 'params' is not defined
Something is wrong with params? How do I format this to correctly accept the 'ibeacon scan' command and send it by HTTP post?

for line in fileinput.input():
    string = line
    string2 = string.split(" ")
    parmas = string2  # you call the variable parmas here
    parmas = urllib.urlencode({"UUID": "Major","Minor":"RSSI"})
    headers = {"Content-type": "application/x-www-form-urlencoded","Accept": "text/plain"}
    conn = httplib.HTTPConnection("67.205.14.22")
    conn.request("POST", "post.php", params, headers)  # and params here
You have made what looks to be a typo.
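With the typo fixed, a corrected sketch in Python 3 (where httplib/urllib became http.client/urllib.parse) could look like the following. The host 67.205.14.22 and path post.php are the placeholders from the question, and the field names are an assumed mapping of the five scan columns onto the query string:

```python
import sys
import http.client
import urllib.parse

def line_to_params(line):
    """URL-encode one scan line; columns assumed to be UUID, major, minor, power, RSSI."""
    uuid, major, minor, power, rssi = line.split()
    return urllib.parse.urlencode(
        {"UUID": uuid, "Major": major, "Minor": minor, "RSSI": rssi}
    )

def post_line(line, host="67.205.14.22", path="/post.php"):
    """POST one encoded line and return the response status and body."""
    conn = http.client.HTTPConnection(host)
    conn.request("POST", path, line_to_params(line),
                 {"Content-type": "application/x-www-form-urlencoded",
                  "Accept": "text/plain"})
    response = conn.getresponse()
    body = response.read()
    conn.close()
    return response.status, body

# Usage: ibeacon_scan -b | python stdin.py, with a loop such as:
# for line in sys.stdin:
#     line = line.strip()
#     if line and not line.startswith("ibeacon"):
#         print(post_line(line))
```

The key point is that one variable name (here `params` inside `line_to_params`'s result) is used consistently between assignment and use.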

Related

json.decoder.JSONDecodeError Received in Script But Not in Console

I have the following in a script:
import requests
from json import loads
s = requests.Session()
r = s.get("https://iforgot.apple.com/password/verify/appleid", headers=headers)
headers['Sstt'] = loads([line for line in r.text.split("\n") if "sstt" in line][0])["sstt"]
headers['Content-Type'] = "application/json"
data = f'{{"id":"{email}"}}'
r = s.post("https://iforgot.apple.com/password/verify/appleid", data=data, headers=headers, allow_redirects=False).headers['Location']
headers['Accept'] = 'application/json, text/javascript, */*; q=0.01'
r = s.get(f"https://iforgot.apple.com{r}", headers=headers, allow_redirects=False).json()['trustedPhones'][0]
c = r['countryCode']
n = r['numberWithDialCode']
Whenever I run this, I receive this error:
File "/home/user/xsint/modules/apple.py", line 10, in apple
headers['Sstt'] = loads([line for line in r.text.split("\n") if "sstt" in line][0])["sstt"]
File "/usr/lib/python3.7/json/__init__.py", line 348, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.7/json/decoder.py", line 340, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 1 column 10 (char 9)
What I really can't figure out is that if I run each of these commands in a Python 3 console, they work. Does anyone see what the problem is?
[EDITED]
You have more than one JSON value in your data, and json.loads() is not able to decode more than one.
Check the line below:
headers['Sstt'] = loads([line for line in r.text.split("\n") if "sstt" in line][0])["sstt"]
The extracted line must not be valid JSON; change it to something like
{
"key" :[value, value]
}
and it should work.
Python json.loads shows ValueError: Extra data
First, I think lines 10 and 12 are incomplete:
LINE 10 : data, headers=headers, all$
LINE 12 : json()['trus$
Second, it would be more helpful to include the full error message.
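"Extra data" means the string contained one complete JSON value followed by more text; json.loads() insists on consuming the whole string. A minimal reproduction, plus json.JSONDecoder.raw_decode() as one way to take just the first value (the sstt key mirrors the question; the sample string is made up):

```python
import json

s = '{"sstt": "token123"} trailing garbage'

# json.loads refuses anything after the first JSON value:
try:
    json.loads(s)
except json.JSONDecodeError as err:
    print(err)  # Extra data: line 1 column 21 (char 20)

# raw_decode parses the first value and reports where it stopped:
obj, end = json.JSONDecoder().raw_decode(s)
print(obj["sstt"])    # token123
print(repr(s[end:]))  # ' trailing garbage'
```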

Weird error with pool module and beautiful soup: Invalid URL 'h'

I am scraping a very large website with Beautiful Soup for a project and want to use the Pool module to speed it up. I am getting a weird error where it is not correctly reading the list of URLs; as far as I can tell, it is just grabbing the first 'h'.
The entire code works perfectly if I do not use Pool: the list of URLs is read properly. I am not sure if there is something weird about how you have to prepare the URLs when calling p.map(scrapeClauses, links), because if I simply call scrapeClauses(links) everything works.
Here is my main function:
if __name__ == '__main__':
    links = list()
    og = 'https://www.lawinsider.com'
    halflink = '/clause/limitation-of-liability'
    link = og + halflink
    links.append(link)
    i = 0
    while i < 50:
        try:
            nextLink = generateNextLink(link)
            links.append(nextLink)
            link = nextLink
            i += 1
        except:
            print('Only ', i, 'links found')
            i = 50
    start_time = time.time()
    print(links[0])
    p = Pool(5)
    p.map(scrapeClauses, links)
    p.terminate()
    p.join()
    #scrapeClauses(links)
and here is scrapeClauses():
def scrapeClauses(links):
    # header to avoid site detecting scraper
    headers = requests.utils.default_headers()
    headers.update({
        'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0',
    })
    # list of clauses
    allText = []
    number = 0
    for line in links:
        page_link = line
        print(page_link)
        page_response = requests.get(page_link, headers=headers)
        html_soup = BeautifulSoup(page_response.content, "html.parser")
        assignments = html_soup.find_all('div', class_='snippet-content')
        for i in range(len(assignments)):
            assignments[i] = assignments[i].get_text()
            # option to remove the assignment that precedes each clause
            # assignments[i] = assignments[i].replace('Assignment.','',1)
            allText.append(assignments[i])
            # change the index of the name of the word doc
            name = 'limitationOfLiability' + str(number) + '.docx'
            # some clauses have special characters that produce an error
            try:
                document = Document()
                stuff = assignments[i]
                document.add_paragraph(stuff)
                document.save(name)
                number += 1
            except:
                continue
I did not include generateNextLink() to save space, and because I am pretty sure the error is not in there; if someone thinks it is, I will provide it.
As you can see, I print(page_link) in scrapeClauses. If I am not using Pool, it prints all the normal links, but if I use Pool, a bunch of h's print out line after line. I then get an error that 'h' is not a valid URL. The output and error are shown below.
https://www.lawinsider.com/clause/limitation-of-liability
h
h
h
h
h
h
h
h
h
h
h
h
h
h
h
h
h
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "C:\Users\wquinn\AppData\Local\Programs\Python\Python37-32\lib\multiprocessing\pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "C:\Users\wquinn\AppData\Local\Programs\Python\Python37-32\lib\multiprocessing\pool.py", line 44, in mapstar
return list(map(*args))
File "C:\Users\wquinn\Web Scraping\assignmentBSScraper.py", line 20, in scrapeClauses
page_response = requests.get(page_link, headers=headers)
File "C:\Users\wquinn\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\api.py", line 75, in get
return request('get', url, params=params, **kwargs)
File "C:\Users\wquinn\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\api.py", line 60, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Users\wquinn\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\sessions.py", line 519, in request
prep = self.prepare_request(req)
File "C:\Users\wquinn\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\sessions.py", line 462, in prepare_request
hooks=merge_hooks(request.hooks, self.hooks),
File "C:\Users\wquinn\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\models.py", line 313, in prepare
self.prepare_url(url, params)
File "C:\Users\wquinn\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\models.py", line 387, in prepare_url
raise MissingSchema(error)
requests.exceptions.MissingSchema: Invalid URL 'h': No schema supplied. Perhaps you meant http://h?
The second argument of p.map takes a list, and each element of that list is sent to the function as a separate task. So your function receives a single string, not the list of strings you expect.
The minimal example is:
from multiprocessing import Pool

def f(str_list):
    for x in str_list:
        print ('hello {}'.format(x))

if __name__ == '__main__':
    str_list = ['111', '2', '33']
    p = Pool(5)
    p.map(f, str_list)
    p.terminate()
    p.join()
Output is:
hello 1
hello 1
hello 1
hello 2
hello 3
hello 3
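So the fix is either to rewrite scrapeClauses to handle a single URL per call, or to keep its list-taking signature and wrap the list before handing it to map. A sketch of both options, with a stub in place of the actual scraping:

```python
from multiprocessing import Pool

def scrape_one(link):
    # Worker that handles exactly one URL, as Pool.map expects.
    return 'scraped ' + link

def scrape_many(link_list):
    # Worker that keeps the original list-taking signature.
    return ['scraped ' + link for link in link_list]

if __name__ == '__main__':
    links = ['https://www.lawinsider.com/a', 'https://www.lawinsider.com/b']

    with Pool(2) as p:
        # Option 1: one URL per task, parallel across links.
        print(p.map(scrape_one, links))

        # Option 2: wrap the list so a single task receives the whole list.
        print(p.map(scrape_many, [links]))
```

Option 1 is usually what you want here, since it lets the pool actually parallelize across URLs.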

get() takes exactly 1 argument (3 given) [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Edit the question to include desired behavior, a specific problem or error, and the shortest code necessary to reproduce the problem. This will help others answer the question.
Closed 6 years ago.
#!/usr/bin/env python
import requests, json

userinput = raw_input('Enter a keyword: ')
getparams = {'order':'desc', 'sort':'votes', 'intitle':userinput, 'site':'stackoverflow', 'filter': '!5-HwXhXgkSnzI0yfp0WqsC_-6BehEi(fRTZ7eg'}
r = requests.get('https://api.stackexchange.com/2.2/search', params=getparams)
result = json.loads(r.text)
if result['has_more'] == False:
    print 'Error given.'
else:
    for looping in result['items']:
        print ''
        print ''
        print 'Title:', looping['title']
        #print 'Question:', looping['body']
        print 'Link:', looping['link']
        if looping['is_answered'] == True:
            try:
                print 'Answer ID#:', looping['accepted_answer_id']
                newparams = {'order':'desc', 'sort':'votes', 'site':'stackoverflow', 'filter': '!4(Yrwr)RRK6oy2JSD'}
                newr = requests.get('https://api.stackexchange.com/2.2/answers/', looping['accepted_answer_id'], params=newparams)
                newresult = json.loads(newr.text)
                print newresult['items'][0]['body']
            except KeyError: print 'No answer ID found.'
        print ''
        print ''
I am trying to make a request such as https://api.stackexchange.com/2.2/answers/12345 (where the user inputs 12345), but I don't know how to do that, and if I include a string it returns an error. Help, please?
I am getting this error:
Enter a keyword: php
Title: How can I prevent SQL-injection in PHP?
Link: http://stackoverflow.com/questions/60174/how-can-i-prevent-sql-injection-in-php
Answer ID#: 60496
Traceback (most recent call last):
File "./warrior.py", line 25, in <module>
newr = requests.get('https://api.stackexchange.com/2.2/answers/', looping['accepted_answer_id'], params=newparams)
TypeError: get() takes exactly 1 argument (3 given)
Update the request line to:
newr = requests.get('https://api.stackexchange.com/2.2/answers/' + str(looping['accepted_answer_id']), params=newparams)
You need to concatenate looping['accepted_answer_id'] into the URL instead of passing it as a separate positional argument.
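requests.get() takes the URL as its only positional argument, and the extra positional here is what triggers the TypeError. The ID has to be folded into the URL string itself; a small sketch of the construction (URL building only, no request sent):

```python
def build_answer_url(answer_id, base='https://api.stackexchange.com/2.2/answers/'):
    # str() is needed because the API returns the ID as an int,
    # and Python will not concatenate str and int.
    return base + str(answer_id)

url = build_answer_url(60496)
print(url)  # https://api.stackexchange.com/2.2/answers/60496
# then: newr = requests.get(url, params=newparams)
```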

Loading a JSON in python [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Edit the question to include desired behavior, a specific problem or error, and the shortest code necessary to reproduce the problem. This will help others answer the question.
Closed 6 years ago.
I've got a problem loading a JSON in python. I'm working with python 2.7 and I've got a JSON file that I would like to load. I did:
movies = json.load(open(FBO_REF_FILE, 'r'))
But when I display it I got a dict full of:
{u'id_yeyecine': 42753, u'budget_dollars': u'85', u'classification': u'Tous publics', u'pays': u'US', u'budget_euros': u'0', u'dpw_entrees_fr': 132326, u'realisateurs': u'Brad Peyton, Kevin Lima', u'is_art_et_essai': u'NON', u'distributeur_video': u'Warner hv', u'genre_gfk_1': u'ENFANT', u'genre_gfk_2': u'FILM FAMILLE', u'genre_gfk_3': u'FILM FAMILLE', u'is_3D': u'OUI', u'fid': 16429, u'cum_entrees_pp': 58076, u'titre': u'COMME CHIENS ET CHATS LA REVANCHE DE KITTY GALORE', u'psp_entrees': 963, u'cum_entrees_fr': 348225, u'dps_copies_fr': 453, u'dpj_entrees_pp': 7436, u'visa': 127021, u'dps_entrees_fr': 178908, u'genre': u'Com\xe9die', u'distributeur': u'WARNER BROS.', u'editeur_video': u'Warner bros', u'psp_copies': 15, u'dpw_entrees_pp': 26195, u'id_imdb': None, u'date_sortie_video': u'2010-12-06', u'dps_copies_pp': 39, u'date_sortie': u'2010-08-04', u'dps_entrees_pp': 32913, u'dpj_entrees_fr': 40369, u'ecrivains': u'', u'acteurs': u"Chris O'donnell, Jack McBrayer", u'is_premier_film': u'NON'}
I tried using ast, but I got the following error: malformed string. The traceback when using ast is:
153 if cursor is None:
154 movies = json.load(open(FBO_REF_FILE, 'r'))
--> 155 movies = ast.literal_eval(movies)
156 for movie in movies:
157 if movies[movie]['id_allocine'] == allocine_id:
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ast.pyc in literal_eval(node_or_string)
78 return left - right
79 raise ValueError('malformed string')
---> 80 return _convert(node_or_string)
81
82
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ast.pyc in _convert(node)
77 else:
78 return left - right
---> 79 raise ValueError('malformed string')
80 return _convert(node_or_string)
81
ValueError: malformed string
With json.load you parse a JSON file into Python's datatypes; in your case this is a dict.
With open you load a file.
If you don't want to parse the JSON file, just do the following:
content = None
with open(FBO_REF_FILE, 'r') as f:
    content = f.read()
print content  # content is a string containing the content of the file
If you want to parse the JSON file into Python's datatypes, do the following:
content = None
with open(FBO_REF_FILE, 'r') as f:
    content = json.loads(f.read())
print content  # content is a dict containing the parsed json data
print content['id_yeyecine']
print content['budget_dollars']
If you want to pretty print your dictionary:
json.dumps(movies, sort_keys=True, indent=4)
Or use pprint: https://docs.python.org/2/library/pprint.html
To read from movies, use regular dict methods:
id_yeyecine = movies["id_yeyecine"]
Now id_yeyecine is 42753.
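For clarity: the u'...' prefixes are just Python 2's repr of unicode strings, so the loaded object really is an ordinary dict and needs no second parsing pass with ast. A small Python 3 sketch, using a made-up fragment of the movie data:

```python
import json

raw = '{"id_yeyecine": 42753, "budget_dollars": "85", "pays": "US"}'

movies = json.loads(raw)      # already a dict; no ast.literal_eval needed
print(type(movies).__name__)  # dict
print(movies["id_yeyecine"])  # 42753

# Pretty-print for inspection:
print(json.dumps(movies, sort_keys=True, indent=4))
```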

Python - Accept input from bash and send by query string

I am new to Python and am trying to get my script to send the output of the command 'ibeacon_scan -b' to a web server by query string, or any other efficient way to send data continuously. Here is what the output looks like for 'ibeacon_scan -b':
iBeacon Scan ...
3F234454-CFD-4A0FF-ADF2-F4911BA9FFA6 1 4 -71 -69
3F234454-CFD-4A0FF-ADF2-F4911BA9FFA6 6 2 -71 -63
3F234454-CFD-4A0FF-ADF2-F4911BA9FFA6 1 4 -71 -69
3F234454-CFD-4A0FF-ADF2-F4911BA9FFA6 5 7 -71 -64
...keeps updating
I am piping the command to my Python script
ibeacon scan -b | stdin.py
Here is the code for my script 'stdin.py':
#!/usr/bin/python
import fileinput
import httplib
import urllib

for line in fileinput.input():
    urllib.urlencode({"UUID": {"Major":{"Minor":RSSI}}})
    headers = {"Content-type": "application/x-www-formurlencoded","Accept": "text/plain"}
    conn = httplib.HTTPConnection("67.205.14.22")
    conn.request("POST", "post.php", params, headers)
    response = conn.getrespone()
    print response.status, respone.reason
    data = respone.read()
    print data
    conn.close()
I'm getting these errors.
Traceback (most recent call last):
File "./stdin.py", line 7, in <module>
for line in fileinput.input():
File "/usr/lib/python2.7/fileinput.py", line 253, in next
line = self.readline()
File "/usr/lib/python2.7/fileinput.py", line 346, in readline
self._buffer = self._file.readlines(self._bufsize)
KeyboardInterrupt
Is my script even getting the data correctly from the pipe? Is the formatting correct for the query string?
As @TheSoundDefense pointed out, there must be some KeyboardInterrupt character in the ibeacon output, because a quick check shows that piping on Linux actually works:
$ cat tmp.txt | python -c "import fileinput; print [line for line in fileinput.input()]"
['a\n', 'b\n', 'c\n', 'd\n']
Where tmp.txt contains 4 lines with a, b, c and d.
Do you need to have it pipe in? Because if not, you can do something like this all in your python script (thanks oliver13 for the idea):
import subprocess

popen = subprocess.Popen(["ibeacon_scan", "-b"], stdout=subprocess.PIPE)
for line in iter(popen.stdout.readline, ""):
    params = urllib.urlencode({"UUID": {"Major":{"Minor":RSSI}}})
    headers = {"Content-type": "application/x-www-form-urlencoded","Accept": "text/plain"}
    conn = httplib.HTTPConnection("67.205.14.22")
    conn.request("POST", "post.php", params, headers)
    response = conn.getresponse()
    print response.status, response.reason
    data = response.read()
    print data
    conn.close()
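The same idea in Python 3: spawn the scanner as a subprocess and turn each output line into a query string. The command name and the UUID/Major/Minor/RSSI field names are taken from the question; treating the fourth column as TX power is an assumption based on the five-field sample output:

```python
import subprocess
import urllib.parse

def scan_lines(cmd=("ibeacon_scan", "-b")):
    """Yield stripped stdout lines from the scanner process as they arrive."""
    popen = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    for line in popen.stdout:
        yield line.strip()
    popen.wait()

def encode_beacon(line):
    """Encode one scan line as a query string.

    Column meaning is assumed from the question's sample output:
    UUID, major, minor, TX power, RSSI.
    """
    uuid, major, minor, power, rssi = line.split()
    return urllib.parse.urlencode(
        {"UUID": uuid, "Major": major, "Minor": minor, "RSSI": rssi}
    )
```

A driver loop would then skip the "iBeacon Scan ..." banner lines and POST encode_beacon(line) for each reading, e.g. with http.client.HTTPConnection as in the question's code.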
