Temporary FTP server for testing - Python

I want to write a test for my code, which uses an FTP library and uploads data via FTP.
I would like to avoid the need for a real FTP server in my test.
What is the simplest way to test my code?
There are several edge-cases which I would like to test.
For example, my code tries to create a directory which already exists.
I want to catch the exception and do appropriate error handling.
I know that I could use the mocking library; I have used it before. But maybe there is a better solution for this use case?
Update: Why I don't want to use mocking: I know that I could solve this with mocking. I could mock the library I use (ftputil by Stefan Schwarzer) and test my code that way. But what happens if I change my code and use a different FTP library in the future? Then I would need to rewrite my testing code, too. I am lazy. I want to be able to rewrite the real code I am testing without touching the test code. But maybe I am still missing a cool way to use mocking.
Solved with https://github.com/tbz-pariv/ftpservercontext

Firstly, to get this out of the way: you aren't asking about Mocking, your question is about Faking.
Fake: an implementation of an interface which exhibits correct behaviour, but cannot be used in production.
Mock: an implementation of an interface that responds to interactions based on a scripted (script as in movie script, not uncompiled code) response.
Stub: an implementation of an interface lacking any real implementation, usually used in McGuffin-style tests.
Notice that in every case the word "interface" is used.
Your question asks how to Fake a TCP port such that the behaviour is an FTP server, with the STATE of a read/write filesystem underneath.
This is hard.
It is much easier to MOCK an internal interface that throws when you call the mkdir function.
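For illustration, a minimal sketch of that mocking approach with unittest.mock; the module my_module and its upload_reports() function are made up, while the exception class is ftputil's real ftputil.error.PermanentError:

from unittest import mock

import ftputil.error

import my_module  # hypothetical module under test

def test_upload_handles_existing_directory():
    with mock.patch('my_module.ftputil.FTPHost') as mock_host_class:
        # The FTPHost instance produced by "with ftputil.FTPHost(...) as host:"
        host = mock_host_class.return_value.__enter__.return_value
        # Simulate the server refusing mkdir because the directory exists.
        host.mkdir.side_effect = ftputil.error.PermanentError(
            '550 Directory already exists.')
        my_module.upload_reports('reports/2020')  # must handle the error, not raise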
If you must FAKE an FTP server, I suggest creating a Docker container with the server in the state you want, and using Docker to handle the repeatability and lifecycle of the FTP server.
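A rough sketch of that Docker approach; the image name is a placeholder, and it assumes a local Docker daemon and an image whose FTP server listens on port 21. You would still need to wait until the containerized server accepts connections, as with any external process.

import subprocess
import uuid

class DockerFTPServerContext(object):
    """Starts a throwaway FTP container on enter, stops it on exit."""

    def __init__(self, image='your-ftp-test-image', host_port=2121):
        self.image = image          # placeholder image name
        self.host_port = host_port
        self.name = 'ftp-test-' + uuid.uuid4().hex[:8]

    def __enter__(self):
        subprocess.check_call([
            'docker', 'run', '--detach', '--rm',
            '--name', self.name,
            '-p', '{}:21'.format(self.host_port),
            self.image,
        ])
        return self

    def __exit__(self, *args):
        subprocess.call(['docker', 'stop', self.name])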

ContextManager:
import subprocess
import time

class FTPServerContext(object):

    banner = 'FTPServerContext ready'

    def __init__(self, directory_to_serve):
        self.directory_to_serve = directory_to_serve

    def __enter__(self):
        cmd = ['serve_directory_via_ftp']
        self.pipe = subprocess.Popen(cmd, cwd=self.directory_to_serve)
        time.sleep(2)  # TODO check banner via https://stackoverflow.com/a/4896288/633961
        return self  # so "with ... as ftp_context:" binds the context

    def __exit__(self, *args):
        self.pipe.kill()
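As an aside, a sketch of one way to replace the time.sleep(2) above: poll the port until the server accepts connections (2121 matches the console script below):

import socket
import time

def wait_for_port(port, host='localhost', timeout=10.0):
    """Block until a TCP connect to (host, port) succeeds, or raise."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return
        except OSError:
            time.sleep(0.1)
    raise RuntimeError('server on port {} did not come up'.format(port))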
console_script:
from pyftpdlib.authorizers import DummyAuthorizer
from pyftpdlib.handlers import FTPHandler
from pyftpdlib.servers import FTPServer

import testutils

def serve_directory_via_ftp():
    # https://pyftpdlib.readthedocs.io/en/latest/tutorial.html
    authorizer = DummyAuthorizer()
    authorizer.add_user('testuser-ftp', 'testuser-ftp-pwd', '.', perm='elradfmwMT')
    handler = FTPHandler
    handler.authorizer = authorizer
    handler.banner = testutils.FTPServerContext.banner
    address = ('localhost', 2121)
    server = FTPServer(address, handler)
    server.serve_forever()
Usage in test:
import tempfile

def test_execute_job_and_create_log(self):
    temp_dir = tempfile.mkdtemp()
    with testutils.FTPServerContext(temp_dir) as ftp_context:
        execute_job_and_create_log(...)
The code is in the public domain, under any license you want. It would be great if you made this a pip-installable package on pypi.org.

Related

Is there a python equivalent for .isConnected functionality in C#

I have been looking for the equivalent of C#'s isConnected() functionality in Python: https://learn.microsoft.com/en-us/dotnet/api/system.net.sockets.socket.connected?view=net-5.0
What I basically need is to check whether the socket connection is still open on the other end (primary server). If not, move the connection to a new socket (backup server).
I have been quite blank as to how I can move the existing connection to another server without having to reconnect again. Any kind of guidance and help will be really appreciated.
Right now, my connection to the server is made at login(). I want that, in case the primary disconnects, the existing user moves to the secondary and can keep performing file and word operations. What changes should I make in my code to achieve this?
My current code structure is:
Client side:

def file_operation():
    do_something

def word_addition():
    do_something

def login():
    s.connect((host, port))  # connect() takes a single (host, port) tuple

if __name__ == "__main__":
    ...

Server side:

def accept_client():
    s.accept()
    do_something

def GUI():
    accept_client()

if __name__ == "__main__":
    ...
According to the Python Sockets documentation, no, there isn't. Python sockets are merely a wrapper around the Berkeley Sockets API, so whatever is in that API, that's what's in the Python API.
If you look at the source code for Microsoft's Socket.cs class, you'll see that they maintain a boolean flag field m_IsConnected to indicate the socket's current connection status. You could potentially do the same with your own custom sockets class, using Microsoft's code as a model for writing your Python code.
Or, you could simply use the socket as you normally would, and switch to a different server when a socket error occurs.
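A rough sketch of that last suggestion; the host names and the re-login step are illustrative, not taken from your code:

import socket

SERVERS = [('primary.example.com', 9000), ('backup.example.com', 9000)]

def connect_to_any_server():
    for address in SERVERS:
        try:
            return socket.create_connection(address, timeout=5)
        except OSError:
            continue  # this server is down, try the next one
    raise ConnectionError('no server reachable')

def send_with_failover(sock, data):
    try:
        sock.sendall(data)
        return sock
    except OSError:
        sock.close()
        new_sock = connect_to_any_server()  # re-run your login() logic here
        new_sock.sendall(data)
        return new_sock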

Closing client connection to kubernetes API server in python client

I am using the kubernetes-client library in Python and, looking at the various examples, it appears we don't need to explicitly close the client connection to the API server. Does the client connection get terminated automatically, or are the examples missing the call to close the connection? I also found the docs page for the APIs (AppsV1, for example), and the examples shown there use a context manager for the calls, so the connection gets closed automatically there, but I still have questions about the scripts that don't use the context-manager approach.
Kubernetes's API is HTTP-based, so you can often get away without explicitly closing a connection. If you have a short script, things should get cleaned up automatically at the end of the script and it's okay to not explicitly close things.
The specific documentation page you link to shows a safe way to do it:
with kubernetes.client.ApiClient(configuration) as api_client:
    api_instance = kubernetes.client.AppsV1Api(api_client)
    api_instance.create_namespaced_controller_revision(...)
The per-API-version client object is stateless if you pass in an ApiClient to its constructor, so it's safe to create these objects as needed.
The ApiClient class includes an explicit close method, so you could also do this (less safely) without the context-manager syntax:
api_client = kubernetes.client.ApiClient(configuration)
apps_client = kubernetes.client.AppsV1Api(api_client)
...
api_client.close()
The library client front-page README suggests a path that doesn't explicitly create an ApiClient. Looking at the generated code, if you don't pass an ApiClient in explicitly, a new one, with its own connection pool, is created for each API-version client object. That can leak local memory and extra connections to the cluster, though this might not matter for small scripts.
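So if you make many calls, a reasonable pattern is to create one ApiClient and share it across the per-API-version objects. A sketch, assuming a reachable, configured cluster:

import kubernetes.client
import kubernetes.config

kubernetes.config.load_kube_config()  # or load_incluster_config() inside a pod

with kubernetes.client.ApiClient() as api_client:
    apps = kubernetes.client.AppsV1Api(api_client)
    core = kubernetes.client.CoreV1Api(api_client)
    # Both API objects share the one connection pool owned by api_client.
    deployments = apps.list_deployment_for_all_namespaces()
    namespaces = core.list_namespace()
# Leaving the with-block closes the pool exactly once.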

How to check if a Python module is sending my data when I use it?

The title pretty much says it.
I need to make sure that while I am working with Python modules there isn't any sort of malicious code in the module, specifically the type that scrapes data from the machine running the code and sends it elsewhere.
Do I have a method of doing that with Python?
Can I be certain of this even when I am using modules like requests for sending and receiving HTTP GET/POST requests?
I mean, is there a way to check this without reading every line of code in the module?
Your question is not really specific to Python; it is a general security question. Python is a dynamic language, so checking whether a module behaves correctly is near impossible. However, what you can do is set up a virtual machine sandbox, run your program with some fake data, and check whether the guest machine tries to make any strange connections. You can then inspect where data is being sent and in what format, and trace it back to the malicious code fragment in one of the modules.
EDIT
The only other option is if you are sure which method/function the malicious code will use. If it is, for example, the requests library, you could patch the post() method to check the destination or the payload being sent. However, the malicious code could use its own implementation, so you cannot be 100% sure.
A link on how to patch the post() method:
How to unit test a POST method in python?
It's better to have a global approach, using tools like Wireshark, for example, which let you sniff the packets sent and received by your machine.
That said, in Python you could overwrite the methods you're suspicious about. Here's the idea:
import requests

def write_to_logs(message):
    print(message)  # or store in a log file instead

original_get = requests.get

def mocked_get(*args, **kwargs):
    write_to_logs('get method triggered with args = {}, kwargs= {}'.format(args, kwargs))
    return original_get(*args, **kwargs)  # return the real response to the caller

requests.get = mocked_get
response = requests.get('http://google.com')
Output:
get method triggered with args = ('http://google.com',), kwargs= {}
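On Python 3.8+ there is also an in-process option that does not require patching each library: an audit hook fires on every socket.connect, whichever module makes the call. A minimal sketch:

import sys

def report_connections(event, args):
    if event == 'socket.connect':
        sock, address = args
        print('outgoing connection to', address)

sys.addaudithook(report_connections)  # note: hooks cannot be removed once installed

import requests
requests.get('http://google.com')  # the hook prints every connect this triggers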

Starting and stopping flask on demand

I am writing an application which can expose a simple RPC interface implemented with flask. However, I want it to be possible to activate and deactivate that interface. Also, it should be possible to have multiple instances of the application running in the same Python interpreter, each with its own RPC interface.
The service is only exposed to localhost and this is a prototype, so I am not worried about security. I am looking for a small and easy solution.
The obvious way here seems to be to use the flask development server, however I can't find a way to shut it down.
I have created a flask blueprint for the functionality I want to expose and now I am trying to write a class to wrap the RPC interface similar to this:
from threading import Thread

from flask import Flask

class RPCInterface:
    def __init__(self, creating_app, config):
        self.flask_app = Flask(__name__)
        self.flask_app.config.update(config)
        self.flask_app.my_app = creating_app
        self.flask_app.register_blueprint(my_blueprint)
        self.flask_thread = Thread(target=Flask.run, args=(self.flask_app,),
                                   name='flask_thread', daemon=True)

    def shutdown(self):
        # Seems impossible with the flask server
        raise NotImplementedError()
I am using the my_app attribute of the current app to pass the instance of my application this RPC interface works for into the context of the requests.
It can be shut down from inside a request (as described here: http://flask.pocoo.org/snippets/67/), so one solution would be to create a shutdown endpoint and send a request with the test client to initiate a shutdown. However, that requires a flask endpoint just for this purpose, which is far from clean.
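For reference, that linked snippet boils down to the following, with app standing in for the Flask instance (self.flask_app above). It relies on the werkzeug.server.shutdown callable, which only exists when running under the Werkzeug development server:

from flask import request

@app.route('/shutdown', methods=['POST'])
def shutdown():
    func = request.environ.get('werkzeug.server.shutdown')
    if func is None:
        raise RuntimeError('Not running with the Werkzeug Server')
    func()
    return 'Server shutting down...'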
I looked into the source code of flask and werkzeug and figured out the important part (Context at https://github.com/pallets/werkzeug/blob/master/werkzeug/serving.py#L688) looks like this:
def inner():
    try:
        fd = int(os.environ['WERKZEUG_SERVER_FD'])
    except (LookupError, ValueError):
        fd = None
    srv = make_server(hostname, port, application, threaded,
                      processes, request_handler,
                      passthrough_errors, ssl_context,
                      fd=fd)
    if fd is None:
        log_startup(srv.socket)
    srv.serve_forever()
make_server returns an instance of werkzeug's server class, which inherits from Python's http.server.HTTPServer; this in turn is a socketserver.BaseServer, which exposes a shutdown method. The problem is that the server created here is just a local variable and thus not accessible from anywhere.
This is where I ran into a dead end. So my question is:
Does anybody have another idea how to shut down this server easily?
Is there any other simple server to run flask on? Something that does not require an external process and can be started and stopped in a few lines of code? Everything listed in the flask docs seems to require a complex setup.
Answering my own question in case this ever happens again to anyone.
The first solution involved switching from flask to klein. Klein is basically flask with fewer features, but running on top of the twisted reactor. This way the integration is very simple. Basically it works like this:
from klein import Klein
from twisted.internet import reactor
from twisted.internet.endpoints import serverFromString
from twisted.web.server import Site

app = Klein()

@app.route('/')
def home(request):
    return 'Some website'

endpoint = serverFromString(reactor, endpoint_string)
endpoint.listen(Site(app.resource()))
reactor.run()
Now all the twisted tools can be used to start and stop the server as needed.
The second solution I switched to further down the road was to get rid of HTTP as a transport protocol. I switched to JSONRPC on top of twisted's LineReceiver protocol. This way everything got even simpler and I didn't use any of the HTTP stuff anyway.
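A minimal sketch of that second setup; the one-line dispatch is a placeholder for a real JSON-RPC handler:

import json

from twisted.internet import reactor
from twisted.internet.protocol import ServerFactory
from twisted.protocols.basic import LineReceiver

class JsonRpcProtocol(LineReceiver):
    delimiter = b'\n'

    def lineReceived(self, line):
        request = json.loads(line)
        # Placeholder: a real implementation would dispatch on request['method'].
        response = {'id': request.get('id'), 'result': 'ok'}
        self.sendLine(json.dumps(response).encode('utf-8'))

factory = ServerFactory()
factory.protocol = JsonRpcProtocol
port = reactor.listenTCP(4000, factory, interface='127.0.0.1')
# port.stopListening() stops accepting connections; reactor.run() starts the loop.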
This is a terrible, horrendous hack that nobody should ever use for any purpose whatsoever... except maybe if you're trying to write an integration test suite. There are probably better approaches - but if you're trying to do exactly what the question is asking, here goes...
import sys
from socketserver import BaseServer  # the class exposing shutdown() is BaseServer

# implementing the shutdown() method above
def shutdown(self):
    for frame in sys._current_frames().values():
        while frame is not None:
            if 'srv' in frame.f_locals and isinstance(frame.f_locals['srv'], BaseServer):
                frame.f_locals['srv'].shutdown()
                break
            frame = frame.f_back  # walk outwards through this thread's stack
        else:
            continue  # server frame not found in this thread, try the next
        break  # found and shut down, stop searching
    self.flask_thread.join()
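A much less intrusive variant of the same idea is to skip app.run() entirely and call werkzeug's make_server yourself, so the server object (and its shutdown() method) stays in your hands. A sketch:

from threading import Thread

from flask import Flask
from werkzeug.serving import make_server

app = Flask(__name__)

server = make_server('127.0.0.1', 5000, app)
thread = Thread(target=server.serve_forever, name='flask_thread', daemon=True)
thread.start()
# ... later, from any thread:
server.shutdown()  # the inherited socketserver.BaseServer.shutdown()
thread.join()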

using pyunit on a network thread

I am tasked with writing unit tests for a suite of networked software written in Python. Writing unit tests for message builders and other static methods is very simple, but I've hit a wall when it comes to writing tests for network-looped threads.
For example: the server it connects to could be on any port, and I want to be able to test the ability to connect to numerous ports (in sequence, not in parallel) without actually having to run numerous servers. What is a good way to approach this? Perhaps make server construction and destruction part of the test? Something tells me there must be a simpler answer that evades me.
I have to imagine there are methods for unit testing networked threads, but I can't seem to find any.
I would try to introduce a factory into your existing code that purports to create socket objects. Then in a test, pass in a mock factory which creates mock sockets that just pretend they've connected to a server (or not, for the error cases, which you also want to test, don't you?) and log the message traffic to prove that your code has used the right ports to connect to the right types of servers.
Try not to use threads just yet, to simplify testing.
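A rough sketch of that factory idea; NetworkClient is a hypothetical stand-in for your class under test, and the point is that it asks the factory for its sockets instead of calling socket.socket directly:

class FakeSocket:
    """Pretends to be a connected socket and records the traffic."""

    def __init__(self, log):
        self.log = log

    def connect(self, address):
        self.log.append(('connect', address))

    def sendall(self, data):
        self.log.append(('send', data))

    def close(self):
        self.log.append(('close',))

def test_connects_to_each_port_in_sequence():
    log = []
    client = NetworkClient(socket_factory=lambda: FakeSocket(log))  # hypothetical
    client.connect_and_send([8001, 8002], b'hello')
    assert ('connect', ('localhost', 8001)) in log
    assert ('connect', ('localhost', 8002)) in log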
It depends on how your network software is layered and how detailed you want your tests to be, but it's certainly feasible in some scenarios to make server setup and tear-down part of the test. For example, when I was working on the Python logging package (before it became part of Python), I had a test (I didn't use pyunit/unittest - it was just an ad-hoc script) which fired up (in one test) four servers to listen on TCP, UDP, HTTP and HTTP/SOAP ports, and then sent network traffic to them. If you're interested, the distribution is here and the relevant test script in the archive to look at is log_test.py. The Python logging package has of course come some way since then, but the old package is still around for use with versions of Python < 2.3 and >= 1.5.2.
I have some test cases that run a server in setUp and close it in tearDown. I don't know if it is a very elegant way to do it, but it works for me.
I am happy to have it and it helps me a lot.
If the server init is very long, an alternative would be to automate it with Ant. Ant would run/stop the server before/after executing the tests.
See here for a very interesting tutorial about Ant and Python.
You would need to create mock sockets. The exact way to do that depends on how you create sockets, and creating a socket generator would be a good idea. You can also use a mocking library like pymox to make your life easier. It can also possibly eliminate the need to create a socket generator just for the sole purpose of testing.
Using pymox, you would do something like this:
import socket

import mox

def test_connect(self):
    m = mox.Mox()
    m.StubOutWithMock(socket, 'socket')
    socket_mock = m.CreateMockAnything()
    # Record the expected calls: one socket creation, then three connects.
    socket.socket(socket.AF_INET, socket.SOCK_STREAM).AndReturn(socket_mock)
    socket_mock.connect(('test_server1', 80))
    socket_mock.connect(('test_server2', 81))
    socket_mock.connect(('test_server3', 82))
    m.ReplayAll()
    code_to_be_tested()
    m.VerifyAll()
    m.UnsetStubs()
