pytest - simulate network failure - python

I am attempting an integration test using pytest for an application developed with Python 3.7 and asyncio. The application is supposed to connect to a remote server, and if the network fails, it should detect this and attempt to reconnect at a specified interval. Typically, in my integration test, I have the remote server already running and listening on a TCP port. My application should connect to that port, and I will check that the connection was successful. Then I need to simulate a network outage in which the application loses its connection to the server, test the behaviour of the application while the network is not operational, then bring the network back online and confirm that the app properly reconnects and performs its tasks. For the purposes of my integration testing, all of this runs on localhost.
Does pytest already have something for this use case, or should I build some sort of proxy server myself? How would I go about doing this?

Pytest doesn't have any feature to simulate network failure, because it is just a test runner.
You need an external mock server that can emulate connection failures or long response times. For this purpose I use and recommend the mock server Mountebank: http://www.mbtest.org/
With Mountebank you can emulate any response from the remote server. You can manage Mountebank's behaviour directly through its REST API or via client libraries: http://www.mbtest.org/docs/clientLibraries
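For illustration, here is a minimal sketch of driving Mountebank's REST API from a test, assuming Mountebank is already running on its default port 2525 and your application is pointed at the imposter's port (4545, the helper names, and the test skeleton are made up for the example):

    import requests

    MB_URL = "http://localhost:2525/imposters"

    def start_fake_server():
        # Create a TCP imposter for the application to connect to.
        requests.post(MB_URL, json={
            "protocol": "tcp",
            "port": 4545,
            "mode": "text",
        }).raise_for_status()

    def kill_network():
        # Deleting the imposter closes the port, so the app sees a dead network.
        requests.delete(MB_URL + "/4545").raise_for_status()

    def test_reconnect_after_outage():
        start_fake_server()
        # ... assert that the app connects ...
        kill_network()
        # ... assert that the app notices the outage and keeps retrying ...
        start_fake_server()  # the "network" is back
        # ... assert that the app reconnects and resumes its tasks ...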

I know this is an old question, and the solution is not related to pytest, but given how the title is worded, it may be useful to people who come here for a similar reason. There is a library that simulates an internet failure. The original repository has a bug in the main library, but it is fixed in the fork I made.
Original repository
My fork

Related

How to detect if my python script is getting proxied/debugged

I have a client-based application written in Python that uses some sensitive APIs. One way to prevent people from finding these would be to check for known debugger processes running, but this can easily be tricked by renaming the process, or by running the script on a PC and having an external device inspect the traffic.
Would there be a way to detect whether the internet connection is going through a normal IP rather than a proxy, or whether the internet traffic is being watched?
I'm not looking for a specific Pythonic way, just a general solution that I can convert into a Python script later.
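For what it's worth, here is a minimal sketch of the process-name check mentioned above, using the third-party psutil package (the blacklist of names is purely illustrative); as the question itself notes, simply renaming a process defeats this entirely:

    import psutil

    # Illustrative blacklist of well-known traffic/debugging tools.
    SUSPICIOUS = {"wireshark", "fiddler", "charles", "mitmproxy", "tcpdump"}

    def analysis_tool_running() -> bool:
        for proc in psutil.process_iter(["name"]):
            name = (proc.info["name"] or "").lower()
            if any(tool in name for tool in SUSPICIOUS):
                return True
        return False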

Understanding smb and DCERPC for remote command execution capabilities

I'm trying to understand all the methods available to execute remote commands on Windows through the impacket scripts:
https://www.coresecurity.com/corelabs-research/open-source-tools/impacket
https://github.com/CoreSecurity/impacket
I understand the high-level explanation of psexec.py and smbexec.py, how they create a service on the remote end and run commands through cmd.exe /c, but I can't understand how you can create a service on a remote Windows host through SMB. Wasn't SMB supposed to be mainly for file transfers and printer sharing? Reading the source code, I see in the notes that they use DCERPC to create these services. Is this part of the SMB protocol? All the resources on DCERPC I've found were kind of confusing and not focused on its service-creating capabilities. Looking at the source code of atexec.py, it says that it interacts with the Task Scheduler service of the Windows host, also through DCERPC. Can it be used to interact with all services running on the remote box?
Thanks!
DCERPC (https://en.wikipedia.org/wiki/DCE/RPC) is the original protocol, which served as the template for MSRPC (https://en.wikipedia.org/wiki/Microsoft_RPC).
MSRPC is a way to execute functions on the remote end and to transfer data (the parameters to those functions). It is not a way to directly execute OS commands on the remote side.
SMB (https://en.wikipedia.org/wiki/Server_Message_Block) is the file-sharing protocol mainly used to access files on Windows file servers. In addition, it provides Named Pipes (https://msdn.microsoft.com/en-us/library/cc239733.aspx), a way to transfer data between a local process and a remote process.
One common way to use MSRPC is via Named Pipes over SMB, which has the advantage that the security layer provided by SMB is reused directly for MSRPC.
In fact, MSRPC is one of the most important, yet least known, protocols in the Windows world.
Neither MSRPC nor SMB by itself has anything to do with remote execution of shell commands.
One common way to execute remote commands is:
1. Copy files (via SMB) to the remote side (a Windows service EXE).
2. Create registry entries on the remote side (so that the copied Windows service is installed and startable).
3. Start the Windows service.
4. The started Windows service can use any network protocol (e.g. MSRPC) to receive commands and execute them.
5. After the work is done, the Windows service can be uninstalled (remove the registry entries and delete the files).
In fact, this is what PSEXEC does.
"All the resources on DCERPC I've found were kind of confusing, and not focused on its service creating capabilities."
Yes, it's just a remote procedure call protocol. But it can be used to start a procedure on the remote side, and that procedure can do just about anything, e.g. create a service.
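As a rough sketch of that idea: this is approximately how a service can be created over MSRPC (MS-SCMR, reached through the svcctl named pipe) with impacket, which is essentially what psexec.py automates. The host, credentials and binary path are made up, and the exact helper names and signatures may differ between impacket versions:

    from impacket.dcerpc.v5 import transport, scmr

    # Bind to the MS-SCMR (service control) interface via the svcctl pipe.
    rpc = transport.SMBTransport("192.168.1.10", 445, r"\svcctl",
                                 username="admin", password="secret")
    dce = rpc.get_dce_rpc()
    dce.connect()
    dce.bind(scmr.MSRPC_UUID_SCMR)

    # Open the Service Control Manager, then create and start a service whose
    # binary was previously copied over via SMB (strings are null-terminated).
    sc_manager = scmr.hROpenSCManagerW(dce)["lpScHandle"]
    service = scmr.hRCreateServiceW(
        dce, sc_manager, "MySvc\x00", "MySvc\x00",
        lpBinaryPathName="C:\\Windows\\Temp\\payload.exe\x00")["lpServiceHandle"]
    scmr.hRStartServiceW(dce, service)

    # Clean up afterwards, just like psexec.py does.
    scmr.hRDeleteService(dce, service)
    scmr.hRCloseServiceHandle(dce, service)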
"Looking at the sourcecode of atexec.py, it says that it interacts with the task scheduler service of the windows host, also through DCERPC. Can it be used to interact with all services running on the remote box?"
There are some MSRPC interfaces that handle the Task Scheduler (this is what atexec.py uses), and others that handle generic service creation, start and stop commands.
A few final words:
SMB/CIFS and the protocols around it are really complex and hard to understand. It is fine to try to understand how to deal with, e.g., remote service control, but it can be a very long journey.
Perhaps this page (which uses Java to try to control a Windows service) may also help with understanding:
https://dev.c-ware.de/confluence/pages/viewpage.action?pageId=15007754

Using client and server websockets in same python script

I am new to Python and work on Slackware Linux with Python 3.4.3. I prefer simple, no-dependency solutions within one single Python script.
I am building a daemonized server program (A) which I need to access through both a regular shell CLI and GUIs in my web browser: it serves various files, uses a corresponding database, and updates a Firefox tab through Python's webbrowser module. Currently, I access process (A) via the CLI or a threaded network socket. This all started to work in a localhost scenario with all processes running on one machine.
Now, it turns out that the WebSocket protocol would make my setup dramatically simpler and cut out the traditional flows that use Apache and complex frameworks as middlemen.
1st central question: How do I access daemon (A) with WebSockets from the CLI? I thought about firing up a non-daemonized version of my server program, now called (B), and sending a call to its counterpart (A) via the WebSocket protocol. This would make process (B) a WebSocket client and process (A) a WebSocket server. Is such communication possible at all today?
2nd question: Which template solution is best suited for this scenario and works with Python 3.4.3? I started to play with Pithikos' very sleek python-websocket-server template (see https://github.com/Pithikos/python-websocket-server), but I am unable to use it as a client (initiating the network call) to call its server equivalent (receiving the call while residing in a daemonized process).
Problem 'solved': I gave up on the zero-dependency, zero-library idea:
pip install websockets
https://websockets.readthedocs.io
It works like a charm. The WebSocket server sits in the daemon process and receives and processes the WebSocket client calls that come from the CLI processes and from the HTML GUIs.
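For anyone landing on this later, here is a condensed sketch of that layout using the websockets library. It uses modern async/await syntax; on Python 3.4 you would need an old websockets release and the @asyncio.coroutine style, and the handler signature differs slightly across versions (older ones also receive a path argument):

    import asyncio
    import sys
    import websockets

    async def handler(ws):
        async for command in ws:           # each CLI call sends one command
            await ws.send("done: " + command)

    async def run_server():                # lives inside the daemon process (A)
        async with websockets.serve(handler, "localhost", 8765):
            await asyncio.Future()         # serve forever

    async def run_client(command):         # the short-lived CLI process (B)
        async with websockets.connect("ws://localhost:8765") as ws:
            await ws.send(command)
            print(await ws.recv())

    if __name__ == "__main__":
        if sys.argv[1:] == ["serve"]:
            asyncio.run(run_server())
        else:
            asyncio.run(run_client(" ".join(sys.argv[1:]) or "ping"))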

Writing a proxyfier application with python

I'm new to coding in Python, and what motivated me to start coding is the idea of writing a piece of software that connects to a proxy server via SSH and then, once connected, routes all of the system's network traffic through it, seamlessly to the user.
I am currently using the paramiko module to connect to the server, and it works fine, but now I would like to know if there is some way to make the system change its SOCKS proxy configuration so I can route the traffic through the proxy without the user needing to do anything. Is there an existing module that would help with this task?
Thank you.
You can look at the existing project sshuttle, which forwards all traffic over SSH (a typical invocation is sshuttle -r user@host 0/0).

What is the "nice" way to debug django when requests originate remotely?

When someone is remotely hitting a Django server (say, not with a browser, but with a robot or other automated tool), what is the "nice" way for me to trace what the server is doing, and attempt to debug any problems?
What you should do
Debugging should not be done on a production server, so you should use a development server, where you can simply use manage.py runserver plus import pdb; pdb.set_trace().
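For example (the view name here is made up), hitting this view from the remote tool drops you into a debugger prompt in the terminal that is running manage.py runserver:

    # views.py
    from django.http import HttpResponse

    def webhook(request):
        import pdb; pdb.set_trace()   # inspect request.body, request.META, ...
        return HttpResponse("ok")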
Why you might not be able to do that
Say your dev server is running on a platform like Heroku: you might not be able to control how your script is started. In that case, remote debugging is still possible, and here's how you could do it.
What you could do
If you want to be able to step through code execution and debug remotely (which is totally inappropriate for a production setup), you can use rpdb. I insist: you shouldn't be doing this unless you know what you're doing (and provided you're not doing it on a production server!).
Basically, what rpdb does is that when you call rpdb.set_trace(), pdb is started and its stdin and stdout are redirected to port 4444 (you can change that, of course). You'd then telnet (or netcat, for that matter) to that port and do your debugging from there.
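A minimal sketch, assuming the same made-up Django view as above:

    import rpdb
    from django.http import HttpResponse

    def webhook(request):
        rpdb.set_trace()              # pdb now listens on 127.0.0.1:4444
        return HttpResponse("ok")

    # then, from a shell on the server:  telnet 127.0.0.1 4444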
Closing words
Really, you shouldn't be doing this.
