I have a client-based application written in Python that uses some sensitive APIs. One way to prevent people from finding these would be to check for known debugger processes, but that can easily be tricked by renaming the process, or by running the script on a PC that is being debugged by an external device watching the traffic.
Would there be a way to detect whether the internet connection is running through a normal IP rather than a proxy, or whether the internet traffic is being watched?
I'm not looking for a specific Pythonic way, just a general solution that I can convert into a Python script later.
Related
I got an assignment to implement this API and I don't know where to start; I've searched around with no clear results.
The closest thing I've found is using SSH to access a Linux system and running commands there that give me the details about the system. I could run Python code to execute commands on the local host using the subprocess library. Then I saw somewhere that I can't run SSH using an API.
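For example, running a command on the local host with the standard subprocess library would be something like this (uname -a is just a placeholder command):
import subprocess
# Run a command on the local host and capture its output as text
result = subprocess.run(['uname', '-a'], capture_output=True, text=True)
print(result.stdout)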
I was hoping someone could tell me where to start or how to go about this.
Thank you.
Try using a backdoor (I would recommend Python because it's easy) with a client listening on your computer to retrieve information about the system and do some relatively real-time monitoring (automated command typing). The processing of the information can be done on the client side.
Here is a close example of a keylogger backdoor: https://github.com/JAMAIKA-MHD/Simple-Key-Logger-with-Backdoor
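As a minimal, benign sketch of that client/listener idea (the host, port, and the platform call are placeholders, not a hardened design):
import socket
import platform
# Listener on the target machine: accepts one connection,
# sends back basic system information, then exits
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('0.0.0.0', 9999))  # placeholder port
server.listen(1)
conn, addr = server.accept()
conn.sendall(' '.join(platform.uname()).encode('utf-8'))
conn.close()
server.close()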
I'm having trouble finding info on this. I've seen numerous posts about sharing data between scripts running in the same environment, but not between environments.
I'm using Anaconda with 16 envs that use several different Python versions (3.5 to 3.8) for running various scripts/apps. I'm now working on shell scripts and a master Python script that will control launching the other envs, launching scripts, opening OS-native apps, and automating scripted tasks, and everything will be saving, moving, and accessing data from several master folders on the same machine. I suppose the master script will behave like a mini server, and it also lives in its own env.
What I'm trying to figure out is whether there's a way to easily pipe data between the environments, or whether I have to store things in YAML or JSON files that they can all access (for example, passing custom environment variables from the master env to all the others, one script letting another know when it has completed, or detecting when a specific terminal window has closed).
I don't need most scripts to share data with each other directly. I need feedback sent from the other envs to the master script, which will be in control of everything, print output in its terminal window, and fire up shell scripts. I need that master script to communicate with the other envs/scripts to give them new tasks and to load up the envs themselves, and so it knows when it's time to do something else: basically event listener and event handler stuff (I assume), along with watch folders. Most of them will run consecutively. Some scripts will run at the same time from their own shells in the same environment, processing different data, and at times the master script (or an individual script) will pause and await user feedback in the master script's terminal.
It might sound more complicated than it really is, as most things happen linearly and the data that needs to be shared consists of small events and variables. One thing starts and finishes, a watch folder sees new files, the master fires up a new env and launches a script to process them, then the next kicks off, then the next, then four of the same thing kick off, then an app opens, runs a task, and closes, then the user is prompted to choose which task runs next, and so on.
I found these packages which seem promising for some of the tasks:
python-dotenv and python-dotenv[cli]
watchdog and watchdog[watchmedo]
PyYAML
libyaml
appscript
inquirer
simplejson
cliclick (macOS Terminal commands for executing keyboard commands and mouse movements)
Creating lambda functions seems to be an easy way to execute OS/system commands with Python. Right now I just need to find a way to get all of these things talking to the master and vice versa, and sharing some data. One app uses JupyterLab, and I'm not sure if it's easy to automate that from another env/script.
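To illustrate the lambda idea above, a throwaway sketch (the command strings would be placeholders):
import subprocess
# One-liners for firing off shell commands and capturing output
run = lambda cmd: subprocess.run(cmd, shell=True)
capture = lambda cmd: subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout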
I don't need something with a GUI like JupyterLab (I honestly don't like its UI), and I prefer to use a single master terminal window with some simple user input options.
A point in the right direction would be greatly appreciated.
Seems the solution here is to use sockets, create a server, and create clients inside the scripts I need to use. Not sure why my searches weren't bringing up sockets, but it's the solution I needed and doesn't require dependencies.
Sockets are built into Python, so import socket can handle most of what I need.
On top of that, import threading lets multiple threads be used for clients, and I'm using os.system to send system commands. The threads are set up as daemons to avoid any trouble if a client doesn't disconnect cleanly.
This has the benefit of running on a local network, but the same approach can also be used in more complex systems to connect to remote clients and servers. Running locally, your server can use its private IPv4 address to send and receive on one machine or across the intranet.
Tutorial
I found this YouTube video by Tech With Tim going through the complete basic setup, which was a big help as I'm completely new to this.
I ended up setting up classes for the server and client because all the functionality I needed would not work right without them. The video was a good way to get my feet wet, but far from everything that was needed.
I also created a standalone task manager script which is working better than trying to make the server do everything.
Basic server setup
import socket  # standard library, no dependencies
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # TCP socket
server.bind(ADDRESS)  # ADDRESS is defined in the Vars section below
After that you need a function to handle client messages and a startup function for the server itself: use server.listen() to listen for incoming connections, and a while loop to accept each connection and kick off a new thread per client.
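Roughly, those two functions look like this in my setup (HEADER, FORMAT, and DISCONNECT are defined in the Vars section below):
import threading

def handle_client(conn, addr):
    # One thread per client: read the fixed-size header, then the message
    connected = True
    while connected:
        length = conn.recv(HEADER).decode(FORMAT)
        if length:
            msg = conn.recv(int(length)).decode(FORMAT)
            if msg == DISCONNECT:
                connected = False
            print(f'[{addr}] {msg}')
    conn.close()

def start():
    server.listen()  # listen for incoming connections
    while True:
        conn, addr = server.accept()
        # Daemon threads won't keep the process alive if a client hangs
        threading.Thread(target=handle_client, args=(conn, addr), daemon=True).start()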
I have my master controller script separate, because I found it cumbersome to have the server running in the same window where I needed user input to take place. So I just programmatically launch, size and position a new Terminal window from the master and load up the server script inside it.
Basic client setup
import socket
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # TCP socket
client.connect(ADDRESS)  # same ADDRESS tuple the server binds to
As with the server, the client will need a function for sending messages. Tim had a nice approach where the client first sends a small header telling the server how many bytes the incoming message will be, before actually sending the message itself, to ensure things don't get truncated.
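A sketch of that send function, padding the header out to a fixed width so the server always knows how many bytes to expect (HEADER, FORMAT, and DISCONNECT come from the Vars section below):
def send(msg):
    message = msg.encode(FORMAT)
    length = str(len(message)).encode(FORMAT)
    length += b' ' * (HEADER - len(length))  # pad the header to HEADER bytes
    client.send(length)   # header first: the size of the incoming message
    client.send(message)  # then the message itself

send('hello server')
send(DISCONNECT)  # tell the server this client is done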
Vars
Using environment variables and a .env file really helped streamline this setup. I did this using python-dotenv. Make sure to cast the port to an int in the main script, or it might error because it sees it as a string.
As I made my scripts more advanced, I ended up placing all my vars and dicts full of vars in a custom module that I load as needed.
import os
from dotenv import load_dotenv  # from the python-dotenv package

load_dotenv()  # pull the .env file into the environment

HEADER = 64  # header size in bytes
PORT = int(os.getenv('PORT'))  # make this an int
SERVER = os.getenv('SERVER')  # server local IP
ADDRESS = (SERVER, PORT)
FORMAT = 'utf-8'
DISCONNECT = '!killClient'
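The matching .env file is just key=value pairs (these values are placeholders):
PORT=5050
SERVER=192.168.1.10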
First of all, due to company policy, Paramiko, or installing anything else that requires administrative access to the local machine, is right out; otherwise I would have just done that.
All I have to work with is Python with standard libraries and PuTTY.
I am attempting to automate some tedious work that involves logging into a network device (usually Cisco, occasionally Alcatel-Lucent or Juniper), running some show commands, and saving the data. (I am planning on using some other scripts to pull data from this file, parse it, and do other things, but that should be irrelevant to the task of retrieving the data.) I know this can be done with Telnet; however, I need to do it via SSH.
My thought is to use PuTTY's logging ability to record output from a session to a file. I would like to use Python to establish a PuTTY session, send scripted log-in and show commands, and then close the session. Before I set out on this crusade, does anyone know of a way to do this? The closest answers I have found all suggest Paramiko or another Python SSH library; I am looking for a way to do this given the constraints I am under.
Ideally, the end result would be usable as a function, so that I can iterate through hundreds of devices from a list of IP addresses.
Thank you for your time and consideration.
If you can't use Paramiko and PuTTY is all you get, then the correct tool is actually not PuTTY itself but its little brother Plink; you can download it here.
Plink is the command-line tool for PuTTY, and your Python script can call it using os.system("plink.exe [options] username@server.com [command]").
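For example, a small wrapper using only the standard library (the device list, credentials, and show command are placeholders, and plink.exe is assumed to be on the PATH):
import subprocess

def run_show(host, user, password, command):
    # -batch disables interactive prompts; output is captured instead of logged
    result = subprocess.run(
        ['plink.exe', '-ssh', '-batch', '-l', user, '-pw', password, host, command],
        capture_output=True, text=True, timeout=60)
    return result.stdout

for ip in ['10.0.0.1', '10.0.0.2']:  # placeholder list of device IPs
    with open(f'{ip}.log', 'w') as f:
        f.write(run_show(ip, 'admin', 'secret', 'show version'))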
See the man page here.
Hope it will help,
Liron
I'm new to coding in Python, and what motivates me to start is the idea of writing a piece of software that will connect to a proxy server via SSH and then, once connected, route all of the system's network traffic through it, seamlessly for the user.
I am currently using the paramiko module to connect to the server and it works fine, but now I would like to know if there is some way to make the system change its SOCKS proxy configuration so I can route the traffic through the proxy, in a way that the user doesn't need to do anything. Is there any existing module that will help with this task?
Thank you.
Take a look at the existing project sshuttle; it transfers all traffic over SSH.
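For example, launching it from Python so the user never touches a terminal (the remote host is a placeholder, and sshuttle itself must be installed and on the PATH):
import subprocess
# Route all IPv4 traffic (0/0) transparently through the SSH server
subprocess.run(['sshuttle', '-r', 'user@sshserver', '0/0'])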
I have an interactive console application and I need to work with it from Python (sending commands and receiving output). The application is started by another one; I can't start it from my Python script.
Is it possible to connect to already running console application and get access to its stdin/stdout?
Ideally the solution should work on both Windows and Unix, but a Windows-only version would also be helpful. Currently I am using the solution found here:
http://code.activestate.com/recipes/440554/
but it doesn't allow connecting to an existing process.
Thanks for any input,
Have you considered using sockets? They are straightforward for simple streaming, and they are also platform independent.
The most critical point is thread safety: having to pass I/O streams between threads/processes tends to be hectic.
If, on the other hand, you use a socket, a lot can be communicated without adding too much complexity to how the processes work (coding an error-prone RPC, for instance).
Try the socket documentation or an example to get started.