Software Validation Server in Python?

I have been working on a huge project for work for a while now, and it is almost done. However, in an effort to prevent the program from being pirated (I already know there is pretty much no method that can't be cracked), the software needs to be able to validate itself. I'm not exactly sure how to do this. Could some sort of software validation server be written in Python? How would the software communicate with the server? Would the software check each time it is launched to see if it is valid? The program requires internet access to run anyway, so checking for validation at each launch might not be so bad.
I am programming in Python 2.6 on Windows 7. Any help would be great!

The software, when starting, should launch an https (so it can't just be sniffed easily;-) request to your server, identifying itself (however it is that you choose to identify, e.g. a serial number or whatever), and the server's response will tell it what to do (run normally, or terminate, or ask the user to register -- whatever).
Of course, any competent hacker will find and disable the part of your code where you're sending the request and dispatching on the answer, but then you already do know that everything can easily be cracked;-).
A less-easily crackable approach would be to keep some crucial part of the functionality on your server, so that the client's basically useless (or at least less useful) if it hasn't checked in with your server and obtained a token to be used in other "functionality requests" during a session.
Hard to tell, without knowing a lot more about your app, if there are bits and pieces of functionality in your app that lend themselves well to this treatment, but for example you could delegate in this way any kind of cryptographic functionality (encrypting, decrypting, signing, ...) -- if only your server knows the secret/private keys to be used for such purposes, and only performs the functionality for application sessions that have properly registered and been authorized, suddenly it's become very hard for even a good hacker to work around your registration and authorization system.
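As a rough sketch of the phone-home check described above (the URL, serial, and OK/DENY convention are all made up for illustration, using only the standard library):

import urllib2

VALIDATION_URL = "https://licensing.example.com/check"   # hypothetical endpoint
SERIAL = "ABC-123"                                        # however you identify this install

def is_licensed():
    try:
        reply = urllib2.urlopen("%s?serial=%s" % (VALIDATION_URL, SERIAL))
        return reply.read().strip() == "OK"    # server decides: OK / DENY / REGISTER ...
    except urllib2.URLError:
        return False    # or fail open, if you'd rather not punish offline users

if not is_licensed():
    raise SystemExit("This copy could not be validated.")

(Note that Python 2.6's urllib2 does not verify the server's certificate, so the https here protects against sniffing, not spoofing.)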

I would really urge you not to do this. As you said, whatever you do will be broken, and you may actually cause more copies of your software to be pirated by including this barrier. Asking your users nicely not to steal may do better...
That said, implementing this in a way that discourages the most casual piracy is easy: just have the program send a serial number encrypted with the server's public key to your validation script, and have the server return a version of the number encrypted using its private key. Instant validation. Yes, this server could be written in Python easily.
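A minimal sketch of such a validation server in Python's standard library, with one substitution: it signs the serial with a shared-secret HMAC rather than the public/private key pair described above, purely to keep the sketch dependency-free (all names are illustrative):

import hmac, hashlib
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

SECRET = "server-side-secret"    # stand-in for a real private key

class ValidateHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        serial = self.path.rsplit("/", 1)[-1]    # e.g. GET /validate/ABC-123
        token = hmac.new(SECRET, serial, hashlib.sha256).hexdigest()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(token)    # client compares this against its expected token

HTTPServer(("", 8000), ValidateHandler).serve_forever()

The catch with an HMAC is that the client needs the same secret to verify the token, and a cracker can extract it; that is exactly why the public-key version is preferable beyond a sketch.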


Load spike protection for Django Channels

Is there anything specific that can be done to help make a Django Channels server less susceptible to light or accidental DDoS attack or general load increase from websocket/HTTP clients? Since Channels is not truly asynchronous (still workers behind the scenes), I feel like it would be quite easy to take down a Channels-based website - even with fairly simple hardware. I'm currently building an application on Django Channels and will run some tests later to see how it holds up.
Is there some form of throttling built in to Daphne? Should I implement some application-level throttling? This would still be slow, since a worker still handles the throttled request, but handling it would be much faster. Is there anything else I can do to attempt to thwart these attacks?
One thought I had was to always ensure there are workers designated for specific channels - that way, if the websocket channel gets overloaded, HTTP will still respond.
Edit: I'm well aware that low-level DDoS protection is an ideal solution, and I understand how DDoS attacks work. What I'm looking for is a solution built in to channels that can help handle an increased load like that. Perhaps the ability for Daphne to scale up a channel and scale down another to compensate, or a throttling method that can reduce the weight per request after a certain point.
I'm looking for a daphne/channels specific answer - general answers about DDoS or general load handling are not what I'm looking for - there are lots of other questions on SO about that.
I could also control throttling based on who's logged in and who is not - a throttle for users who are not logged in could help.
Edit again: Please read the whole question! I am not looking for general DDoS mitigation advice or explanations of low-level approaches. I'm wondering if Daphne has support for something like:
Throttling
Dynamic worker assignment based on queue size
Middleware to provide priority to authenticated requests
Or something of that nature. I am also going to reach out to the Channels community directly on this as SO might not be the best place for this question.
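For reference, the kind of application-level throttling I mean is roughly this (plain Python, nothing Daphne-specific; all names are made up):

import time

class TokenBucket(object):
    """Allow `rate` requests per second, with bursts up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.stamp = float(capacity), time.time()

    def allow(self):
        now = time.time()
        self.tokens = min(self.capacity, self.tokens + (now - self.stamp) * self.rate)
        self.stamp = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets = {}    # e.g. keyed by IP, with a higher rate for authenticated users

def should_reject(key, rate=5, capacity=10):
    bucket = buckets.setdefault(key, TokenBucket(rate, capacity))
    return not bucket.allow()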
I've received an answer from Andrew Godwin. He doesn't use StackOverflow so I'm posting it here on his behalf.
Hi Jamie,
At the moment Channels has quite limited support for throttling - it pretty much consists of an adjustable channel size for incoming connections which, when full, will cause the server to return a 503 error. Workers are load-balanced based on availability due to the channels design, so there's no risk of a worker gaining a larger queue than others.
Providing more advanced DoS or DDoS protection is probably not something we can do within the scope of Channels itself, but I'd like to make sure we provide the appropriate hooks. Were there particular things you think we could implement that would help you write some of the things you need?
(It's also worth bearing in mind that right now we're changing the worker/consumer layout substantially as part of a major rewrite, which is going to mean different considerations when scaling, so I don't want to give too precise advice just yet)
Andrew
He's also written about the 2.0 migration in his blog.
I am only answering the first question. Basically, it is impossible to be 100% protected from DDoS attacks, because it always comes down to a battle of resources: if the server-side resources are greater than the attacker-side resources, the server will not go down (though performance may suffer); if not, the server goes down [no reference required]. Why is 100% protection impossible? A server effectively "crashes" if people cannot connect to it (https://en.wikipedia.org/wiki/Crash_(computing)#Web_server_crashes). So if you try to protect your server by shutting it down for 5 minutes every time 10,000 connections are made in a second, the DDoS has succeeded: it "crashed" your server. The only DDoS protection I know of that should work is Cloudflare (https://www.cloudflare.com/lp/ddos-b/). It absorbs the impact of an attack with its 10 Tbps network backbone, but even it does not offer 100% protection, because once its 10 Tbps is saturated, your server will go down too. So, I hope that helped.
DDoS = Distributed Denial of Service
The 'Distributed' part is the key: you can't know you're being attacked by 'someone' in particular, because requests come from all over the place.
Your server will only accept a certain number of connections. If the attacker manages to create so many connections that nobody else can connect, you're being DDoS'ed.
So, in essence you need to be able to detect that a connection is not legit, or you need to be able to scale up fast to compensate for the limit in number of connections.
Good luck with that!
DDoS protection should really be a service from your cloud provider, at the load balancer level.
Companies like OVH use sophisticated machine learning techniques to detect illegitimate traffic and ban the IPs acting out in quasi-real time.
For you to build such detection machinery would be a huge investment that is probably not worth your time (unless your web site is so critical that it will lose millions of $$$ if it's down for a bit).
There's not a lot you can do about DDoS; however, there are some neat 'tricks', depending on how much resource you have at your disposal and how much somebody wants to take you offline.
Are you offering a totally public service that requires a direct connection to the resource you are trying to protect?
If so, you're just going to need to 'soak up' the DDoS with the resources you have, by scaling up and out... or even elastically... either way it's going to cost you money!
Or make it harder for the attacker to consume your resources. There are a number of methods to do this.
If your service requires some kind of authentication, then separate your authentication services from the resource you are trying to protect.
In many applications, the authentication and the 'service' run on the same hardware. That's a DoS waiting to happen.
Only let fully authenticated users access the resources you are trying to protect, with dynamic firewall filtering rules. If you're authenticated, the gate to the resources opens (with a restricted QoS in place)! If you're a well-known, long-term trusted user, then access the resource at full bore.
Have a way of auditing users' resource behaviour (network, memory, CPU); if you see particular accounts using bizarre amounts, ban them or impose a limit, finally leading to a firewall drop policy on their traffic.
Work with an ISP that has systems in place that can drop traffic to your specification at the ISP border... OVH are your best bet. An ISP that exposes filtering and traffic dropping as an API... I wish they existed... basically moving your firewall filtering rules to the AS border... niiiiice! (fantasy)
It won't stop DDoS, but it will give you a few tools to keep the resources wasted and consumed by attackers at a manageable level. The DDoSers would have to turn to your authentication servers... (possible), or compromise many user accounts... and already-authenticated users would still have access :-)
If the DDoS is consuming all your ISP bandwidth, that's a harder problem: move to a larger ISP! Or move ISPs... :-). Hide your main resource, allow it to be moved dynamically, keep on the move! :-).
Break the problem into pieces... apply DDoS controls on the smaller pieces. :-)
I've tried to give a most-general answer, but there are a lot of 'it depends' here; each DDoS mitigation needs a 'skin, not tin' approach... really, you need an anti-DDoS ninja on your team. ;-)
Take a look at distributed protocols... DPs may be the answer to DDoS.
Have fun.
Let's apply some analysis to your question. A DDoS is like a DoS, but with friends. If you want to avoid DDoS exploitation, you need to minimize DoS possibilities. Thanks, Captain Obvious.
The first thing to do is make a list of what happens in your system and which resources are affected:
A TCP handshake is performed (SYN cookies are affected)
An SSL handshake comes later (entropy, CPU)
A connection is made to the channel layer...
Then monitor each resource and try to implement a counter-measure:
Protect against SYN floods by configuring your kernel params and firewall
Use entropy generators
Configure your firewall to limit open/closed connections in a short time (an easy way to minimize SSL handshakes)
...
Separate your big problem (DDoS) into many simple, easy-to-correct tasks. The hard part is getting a detailed list of steps and resources.
Excuse my poor English.

Multiple chat rooms - Is using ports the only way? What if there are hundreds of rooms?

Need some direction on this.
I'm writing a chat room browser-application, however there is a subtle difference.
These are collaboration chats where one person types and the other person can see, live, every keystroke entered by the other person as they type.
Also the chat space is not a single line but a textarea, like the one here (SO) used to enter a question.
All keystrokes including tabs/spaces/enter should be visible live to the other person. And only one person can type at one time (I guess locking should be trivial)
I haven't written a multiple-chatroom application before. A simple client/server where both are communicating over a port is something I've written.
So here are the questions
1.) How is a multiple-chatroom application written? Is it also port-based?
2.) Showing the other person's every keystroke as they type is, I guess, possible through Ajax. Is there any other mechanism available?
Note : I'm going to use a python framework (web2py) but I don't think framework would matter here.
Any suggestions are welcome, thanks!
The Wikipedia entry for Comet (programming) has a pretty good overview of different approaches you can take on the client (assuming that your client's a web browser), and those approaches suggest the proper design for the server (assuming that the server's a web server).
One thing that's not mentioned on that page, but that you're almost certainly going to want to think about, is buffering input on the client. I don't think it's premature optimization to consider that a multi-user application in which every user's keystroke hits the server is going to scale poorly. I'd consider having user keystrokes go into a client-side buffer, and only sending them to the server when the user hasn't typed anything for 500 milliseconds or so.
You absolutely don't want to use ports for this. That's putting application-layer information in the transport layer, and it pushes application-level concerns (the application's going to create a new chat room) into transport-level concerns (a new port needs to be opened on the firewall).
Besides, a port's just a 16-bit field in the packet header. You can do the same thing in the design of your application's messages: put a room ID and a user ID at the start of each message, and have the server sort it all out.
The thing that strikes me as a pain about this is figuring out, when a client requests an update, what should be sent. The naive solution is to retain a buffer for each user in a room, and maintain an index into each (other) user's buffer as part of the user state; that way, when user A requests an update, the server can send down everything that users B, C, and D have typed since A's last request. This raises all kinds of issues about memory usage and persistence that don't have obvious simple solutions.
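A bare-bones sketch of that naive approach, just to make the bookkeeping concrete (all names are illustrative):

class Room(object):
    def __init__(self):
        self.buffers = {}    # writer_id -> list of keystroke batches
        self.cursors = {}    # reader_id -> {writer_id: batches already seen}

    def append(self, writer_id, text):
        self.buffers.setdefault(writer_id, []).append(text)

    def updates_for(self, reader_id):
        """Everything the other users typed since this reader's last poll."""
        seen = self.cursors.setdefault(reader_id, {})
        updates = {}
        for writer_id, buf in self.buffers.items():
            if writer_id == reader_id:
                continue
            start = seen.get(writer_id, 0)
            if len(buf) > start:
                updates[writer_id] = buf[start:]
                seen[writer_id] = len(buf)
        return updates

rooms = {}    # room_id -> Room; each incoming message carries (room_id, user_id, text)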
The right answers to the problems I've discussed here are going to depend on your requirements. Make sure those requirements are defined in great detail. You don't want to find yourself asking questions like "should I batch together keystrokes?" while you're building this thing.
You could try doing something like IRC, where the current "room" is sent from the client to the server "before" the text (/PRIVMSG #room-name Hello World), delimited by a space. For example, you could send ROOMNAME Sample text from the browser to the server.
Using AJAX would be the most reasonable option. I've never used web2py, but I'm guessing you could just use JSON to parse the data between the browser and the server, if you wanted to be fancy.

I'm looking for a network service that'll let me send messages to selected clients

I have a program which will be running on multiple devices on a network. These programs will need to send data between each other - to specified devices (not all devices).
server = server.Server('192.168.1.10')
server.identify('device1')
server.send('device2', 'this will be pickled and sent to device2')
That's some basic example code for what I need to do. Of course, it will also need to receive.
I was looking at building my own simple message passing server using Twisted when someone pointed me in the direction of MPI. I've never looked into the MPI protocol before and that website gives rather vague examples.
Is MPI a good approach? Are there better alternatives?
MPI is really good at doing the communications for running a tightly-coupled program across several or many machines in a cluster. If you're running very loosely coupled programs (only interacting occasionally), or the machines are more distributed than within a cluster, like scattered around a LAN, then MPI is probably not what you're looking for.
There are several Open Source message brokers that already handle this kind of stuff for you, and come with a full API ready to use.
You should take a look at:
ActiveMQ which has a Python Stomp client.
RabbitMQ has a Python client too - see Building RabbitMQ apps using Python.
You could build it yourself, but that would be reinventing the wheel (on a side note: I only realised I was halfway to building a message broker myself when I started looking at existing solutions; building one takes a lot of work).
Consider using something like ZeroMQ. It supports the most useful messaging idioms - push/pull, publish/subscribe and so on, and although it's not 100% clear from your question which one you need, I'm pretty sure you will find the answer there.
They have a great user guide here, and the Python bindings are well-developed and supported. Some code samples are here.
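For the use case in the question, a minimal pyzmq sketch might use PUB/SUB with the device name as the topic (the address comes from the question; everything else is made up):

import zmq

ctx = zmq.Context()

# hub on 192.168.1.10: the first frame names the target device
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5556")
pub.send_multipart([b"device2", b"this will be pickled and sent to device2"])

# on each device: subscribe only to messages addressed to it
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://192.168.1.10:5556")
sub.setsockopt(zmq.SUBSCRIBE, b"device2")
target, payload = sub.recv_multipart()

(PUB drops messages sent before a subscriber has connected, so in practice you would add a brief handshake or use a pattern like ROUTER/DEALER instead.)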
You can use MPI functions to create communication between different programs. In that case the server program publishes "MPI ports" with different IDs; clients look those ports up and try to connect to them, and only the server can accept each connection. Once the communication is established, the programs can exchange data.
Another possibility is to run the different programs under MPI's Multiple Instruction (MPMD) mode. In that case all the programs are launched at the same time, and there is no need to create port communicators; after launch, you can create particular communicators between whichever groups of programs you select.
Tell me which kind of method you need and I can provide C code implementing the functions.
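For the second option, a minimal mpi4py sketch (mpi4py is my assumption here; the original offer was C code):

# run as: mpiexec -n 2 python demo.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
if comm.Get_rank() == 0:
    comm.send({"to": "device2", "payload": "hello"}, dest=1, tag=0)   # pickled automatically
else:
    data = comm.recv(source=0, tag=0)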

OS-independent Inter-program communication between Python and C

I have very little idea what I'm doing here, I've never done anything like this before, but a friend and I are writing competing chess programs and they need to be able to communicate to each other.
He'll be writing mainly in C, the bulk of mine will be in Python, and I can see a few options:
Alternately write to a temp file, or successive temp files. As the communication won't be in any way bulky this could work, but it seems like an ugly workaround to me; the programs will have to keep checking for changed/new files.
Find some way of manipulating pipes, i.e. mine.py | ./his. This seems like a bit of a dead end.
Use sockets. But I don't know what I'd be doing, so could someone give me a pointer to some reading material? I'm not sure if there are OS-independent, language-independent methods. Would there have to be some kind of supervisor server program to administrate?
Use some kind of HTTP protocol, which seems like overkill. I don't mind the programs having to run on the same machine.
What do people recommend, and where can I start reading?
If you want and need truly OS independent, language independent inter process communication, sockets are probably the best option.
This will allow the two programs to communicate across machines, as well (without code changes).
For reading material, here's a Python Socket Programming How To.
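For a taste of what that looks like, here is a minimal TCP echo pair using only Python's standard library (host and port are arbitrary; the C side would do the same with the BSD socket API):

import socket

# server
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 9999))
listener.listen(1)
conn, addr = listener.accept()
move = conn.recv(1024)     # e.g. a move like "e2e4"
conn.sendall(move)         # acknowledge by echoing it back
conn.close()

# client
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 9999))
client.sendall("e2e4")
print(client.recv(1024))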
Two possibilities:
Use IP sockets. There are some examples in the Python docs. (Really not that hard if you just use the basic read/write stuff.) On the other hand, sockets in C are generally not that simple to use.
Create a third application. It launches both applications using subprocess and communicates with both applications through pipes. The chess applications must only be able to read/write to stdin/stdout.
This has the additional benefit that this application can check whether a move is legal. That helps you find bugs and keeps the games fair.
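A sketch of that referee process, assuming each engine writes one move per line on stdout (mine.py and ./his are the programs from the question):

import subprocess

PIPE = subprocess.PIPE
white = subprocess.Popen(["python", "mine.py"], stdin=PIPE, stdout=PIPE)
black = subprocess.Popen(["./his"], stdin=PIPE, stdout=PIPE)

engines, turn = [white, black], 0
while True:
    move = engines[turn].stdout.readline()   # e.g. "e2e4\n"
    if not move:
        break                                # engine exited or resigned
    # ...a legality check against the rules of chess would go here...
    engines[1 - turn].stdin.write(move)
    engines[1 - turn].stdin.flush()
    turn = 1 - turn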
You can use Protobuf as the inter-program protocol, and have each program read/write a file in turn.
You may read the intermediate file every n seconds.
Once you have this working, you may move to use sockets, where each program would start a server and wait for connections.
The change should be small, because the protocol would be protobuf already. So, the only place you have to change is where you either read from a socket or from a file.
In either case you'll need an interchange protocol.
edit
Oops, I misread and thought it was C++.
Anyway, here's the C support for protobuf, but it is still a work in progress:
http://code.google.com/p/protobuf-c/
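The Python side would look roughly like this, assuming a hypothetical chess.proto compiled with protoc --python_out=. (the message layout is made up):

# chess.proto (assumed):
#   message Move { required string player = 1; required string san = 2; }
import chess_pb2

move = chess_pb2.Move(player="white", san="e4")
open("turn.bin", "wb").write(move.SerializeToString())   # the file both programs share

received = chess_pb2.Move()
received.ParseFromString(open("turn.bin", "rb").read())
print(received.san)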
I would say just write an XML file that contains the moves for black and white. Mark in a separate file whose turn it is, and make sure only the program whose turn it is will write to that file to commit its turn.
Here is a link to a proposed XML format for storing your moves that another group came up with:
http://www.xml.com/pub/a/2004/08/25/tourist.html
Sockets with a client/server model...
Basically, you and your friend are creating different implementations of the client.
The local client shows a visual representation of the game and stores the state of the pieces (position, killed/not-killed) and the rules about what the pieces can/can't do (which moves can be made with which pieces and whether the board's state is in check).
The remote server stores state about the players (whose turn it is, points earned, whether the game is won or not), and a listing of moves that have occurred.
When you make a move, your client validates the move against the rules of the game, then sends a message to the server that says: I've made this move, your turn.
The other client sees that a turn has been made, pulls the last move from the server, calculates where the movement took place, validates the move against the game rules, and replays the action locally. Once that's all done, it allows the user to make the next move (or not, if the game is over).
The most important part of client/server gaming communication is to send as little data to, and store as little state on, the server as possible. That way you can play locally or across the world with little or no latency. As long as your client runs under the same set of rules as your opponent's client, everything should work.
If you want to ensure that no one can cheat by hacking their version of the client, you can make the position and rule calculations all be done on the server and just make the clients nothing but simple playback mechanisms.
The reasons why sockets are the best communication medium are:
the limitations on cross-process communication make it almost as difficult as cross-node communication
networking is widely supported on all systems
there's little or no barrier-of-entry to using this remotely if you choose
the networking is robust, flexible, and proven
That's part of the reason why many major systems, like databases, use sockets as both a network and a local communication medium.
If both applications are running on the same computer, use sockets and serialize your objects to JSON. Otherwise, use a web service and JSON or XML. You can find JSON and XML parsers in both languages.

Best Python supported server/client protocol?

I'm looking for a good server/client protocol supported in Python for making data requests/file transfers between one server and many clients. Security is also an issue - so secure login would be a plus. I've been looking into XML-RPC, but it looks to be a pretty old (and possibly unused these days?) protocol.
If you are looking to do file transfers, XMLRPC is likely a bad choice. It will require that you encode all of your data as XML (and load it into memory).
"Data requests" and "file transfers" sounds a lot like plain old HTTP to me, but your statement of the problem doesn't make your requirements clear. What kind of information needs to be encoded in the request? Would a URL like "http://yourserver.example.com/service/request?color=yellow&flavor=banana" be good enough?
There are lots of HTTP clients and servers in Python, none of which are especially great, but all of which I'm sure will get the job done for basic file transfers. You can do security the "normal" web way, which is to use HTTPS and passwords, which will probably be sufficient.
If you want two-way communication then HTTP falls down, and a protocol like Twisted's perspective broker (PB) or asynchronous messaging protocol (AMP) might suit you better. These protocols are certainly well-supported by Twisted.
ProtocolBuffers was released by Google as a way of serializing data in a very compact efficient way. They have support for C++, Java and Python. I haven't used it yet, but looking at the source, there seem to be RPC clients and servers for each language.
I personally have used XML-RPC on several projects, and it always did exactly what I was hoping for. I was usually going between C++, Java and Python. I use libxmlrpc in Python often because it's easy to memorize and type interactively, but it is actually much slower than the alternative pyxmlrpc.
PyAMF is mostly for RPC with Flash clients, but it's a compact RPC format worth looking at too.
When you have Python on both ends, I don't believe anything beats Pyro (Python Remote Objects.) Pyro even has a "name server" that lets services announce their availability to a network. Clients use the name server to find the services it needs no matter where they're active at a particular moment. This gives you free redundancy, and the ability to move services from one machine to another without any downtime.
For security, I'd tunnel over SSH, or use TLS or SSL at the connection level. Of course, all these options are essentially the same, they just have various difficulties of setup.
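A minimal sketch of that, using Pyro4's API (the service name is made up, and the pyro4-ns name server must already be running):

import Pyro4

# server
@Pyro4.expose
class Greeter(object):
    def hello(self, name):
        return "Hello, %s" % name

daemon = Pyro4.Daemon()
uri = daemon.register(Greeter())
Pyro4.locateNS().register("example.greeter", uri)   # announce it to the network
daemon.requestLoop()

# client, from any machine that can reach the name server
greeter = Pyro4.Proxy("PYRONAME:example.greeter")
print(greeter.hello("world"))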
Pyro (Python Remote Objects) is fairly clever if all your servers/clients are going to be in Python. I use XMPP a lot, though, since I'm communicating with hosts that are not always Python. XMPP lends itself to being extended fairly easily too.
There is an excellent XMPP library for Python called PyXMPP, which is reasonably up to date and has no dependency on Twisted.
I suggest you look at:
1. XMLRPC
2. JSONRPC
3. SOAP
4. REST/ATOM
XMLRPC is a valid choice. Don't worry that it is too old; that is not a problem. It is so simple that little has needed changing since the original specification. The pro is that in every programming language I know of there is a library for writing a client. Certainly for Python. I made it work with mod_python and had no problem at all.
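To show just how simple: a complete server and client using only the standard library (Python 2 module names; on Python 3 they live in xmlrpc.server and xmlrpc.client):

# server
from SimpleXMLRPCServer import SimpleXMLRPCServer

def add(a, b):
    return a + b

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)   # allow_none lets None through
server.register_function(add)
server.serve_forever()

# client
import xmlrpclib
proxy = xmlrpclib.ServerProxy("http://localhost:8000/", allow_none=True)
print(proxy.add(2, 3))    # 5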
The big problem with it is its verbosity. For simple values there is a lot of XML overhead. You can gzip it of course, but then you lose some debugging ability with tools like Fiddler.
My personal preference is JSONRPC. It has all of the XMLRPC advantages and is very compact. Further, JavaScript clients can "eval" it so no parsing is necessary. Most implementations are built for version 1.0 of the standard. I have seen diverse attempts to improve on it, called 1.1, 1.2 and 2.0, but they are not built one on top of another and, to my knowledge, are not widely supported yet. 2.0 looks the best, but I would still stick with 1.0 for now (October 2008).
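To illustrate the compactness, a JSON-RPC 1.0 exchange is nothing more than this (sketched with the stdlib json module; the method name and values are made up):

import json

request = json.dumps({"method": "add", "params": [2, 3], "id": 1})
# on the wire: {"method": "add", "params": [2, 3], "id": 1}
response = '{"result": 5, "error": null, "id": 1}'   # what the server returns
print(json.loads(response)["result"])                # 5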
The third candidate would be REST/ATOM. REST is a principle, and ATOM is how you convey the bulk of the data for POST and PUT requests and GET responses.
For a very nice implementation of it, look at GData, Google's API. Real, real nice.
SOAP is old, and lots of libraries/languages support it. It is heavy and complicated, but if your primary clients are .NET or Java, it might be worth the bother.
Visual Studio would import your WSDL file and create a wrapper, and to a C# programmer it would look like a local assembly indeed.
The nice thing about all this is that if you architect your solution right, existing libraries for Python will let you support more than one of them with almost no overhead. XMLRPC and JSONRPC are an especially good match.
Regarding authentication: XMLRPC and JSONRPC don't bother defining one. It is independent of the serialization, so you can implement Basic authentication, Digest authentication or your own with any of those. I have seen a couple of examples of client-side Digest authentication for Python, but have yet to see a server-side one. If you use Apache, you might not need one, using the mod_auth_digest Apache module instead. This depends on the nature of your application.
Transport security: it is obviously SSL (HTTPS). I can't currently remember how XMLRPC deals with it, but with the JSONRPC implementation that I have it is trivial: you merely change http to https in your JSONRPC URLs and it will go over an SSL-enabled transport.
HTTP seems to suit your requirements and is very well supported in Python.
Twisted is good for serious asynchronous network programming in Python, but it has a steep learning curve, so it might be worth using something simpler unless you know your system will need to handle a lot of concurrency.
To start, I would suggest using urllib for the client and a WSGI service behind Apache for the server. Apache can be set up to deal with HTTPS fairly simply.
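A minimal version of that pairing, with wsgiref standing in for Apache during development (file name and port are made up):

# server: the WSGI side
from wsgiref.simple_server import make_server

def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "application/octet-stream")])
    return [open("payload.bin", "rb").read()]

make_server("", 8000, app).serve_forever()

# client
import urllib2
print(len(urllib2.urlopen("http://localhost:8000/").read()))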
SSH can be a good choice for file transfer and remote control, especially if you are concerned with secure login. Most Linux and Solaris servers will already run an SSH service for administration, so if your Python program uses SSH then you don't need to open up any additional ports or services on remote machines.
OpenSSH is the standard and portable SSH client and server, and can be used via subprocesses from Python. If you want more flexibility, Twisted includes Twisted Conch, an SSH client and server implementation which provides flexible, programmable control of an SSH stack, on both Linux and Windows. I use both in production.
I'd use HTTP and start by understanding what the Python standard library offers.
Then I'd move on to the more industrial-strength Twisted library.
There is no need to use HTTP (indeed, HTTP is not good for RPC in general in some respects), and no need to use a standards-based protocol if you're talking about a python client talking to a python server.
Use a Python-specific RPC library such as Pyro, or what Twisted provides (Twisted.spread).
XMLRPC is very simple to get started with, and at my previous job, we used it extensively for intra-node communication in a distributed system. As long as you keep track of the fact that the None value can't be easily transferred, it's dead easy to work with, and included in Python's standard library.
Run it over HTTPS and add a username/password parameter to all calls, and you'll have simple security in place. I'm not sure how easy it is to verify the server certificate in Python, though.
However, if you are transferring large amounts of data, the coding into XML might become a bottleneck, so using a REST-inspired architecture over https may be as good as xmlrpclib.
Facebook's Thrift project may be a good answer. It uses a lightweight protocol to pass objects around and allows you to use any language you wish. It may fall down on security, though, as I believe there is none.
In the RPC field, Json-RPC will bring a big performance improvement over xml-rpc:
http://json-rpc.org/wiki/python-json-rpc
