In django-ipware version 2.1, the old get_real_ip function is deprecated. When I use the new get_client_ip, my unit tests do not show the same results, meaning the two functions do not behave the same.
The following is an original test from the django-ipware test suite (not mine):
def test_http_x_forwarded_for_multiple(self):
    request = HttpRequest()
    request.META = {
        'HTTP_X_FORWARDED_FOR': '192.168.255.182, 10.0.0.0, 127.0.0.1, 198.84.193.157, 177.139.233.139',
        'HTTP_X_REAL_IP': '177.139.233.132',
        'REMOTE_ADDR': '177.139.233.133',
    }
    ip = get_real_ip(request)
    self.assertEqual(ip, "198.84.193.157")
The above works fine, of course, but I want to ensure that the new get_client_ip gives the same results (for system-upgrade purposes). However, the test actually fails the assertion:
def test_http_x_forwarded_for_multiple(self):
    request = HttpRequest()
    request.META = {
        'HTTP_X_FORWARDED_FOR': '192.168.255.182, 10.0.0.0, 127.0.0.1, 198.84.193.157, 177.139.233.139',
        'HTTP_X_REAL_IP': '177.139.233.132',
        'REMOTE_ADDR': '177.139.233.133',
    }
    ip, is_routable = get_client_ip(request)
    self.assertEqual(ip, "198.84.193.157")
resulting in:
AssertionError: '177.139.233.132' != '198.84.193.157'
After digging into the code, I found that the new get_client_ip does not iterate over the IPs inside each META entry the way get_real_ip does. It checks the left-most IP (or the right-most, depending on the settings) and skips to the next META entry if a public IP is not found.
My questions now are:
How can I call get_client_ip in a way that returns the same IP returned by get_real_ip? What is the logic behind changing the behavior of the function? Should I trust the new get_client_ip and forget about get_real_ip, or keep using the deprecated get_real_ip and forget about the new get_client_ip?
We should forget about the old get_real_ip function and its behavior. See the author's reply here: https://github.com/un33k/django-ipware/issues/45#issuecomment-421572304
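For reference, a hedged sketch of the new call style (the keyword names follow the ipware 2.x README as I recall it; treat them as assumptions and check your installed version):

from ipware import get_client_ip

# get_client_ip always returns a (ip, is_routable) tuple; by default the
# best-match, left-most candidate is taken.
ip, is_routable = get_client_ip(request)

# Possible per-call tuning (argument names assumed from the 2.x README):
ip, is_routable = get_client_ip(request, proxy_order='right-most')
ip, is_routable = get_client_ip(request, proxy_count=1)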
I am learning how web apps work, and after successfully creating a connection between the front and back end, I managed to perform a GET request with axios:
The route in my Flask app:
@app.route('/api/random')
def random_number():
    k = kokos()
    print(k)
    response = {'randomNumber': k}
    return jsonify(response)
My kokos() function:
def kokos():
    return 890
The function I call to get data from the backend:
getRandomFromBackend () {
  const path = `http://localhost:5000/api/random`
  axios.get(path)
    .then(response => { this.randomNumber = response.data.randomNumber })
    .catch(error => {
      console.log(error)
    })
}
Now suppose I have an input field in my app whose value I want to use in the kokos() function, to affect the result and what is displayed in my app. Can someone explain to me how to do that?
Is this what POST requests are for, and do I have to POST first and then GET? Or can I still use GET and somehow pass "arguments"? Is this even what GET and POST are for, or am I making it too complicated for myself?
Is this the proper way to do this kind of thing? I already have a lot of code written in Python and simply want to exchange data between server and client.
Thank you, Jakub
You can add a second argument:
axios.get(path, {
  params: {
    id: 122
  }
})
.then ...
You can pass id (or anything else) like this; it will be available in the GET params on the Python side, just as if it were passed in the URL.
Python side (Flask): http://flask.pocoo.org/docs/1.0/quickstart/#accessing-request-data
To access parameters submitted in the URL (?key=value) you can use the args attribute:
from flask import request

@app.route('/api/random')
def random_number():
    id = request.args.get('id', '')
    k = kokos(id)
id will be passed to the kokos function; if no id is provided, it will be blank ('').
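Putting both pieces together, here is a minimal, self-contained sketch of the Flask side (the route and kokos() follow the question; treating the value as an int and the query-parameter name id are assumptions):

from flask import Flask, jsonify, request

app = Flask(__name__)

def kokos(user_value):
    # Hypothetical: use the client-supplied value to affect the result.
    return 890 + user_value

@app.route('/api/random')
def random_number():
    # Query params arrive as strings; the int() conversion is an assumption.
    user_value = int(request.args.get('id', 0))
    return jsonify({'randomNumber': kokos(user_value)})

if __name__ == '__main__':
    app.run(port=5000)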
You can read the axios documentation to make more complex requests: https://github.com/axios/axios
If you have any doubts, please comment.
I recently acquired an ACS Linear Actuator (Tolomatic Stepper) that I am attempting to send data to from a Python application. The device itself communicates using the EtherNet/IP protocol.
I have installed the cpppo library via pip. When I issue a command in an attempt to read the status of the device, I get None back. Examining the communication with Wireshark, I see that it appears to proceed correctly; however, I notice a response from the device indicating:
Service not supported.
Example of the code I am using to test reading an "Input Assembly":
from cpppo.server.enip import client

HOST = "192.168.1.100"
TAGS = ["#4/100/3"]

with client.connector(host=HOST) as conn:
    for index, descr, op, reply, status, value in conn.synchronous(
            operations=client.parse_operations(TAGS)):
        print(": %20s: %s" % (descr, value))
I am expecting to get an "Input Assembly" read, but it does not appear to be working that way. I imagine that I am missing something, as this is the first time I have attempted EtherNet/IP communication.
I am not sure how to proceed, or what I am missing about EtherNet/IP that would make this work correctly.
clutton -- I'm the author of the cpppo module.
Sorry for the delayed response. We only recently implemented the ability to communicate with simple (non-routing) CIP devices. The ControlLogix/CompactLogix controllers implement an expanded set of EtherNet/IP CIP capability, something that most simple CIP devices do not. Furthermore, they typically also do not implement the *Logix "Read Tag" request; you have to struggle by with the basic "Get Attribute Single/All" requests -- which just return raw, 8-bit data. It is up to you to turn that back into a CIP REAL, INT, DINT, etc.
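For illustration, a small hedged sketch of that conversion step (the payload bytes here are made up; CIP REAL and DINT are little-endian 32-bit values, which Python's standard struct module can decode):

import struct

# Hypothetical 4-byte payload as returned by a "Get Attribute Single" request
raw = bytes([0x00, 0x00, 0x80, 0x3F])

(real_value,) = struct.unpack('<f', raw)  # CIP REAL: little-endian float32 -> 1.0
(dint_value,) = struct.unpack('<i', raw)  # CIP DINT: little-endian int32   -> 1065353216
print(real_value, dint_value)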
In order to communicate with your linear actuator, you will need to disable these enhanced encapsulations, and use "Get Attribute Single" requests. This is done by specifying an empty route_path=[] and send_path='', when you parse your operations, and to use cpppo.server.enip.getattr's attribute_operations (instead of cpppo.server.enip.client's parse_operations):
from cpppo.server.enip import client
from cpppo.server.enip.getattr import attribute_operations

HOST = "192.168.1.100"
TAGS = ["#4/100/3"]

with client.connector(host=HOST) as conn:
    for index, descr, op, reply, status, value in conn.synchronous(
            operations=attribute_operations(
                TAGS, route_path=[], send_path='' )):
        print(": %20s: %s" % (descr, value))
That should do the trick!
We are in the process of rolling out a major update to the cpppo module, so clone the https://github.com/pjkundert/cpppo.git Git repo and check out the feature-list-identity branch to get early access to much better APIs for accessing raw data from these simple devices, for testing. You'll be able to use cpppo to convert the raw data into CIP REALs, instead of having to do it yourself...
...
With Cpppo >= 3.9.0, you can now use the much more powerful cpppo.server.enip.get_attribute 'proxy' and 'proxy_simple' interfaces to routing CIP devices (e.g. ControlLogix, CompactLogix) and non-routing "simple" CIP devices (e.g. MicroLogix, PowerFlex, etc.):
$ python
>>> from cpppo.server.enip.get_attribute import proxy_simple
>>> product_name, = proxy_simple( '10.0.1.2' ).read( [('#1/1/7','SSTRING')] )
>>> product_name
[u'1756-L61/C LOGIX5561']
If you want regular updates, use cpppo.server.enip.poll:
import logging
import sys
import time
import threading

from cpppo.server.enip import poll
from cpppo.server.enip.get_attribute import proxy_simple as device

params = [('#1/1/1', 'INT'), ('#1/1/7', 'SSTRING')]
# If you have an A-B PowerFlex, try:
# from cpppo.server.enip.ab import powerflex_750_series as device
# params = ["Motor Velocity", "Output Current"]
hostname = '10.0.1.2'
values = {}  # { <parameter>: <value>, ... }
poller = threading.Thread(
    target=poll.poll, args=(device,), kwargs={
        'address': (hostname, 44818),
        'cycle': 1.0,
        'timeout': 0.5,
        'process': lambda par, val: values.update({par: val}),
        'params': params,
    })
poller.daemon = True
poller.start()

# Monitor the values dict (updated in another Thread)
while True:
    while values:
        logging.warning("%16s == %r", *values.popitem())
    time.sleep(.1)
And, Voila! You now have regularly updating parameter names and values in your 'values' dict. See the examples in cpppo/server/enip/poll_example*.py for further details, such as how to report failures, control exponential back-off of connection retries, etc.
Version 3.9.5 has recently been released, which has support for writing to CIP Tags and Attributes, using the cpppo.server.enip.get_attribute proxy and proxy_simple APIs. See cpppo/server/enip/poll_example_many_with_write.py
Hope this is obvious, but accessing HOST = "192.168.1.100" will only be possible from a system located on the subnet 192.168.1.*.
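If in doubt, a quick hedged check with the standard-library ipaddress module (the addresses mirror the example above; the /24 mask is an assumption):

import ipaddress

# Is the device's address on the same /24 as my machine's address?
device = ipaddress.ip_address('192.168.1.100')
print(device in ipaddress.ip_network('192.168.1.0/24'))  # True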
I'm creating unit tests for my views using Django's built-in Test Client to create mock requests.
The view I'm calling should create an object in the database. However, when I query the database from within the test method the object isn't there - it either hasn't been created or has been discarded on returning from the view.
Here's the view:
def apply_to_cmp(request, campaign_id):
    """ Creates a new Application to 'campaign_id' for request.user """
    campaign = Campaign.objects.get(pk = campaign_id)
    if not Application.objects\
            .filter(campaign = campaign, user = request.user)\
            .exists():
        application = Application(**{'campaign' : campaign,
                                     'user' : request.user})
        application.save()
    return HttpResponseRedirect(request.META.get('HTTP_REFERER'))
This is the test that calls it:
def test_create_campaign_app(self):
    """ Calls the method apply_to_cmp in .views """
    c = Client()
    c.login(username = self.username, password = self.password)
    url = '/campaign/' + self.campaign.id + '/apply/'
    response = c.get(url)
    # Check whether request was successful (should return 302: redirect)
    self.assertEqual(response.status_code, 302)
    # Verify that an Application object was created
    app_count = Application.objects\
        .filter(user = self.user, campaign = self.campaign)\
        .count()
    self.assertEqual(app_count, 1)
This is the output from running the test:
Traceback (most recent call last):
  File "/test_views.py", line 40, in test_create_campaign_app
    self.assertEqual(app_count, 1)
AssertionError: 0 != 1
The method apply_to_cmp is definitely being called, since response.status_code == 302, but still the Application object is not created. What am I doing wrong?
Edit: Solution
Client.login failed because the login system was not properly initialised in the setUp method. I fixed this by calling call_command('loaddata', 'initial_data.json'), with initial_data.json containing the setup for the login system. Also, HttpResponseRedirect(request.META.get('HTTP_REFERER')) didn't work, for the obvious reason that the test request carries no HTTP_REFERER header. I changed that bit to
if request.META.get('HTTP_REFERER'):
    return HttpResponseRedirect(request.META.get('HTTP_REFERER'))
return HttpResponse()
And changed the test accordingly to
self.assertEqual(response.status_code, 200)
Thanks for your help!
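For completeness, a minimal sketch of the setUp fix described above (the fixture name follows the edit; the test-class name is hypothetical):

from django.core.management import call_command
from django.test import TestCase

class CampaignViewTests(TestCase):
    def setUp(self):
        # Load the login system's initial data so Client.login() can succeed.
        call_command('loaddata', 'initial_data.json')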
Nothing stands out as particularly wrong with your code, but clearly either your test case or the code you are testing is not working the way you think. It is now time to question your assumptions.
The method apply_to_cmp is definitely being called, since response.status_code == 302
This is your first assumption, and it may not be correct. You might get a better picture of what is happening if you examine other details in the response object. For example, check the response.redirect_chain and confirm that it actually redirects where you expect it to:
response = c.get(url, follow=True)
self.assertEqual(response.redirect_chain, [<expected output here>])
What about other details? I can't see where self.username and self.password are defined from the code you provided. Are you 100% sure that your test code to log in worked? c.login() returns True or False to indicate whether the login was successful. In my test cases, I like to confirm that the login succeeds.
login_success = c.login(username = self.username, password = self.password)
self.assertTrue(login_success)
You can also be a bit more general. You find nothing if you check Application.objects.filter(user=self.user, campaign=self.campaign), but what about checking Application.objects.all()? You know that a specific item isn't in your database, but do you know what is stored in the database (if anything at all) in the test code at that time? Do you expect other items in there? Check to confirm that what you expect is true.
I think you can solve this one, but you'll need to be a bit more aggressive in your analysis of your test case, rather than just seeing that your app_count variable doesn't equal 1. Examine your response object, put in some debug statements, and question every assumption.
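Putting those checks together, a hedged sketch of a more aggressive version of the test (attribute names such as self.username, self.user, and self.campaign follow the question; setUp is omitted):

from django.test import Client, TestCase

class CampaignAppTests(TestCase):
    def test_create_campaign_app(self):
        c = Client()
        # Fail fast if the login itself is broken
        self.assertTrue(c.login(username=self.username, password=self.password))
        response = c.get('/campaign/%s/apply/' % self.campaign.id, follow=True)
        # Where did the redirect actually go?
        print(response.redirect_chain)
        # What, if anything, is in the test database?
        print(Application.objects.all())
        app_count = Application.objects.filter(
            user=self.user, campaign=self.campaign).count()
        self.assertEqual(app_count, 1)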
First of all, if you are subclassing from django.test.TestCase, take into consideration that each test is wrapped in a transaction (official docs).
Then, you can add DB logging to your project to see whether there was a hit to the database or not (official docs); a sketch follows below.
And finally be sure that you're using correct lookups at this line: filter(user = self.user, campaign = self.campaign)
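As a sketch of the logging suggestion above (a standard Django LOGGING config; note that Django only emits SQL to the django.db.backends logger when settings.DEBUG is True, so in tests you may need to override that setting):

LOGGING = {
    'version': 1,
    'handlers': {'console': {'class': 'logging.StreamHandler'}},
    'loggers': {
        'django.db.backends': {
            'handlers': ['console'],
            'level': 'DEBUG',
        },
    },
}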
I'm writing a Thrift service in Python and I would like to understand how I can get the client's IP address in the handler function's context.
Thanks,
Love.
You need to obtain the transport, and get the data from there. Not sure how to do this exactly in Python, but there's a mailing list thread and there's this JIRA-ticket THRIFT-1053 describing a solution for C++/Java.
This is the relevant part from the mailing list thread:
I did it by decorating the TProcessor, like this pseudo-code.
-craig
class TrackingProcessor implements TProcessor {
    TrackingProcessor (TProcessor processor) {this.processor=processor;}

    public boolean process(TProtocol in, TProtocol out) throws TException {
        TTransport t = in.getTransport();
        InetAddress ia = t instanceof TSocket ?
                ((TSocket)t).getSocket().getInetAddress() : null;
        // Now you have the IP address, so do whatever you want with it.
        // Delegate to the processor we are decorating.
        return processor.process(in,out);
    }
}
This is a bit old but I'm currently solving the same problem.
Here's my solution with thriftpy:
import thriftpy
from thriftpy.thrift import TProcessor, TApplicationException, TType
from thriftpy.server import TThreadedServer
from thriftpy.protocol import TBinaryProtocolFactory
from thriftpy.transport import TBufferedTransportFactory, TServerSocket


class CustomTProcessor(TProcessor):
    def process_in(self, iprot):
        api, type, seqid = iprot.read_message_begin()
        if api not in self._service.thrift_services:
            iprot.skip(TType.STRUCT)
            iprot.read_message_end()
            return api, seqid, TApplicationException(TApplicationException.UNKNOWN_METHOD), None  # noqa
        args = getattr(self._service, api + "_args")()
        args.read(iprot)
        iprot.read_message_end()
        result = getattr(self._service, api + "_result")()
        # convert kwargs to args
        api_args = [args.thrift_spec[k][1] for k in sorted(args.thrift_spec)]
        # get client IP address
        client_ip, client_port = iprot.trans.sock.getpeername()

        def call():
            f = getattr(self._handler, api)
            return f(*(args.__dict__[k] for k in api_args), client_ip=client_ip)

        return api, seqid, result, call


class PingPongDispatcher:
    def ping(self, param1, param2, client_ip):
        return "pong %s" % client_ip


pingpong_thrift = thriftpy.load("pingpong.thrift")
processor = CustomTProcessor(pingpong_thrift.PingService, PingPongDispatcher())
server_socket = TServerSocket(host="127.0.0.1", port=12345, client_timeout=10000)
server = TThreadedServer(processor,
                         server_socket,
                         iprot_factory=TBinaryProtocolFactory(),
                         itrans_factory=TBufferedTransportFactory())
server.serve()
Remember that every method in the dispatcher will be called with an extra parameter, client_ip.
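For clarity, a hedged client-side sketch using thriftpy's make_client (the .thrift file is assumed to define PingService with ping(param1, param2); note the client never passes client_ip, the custom processor injects it server-side):

import thriftpy
from thriftpy.rpc import make_client

pingpong_thrift = thriftpy.load("pingpong.thrift")
client = make_client(pingpong_thrift.PingService, '127.0.0.1', 12345)
print(client.ping('a', 'b'))  # -> "pong 127.0.0.1"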
The only way I found to get the TProtocol in the service handler is to extend the processor and create one handler instance for each client, tied to its transport/protocol. Here's an example:
public class MyProcessor implements TProcessor {
    // Maps remote socket addresses to processors
    private static final Map<String, Processor<ServiceHandler>> PROCESSORS =
            Collections.synchronizedMap(new HashMap<String, Processor<ServiceHandler>>());
    // Maps remote socket addresses to handlers
    private static final Map<String, ServiceHandler> HANDLERS =
            Collections.synchronizedMap(new HashMap<String, ServiceHandler>());

    @Override
    public boolean process(final TProtocol in, final TProtocol out)
            throws TException {
        // Get the socket for this request
        final TTransport t = in.getTransport();
        // Note that this cast will fail if the transport is not a socket,
        // so you might want to add some checking.
        final TSocket socket = (TSocket) t;
        // Use the remote address as the map key
        final String clientRemote = socket.getSocket().getRemoteSocketAddress().toString();
        // Get the existing processor for this socket, if any
        Processor<ServiceHandler> processor = PROCESSORS.get(clientRemote);
        // If there's no processor, create a processor and a handler for
        // this client and link them to this new socket
        if (processor == null) {
            // Inform the handler of its socket
            final ServiceHandler handler = new ServiceHandler(socket);
            processor = new Processor<ServiceHandler>(handler);
            PROCESSORS.put(clientRemote, processor);
            HANDLERS.put(clientRemote, handler);
        }
        return processor.process(in, out);
    }
}
Then you need to tell Thrift to use this processor for incoming requests. For a TThreadPoolServer it goes like this:
final TThreadPoolServer.Args args = new TThreadPoolServer.Args(new TServerSocket(port));
args.processor(new MyProcessor());
final TThreadPoolServer server = new TThreadPoolServer(args);
The PROCESSORS map might look superfluous, but it is not since there's no way to get the handler for a processor (i.e. there's no getter).
Note that it is your ServiceHandler instance that needs to keep track of which socket it is associated with. Here I pass it in the constructor, but any way will do. Then, when the ServiceHandler's IFace implementation is called, it will already have the associated socket.
This also means you will have an instance of MyProcessor and ServiceHandler for each connected client, which I think is not the case with base Thrift, where only one instance of each class is created.
This solution also has a quite annoying drawback: you need to figure out a way to remove obsolete data from the PROCESSORS and HANDLERS maps, otherwise they will grow indefinitely. In my case each client has a unique ID, so I can check whether there are obsolete sockets for that client and remove them from the maps.
PS: the Thrift guys need to figure out a way to let the service handler get the protocol used for the current call (for example, by allowing you to extend a base class instead of implementing an interface). This is very useful in so many scenarios.
I'm using python-raven to automatically capture all 500s in my Django project, and it works great. I'm also forwarding some exceptions that I handle myself, tagging them with a special tag so that I can filter them out. The problem is that I cannot filter for messages that are missing a particular tag, so I'd like to set a default tag for all messages, but I can't get it to work.
I've tried the following but it's just ignored.
RAVEN_CONFIG = {
    'dsn': 'udp://x:y#z:q/w',
    'tags': {'testtag': 'value'},
}
Does anyone know how to send a default tag to sentry?
The following works for me in the settings module.
It adds the environment variable DJANGO_ENVDIR to every message being sent.
import os

from raven.contrib.django.client import DjangoClient as RavenDjangoClient

class SentryDjangoClient(RavenDjangoClient):
    def build_msg(self, *args, **kwargs):
        data = super(SentryDjangoClient, self).build_msg(*args, **kwargs)
        data['tags']['ENVDIR'] = os.environ.get('DJANGO_ENVDIR', 'unset')
        return data

if get_env_var('SENTRY_DSN', False):  # get_env_var: this project's settings helper
    RAVEN_CONFIG = {
        'dsn': get_env_var('SENTRY_DSN'),
        # NOTE: timeout set via DSN
    }
    SENTRY_CLIENT = 'project.settings.base.SentryDjangoClient'
You need to adjust the SENTRY_CLIENT setting according to where you put the SentryDjangoClient class, which extends the build_msg method.
You could assign a different name to the logger, before logging the message. That would allow you to filter by the "logger" in the Sentry server.
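For example, a minimal sketch with the standard logging module (the logger name is arbitrary/hypothetical, and risky_operation stands in for your handled code):

import logging

# Messages sent under this name can be filtered by "logger" in Sentry
logger = logging.getLogger('myproject.handled')

try:
    risky_operation()  # hypothetical
except Exception:
    logger.error('Handled exception forwarded to Sentry', exc_info=True)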
After looking into it a bit more, it seems like this is not at all possible ATM.