I am trying to mock up an API and am using separate apps within Django to represent different web services. I would like App A to take in a link that corresponds to App B and parse the JSON response.
Is there a way to dynamically construct the URL to App B so that I can test the code in development and not have to change too much before going into production? The problem is that I can't use localhost as part of a link.
I am currently using urllib, but eventually I would like to do something less hacky and better fitting with the web services REST paradigm.
You could do something like
from django.conf import settings

if settings.DEBUG:
    other = "localhost"
else:
    other = "somehost"
and use other to build the external URL. Generally you code in DEBUG mode and deploy in non-DEBUG mode. settings.DEBUG is a 'standard' Django thing.
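Building on that, here is a minimal sketch of how the chosen host might be used with urllib (which the question mentions) to call App B and parse the JSON. The ":8000" port and the /api/from-app-b/ path are placeholders for illustration, not anything from the original setup:
import json
from urllib.request import urlopen  # Python 3's urllib

from django.conf import settings

# Same idea as above: only the host changes between environments.
# ":8000" assumes the dev server's default port; "somehost" stands in for production.
other = "localhost:8000" if settings.DEBUG else "somehost"

# "/api/from-app-b/" is a placeholder for whatever endpoint App B exposes.
with urlopen("http://%s/api/from-app-b/" % other) as response:
    data = json.loads(response.read().decode("utf-8"))
Later you can swap urlopen for something like the requests library without touching the host-selection logic.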
By "separate apps within Django" do you mean separate applications with a common settings? That is to say, two applications within the same Django site (or project)?
If so, the {% url %} tag will generate a proper absolute URL to any of the apps listed in the settings file.
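For example, assuming App B exposes a named URL pattern (the appb namespace and detail name below are made up for illustration), the same lookup works in Python with reverse() and in templates with {% url %}:
# appb/urls.py -- hypothetical URLconf for App B
from django.urls import path
from . import views

app_name = "appb"
urlpatterns = [
    path("detail/<int:pk>/", views.detail, name="detail"),
]

# Anywhere in App A's Python code:
from django.urls import reverse

path_to_b = reverse("appb:detail", args=[42])
# e.g. "/appb/detail/42/", depending on how the project URLconf includes appb.urls
# The template equivalent is: {% url 'appb:detail' 42 %}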
If there are separate Django servers with separate settings, you have the standard internet problem of URI design. Your URIs can be consistent with only the hostname changing.
- http://localhost/some/path - development
- http://123.45.67.78/some/path - someone's laptop who's running a server for testing
- http://qa.mysite.com/some/path - QA
- http://www.mysite.com/some/path - production
You never need to provide the host information, so all of your links are <A HREF="/some/path/">.
This, generally, works out the best. You can have someone's random laptop act as a test server; you can get its IP address using ifconfig.
My GCP app has been abused by some users. To stop their usage I have attempted to eliminate features that can be abused, and have employed firewall rules to block certain users. But bad users continue to try to access my app via certain legacy URLs such as myapp.appspot.com/badroute. Of course, I still want users to use the default URL myapp.appspot.com .
I have altered my code in the following manner, but I am still getting Instances to start from them, and I do not want Instances in such cases. What can I do differently to avoid the bad Instances starting OR is there anything I can do to force such Instances to stop quickly instead of after about 15 minutes?
import logging
import webapp2

class Dummy(webapp2.RequestHandler):
    def get(self):
        # Log the hit and bounce the request back to the home page.
        logging.info("Dummy: ")
        self.redirect("/")

app = webapp2.WSGIApplication(
    [('/', MainPage),
     ('/badroute', Dummy)],
    debug=True)
(I may be referring to Instances when I should be referring to Requests.)
So what's the objective? Do you want users that visit /badroute to be redirected to some /goodroute, or do you want /badroute to not hit GAE and incur cost?
Putting a Google Cloud load balancer in front could help.
For the first case you could set up a redirect rule (although you can do this directly within App Engine too, like you did in your code example).
If you just want it to not hit App Engine, you could set up the Google Cloud load balancer to route /badroute to some file in a GCS bucket instead of your GAE service:
https://cloud.google.com/load-balancing/docs/https/ext-load-balancer-backend-buckets
However, you wouldn't be able to use your *.appspot.com base URL. You'd get a static IP, which you should then map a custom domain to.
DISCLAIMER: I'm not 100% sure if this would work.
1. Create a new service dummy.
2. Create and deploy a dispatch.yaml (GAE Standard // GAE Flex).
3. Add the links you want to block to the dispatch.yaml and point them to the dummy service.
4. Set up the Identity Aware Proxy (IAP) and enable it for the dummy service.
5. ???
6. Profit
The idea is that the IAP will block the requests before they hit the dummy service. Since the requests never actually get forwarded to the service dummy you will not have an instance start. The bots will get a nice 403 page from Google's own infrastructure instead.
EDIT: Be sure to create the dummy service with 0 instances as the idea is to not have it cost money.
EDIT2:
So let me expand a bit on this answer.
You can have multiple GAE services running within one GCP project. Each service is its own app. You can have one service running a Python Flask app and another running a Java Spring Boot app. You can have each be either GAE Standard or GAE Flex. See this doc.
Normally all traffic gets routed to the default service. Using dispatch.yaml you can make requests to certain endpoints go to a specific service.
If you create the dummy service as a GAE Standard app, and you don't actually need it to do anything, you can then route all the endpoints that get abused to this dummy service using the dispatch.yaml. Using GAE Standard you can have the service use 0 instances (and 0 costs).
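A dispatch.yaml along those lines might look like this. The dummy service name and the /badroute path come from this thread, but treat the exact patterns as a sketch rather than a tested config:
# dispatch.yaml -- per-project routing rules for App Engine
dispatch:
  # Send the abused endpoint (and anything under it) to the "dummy" service.
  - url: "*/badroute"
    service: dummy
  - url: "*/badroute/*"
    service: dummy
It is deployed on its own with gcloud app deploy dispatch.yaml.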
Using the IAP you can then make sure only your own Google account can access this app (which you won't do). In effect this means that the abusers cannot really access the service at all, since the IAP blocks the requests before they ever hit it.
Note that dispatch.yaml is separate from any service; it's one of the per-project configuration files for GAE and is not tied to a specific service.
As stated, the dummy app doesn't actually need to do anything, but you do need to deploy it once, as that is what actually creates the service.
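For completeness, a minimal placeholder along these lines could be deployed as the dummy service. This is an assumption about what such a service might look like on the GAE Standard Python runtime, not code from the thread; it would sit alongside an app.yaml that sets service: dummy:
# main.py for the hypothetical "dummy" service.
# It should never actually serve traffic (IAP rejects requests before they
# reach it), but something has to be deployed once so the service exists
# and dispatch.yaml has a target to point at.
from flask import Flask

app = Flask(__name__)

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def catch_all(path):
    return "Nothing to see here.", 404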
Consider using Cloudflare to mitigate bot abuse, customize firewall rules regarding route access, rate-limit IPs, etc. This can be combined with the Google Cloud load balancer if you'd like, as mentioned in https://stackoverflow.com/a/69165767/806876.
References
Cloudflare GCP integration: https://www.cloudflare.com/integrations/google-cloud/
There is a little information I did not provide in my question about my app.yaml:
handlers:
- url: /.*
  script: mainapp.app
By simply removing .* from the url specification, no Instance is started for such requests. The user gets Error: Not Found, though. So that satisfies my needs.
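The narrowed handler would presumably look something like this (a sketch based on the description above, not the exact file):
handlers:
- url: /
  script: mainapp.app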
Edo Akse's answer pushed me to this solution after reading here, so I am accepting his answer. I am still not clear how to implement Edo's answer, though.
We have created an app for a production facility that is very simple, using Django and Python. But throughout prototyping we used the runserver command and localhost. The problem is this: we want to deploy the app without using localhost and the command line every time. The people using it won't be able to do this. It will be implemented on one computer, so it shouldn't be that challenging. The app pulls data from one database and stores data in another. It would be nice to have our own URL. Do we need to do it through WSGI? Apache? I know the problem is simple, but there seem to be so many ways to deploy and many of them are overcomplicated for our needs.
Follow-up question: I read that just using localhost isn't the best for this type of thing. Is this true?
Any help would be great
It sounds like you want to deploy the app live. So, I'd recommend using a dynamic hosting service like AWS/Azure/Firebase etc. If you want your own URL, purchase a domain, and in the configuration for the domain set up a CNAME record as well so you can point your domain at the live instance on the cloud.
Localhost is better used for testing and making changes without affecting the client; you then deploy/push to the cloud instance for production.
I have a simple portfolio website with some HTML and CSS files in the root directory of the site hosted by Dreamhost. I also have a Django app that I'd like to place in a subdomain of this same website. However, Heroku will be serving the Django app. I'm confused about how to organize and configure the whole portfolio/Django website. How would the system work using two different hosts? Should I integrate the static portfolio site into the Django project? Or do I keep them completely separate and have them live on their own servers? Sorry if my question doesn't make sense. I'm very confused.
As far as the internet's concerned, a subdomain is a completely separate website. You can point a subdomain at whatever address you like; the internet doesn't care that it's a completely separate host. You can host your system however you like: both on Dreamhost, both on Heroku, or one on each. The latter setup is the most complex, so we'll walk through that one here.
Let's say your site is example.com and you want the portfolio site to be portfolio.example.com. If your app's running on Heroku, it'll have a name similar to yourportfolio.herokuapp.com. So we need to do two things: tell Heroku that your app is served from portfolio.example.com, and tell the DNS system to point your subdomain at Heroku.
Pointing the subdomain to Heroku
Presuming your domain name is hosted on Dreamhost, go to the Domains section of the control panel, then Manage Domains. Under example.com is a link called DNS. You need to add a custom CNAME record; set name to portfolio, type to CNAME, and value to yourportfolio.herokuapp.com.. CNAMEs are a way of setting up aliases on the web; they mean "this site is also known as foo".
Telling Heroku to serve your app
Within your Heroku project, run heroku domains:add portfolio.example.com.
Heroku has documentation about subdomains here, which is a useful overview of the process as well as giving details of more complex setups.
I have a heroku app using python and flask. It currently serves a whole domain and all endpoints.
http://*.domain.com/* -> one heroku app
I like to explore different languages and frameworks, and want to rewrite different sections of the website. Is that possible?
It would work out to something like
http://www.domain.com/python-stuff (a python/flask app)
http://www.domain.com/ruby-stuff (a ruby/sinatra app)
http://www.domain.com/java-play-stuff (a java/playframework app)
All I can see is possibly having one app that handles www and all subdirs, and redirects to a different subdomain instead.
http://www.domain.com/ruby-stuff -> http://ruby-stuff.domain.com/ruby-stuff
http://www.domain.com/java-play-stuff -> http://java-play-stuff.domain.com/java-play-stuff
http://www.domain.com/{{ everything else }} -> the original python flask app
I don't want to do this because then I'd have to restructure all of my OpenID users to point to www.domain.com for their seed URL explicitly, instead of relying on all logins coming from the same subdomain (among other reasons, like cookies, which are also tied up with OpenID).
Thoughts?
Set up an Amazon CloudFront distribution and have it map specific paths to different origin servers.
What are the best practices and solutions for managing dynamic subdomains in different technologies and frameworks? I am searching for something to implement in my Django project, but the solutions I have seen don't work. I also tried to use Apache's mod_rewrite to send requests from subdomain.domain.com to domain.com/subdomain, but couldn't figure out how to do it with Django.
UPDATE: What I need is to create virtual subdomains for my main domain using usernames from the site. So, if I have a new registered user called jack, when I go to jack.domain.com it should perform some operations, just as if I had gone to domain.com/users/jack. But I don't want to create an actual subdomain for each user.
You may be able to do what you need with apache mod_rewrite.
Obviously I didn't read the question clearly enough.
As for how to do it in Django: you could have some middleware that looks at the server name and redirects according to that (or even sets a variable). You can't do it with the bare URL routing system, as that only has path information, not hostname info.
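A rough sketch of such middleware, assuming the new-style Django middleware API and the domain.com/users/jack mapping from the question; the names (BASE_DOMAIN, SubdomainMiddleware) and the exact behaviour are illustrative, not a drop-in solution:
# myproject/middleware.py -- hypothetical subdomain middleware
from django.http import HttpResponseRedirect

BASE_DOMAIN = "domain.com"  # assumption: the site's main domain

class SubdomainMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        # Note: DNS (a wildcard record) and ALLOWED_HOSTS must allow
        # *.domain.com for these requests to reach Django at all.
        host = request.get_host().split(":")[0]  # strip any port
        if host.endswith("." + BASE_DOMAIN):
            subdomain = host[: -len("." + BASE_DOMAIN)]
            if subdomain != "www":
                # Redirect to the path-based URL for that user...
                return HttpResponseRedirect(
                    "http://%s/users/%s/" % (BASE_DOMAIN, subdomain))
                # ...or, instead of redirecting, set a variable for the views:
                # request.subdomain = subdomain
        return self.get_response(request)
The class would be added to MIDDLEWARE in settings.py; the commented-out alternative corresponds to the "sets a variable" option above.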