I have built an attendance system for a company based on the Bottle web framework. It works perfectly fine, but it does not allow multiple sessions, i.e. I want to access the same localhost on Firefox, Microsoft Edge, and Google Chrome at the same time. Can anyone help me in this regard?
I have tried this code with beaker, but it did not solve my problem:
from beaker.middleware import SessionMiddleware

session_opts = {
    'session.type': 'file',
    'session.cookie_expires': 300,
    'session.data_dir': './data',
    'session.auto': True
}
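Defining session_opts on its own is not enough: for beaker to take effect, the Bottle app has to be wrapped in SessionMiddleware and the wrapped app passed to run(). A minimal sketch, assuming a plain Bottle app (the route, host, and port below are placeholders, not from the original system):

```python
from bottle import Bottle, request, run
from beaker.middleware import SessionMiddleware

app = Bottle()

session_opts = {
    'session.type': 'file',            # one session file per browser cookie
    'session.cookie_expires': 300,
    'session.data_dir': './data',
    'session.auto': True,
}

@app.route('/')
def index():
    # beaker stores the session object in the WSGI environ
    session = request.environ['beaker.session']
    session['visits'] = session.get('visits', 0) + 1
    return 'Visits in this session: %d' % session['visits']

# Wrap the Bottle app and run the wrapped app, not `app` itself
wrapped_app = SessionMiddleware(app, session_opts)
run(app=wrapped_app, host='localhost', port=8080)
```

Since each browser stores its own session cookie, Firefox, Edge, and Chrome are then tracked as three independent sessions.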
I have a public cloud VM which has public IP say 160.159.158.157 (this has a domain registered against it).
I have a Django application (backend) which is CORS-enabled and serves data through port 8080.
I have a React app running on the same VM on a different port (3000), which accesses the Django app and is supposed to produce a report.
The problem is that, when I use http://<domain-name>:8080/api/ or http://<public-ip>:8080/api/, my application is working fine,
but when I try to fetch data from localhost like http://localhost:8080/api/ or http://127.0.0.1:8080/api/, the React app fails to fetch data with the following error:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://127.0.0.1:8080/api/. (Reason: CORS request did not succeed). Status code: (null).
Here's what I've tried:
axios.get(baseURL, {
  headers: {
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Methods': 'GET,PUT,POST,DELETE,PATCH,OPTIONS',
  }
})
but it didn't work. What should I do?
Looks like you've gotten confused about where those headers belong. You're setting them on the request to the backend, but they need to be set on the response from the backend. After all, it wouldn't be very good security if someone making a request could simply say "yeah, I'm ok, you should trust me".
The problem is going to be somewhere in your backend Django app, so double check the CORS config over there.
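If the backend uses the django-cors-headers package (an assumption; the question doesn't say how CORS was enabled), the fix lives in settings.py, roughly like this:

```python
# settings.py sketch, assuming the django-cors-headers package is installed
INSTALLED_APPS = [
    # ... the rest of your apps ...
    'corsheaders',
]

MIDDLEWARE = [
    'corsheaders.middleware.CorsMiddleware',  # place as high as possible
    # ... the rest of your middleware ...
]

# Explicitly allow the origins the React app is served from,
# including the localhost variants that were failing
CORS_ALLOWED_ORIGINS = [
    'http://localhost:3000',
    'http://127.0.0.1:3000',
]
```

With that in place, Django itself attaches the Access-Control-Allow-Origin header to its responses, and the axios call needs no CORS headers at all.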
I managed to get it working locally (different cluster, separate settings.py), but not after deploying to Heroku.
Heroku automatically adds a DATABASE_URL config var with a postgresql database, and I cannot remove/edit it.
MongoDB Atlas: I've set the cluster to allow IPs from everywhere, and the password has no funny characters.
django production settings.py
DATABASES = {
    'default': {
        'ENGINE': 'djongo',
        'NAME': 'DBProd',
        'CLIENT': {
            'host': "mongodb+srv://XXX:YYY@ZZZ.pc4rx.mongodb.net/DBProd?retryWrites=true&w=majority",
        }
    }
}
I ran migrate straight after the deployment and it was all green OKs:
heroku run python manage.py migrate
Everything works function-wise; it's just that the data is not stored in the MongoDB Atlas cluster. There are lots of posts on this from various sites, but they all have different instructions... Some of the posts I tried to follow:
https://developer.mongodb.com/how-to/use-atlas-on-heroku/
Django + Heroku + MongoDB Atlas (Djongo) = DatabaseError with No Exception
Connecting Heroku App to Atlas MongoDB Cloud service
-- A very confused beginner
For whoever lands on this: adding 'authMechanism': 'SCRAM-SHA-1' below 'host' will fix the issue.
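Applied to the settings.py from the question, the CLIENT block would look like this (credentials are placeholders, as in the original):

```python
# Django DATABASES config for djongo + MongoDB Atlas,
# with the authMechanism key added below 'host'
DATABASES = {
    'default': {
        'ENGINE': 'djongo',
        'NAME': 'DBProd',
        'CLIENT': {
            'host': "mongodb+srv://XXX:YYY@ZZZ.pc4rx.mongodb.net/DBProd?retryWrites=true&w=majority",
            'authMechanism': 'SCRAM-SHA-1',
        }
    }
}
```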
This happened to me as well. In my case Heroku had somehow added postgresql as an add-on for my app, so I had to delete that first.
Check the Heroku documentation about postgresql and how to destroy an add-on:
https://devcenter.heroku.com/changelog-items/438
https://devcenter.heroku.com/articles/managing-add-ons
Then you need to configure MongoDB as your database by adding a MONGODB_URI config var with your database URL. You should be ready to connect to your MongoDB now!
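In settings.py that config var can then be read from the environment; a sketch (MONGODB_URI is a var you add yourself, not one Heroku sets, and the fallback URL is just an assumed local default):

```python
import os

# Read the Atlas connection string from Heroku's config vars,
# falling back to a local MongoDB for development
MONGODB_URI = os.environ.get(
    'MONGODB_URI',
    'mongodb://localhost:27017/DBProd',
)

DATABASES = {
    'default': {
        'ENGINE': 'djongo',
        'NAME': 'DBProd',
        'CLIENT': {
            'host': MONGODB_URI,
            'authMechanism': 'SCRAM-SHA-1',  # per the earlier answer
        },
    }
}
```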
The following doc also helps:
https://www.mongodb.com/developer/how-to/use-atlas-on-heroku/
I've been having the same issue. Everything works fine locally. The problem is when deploying on Heroku. I have added 'authMechanism': 'SCRAM-SHA-1' and I have also configured MongoDB as my database by adding a MONGODB_URI config var. Heroku still autoconfigures DATABASE_URL config var with a postgresql, and I cannot remove/edit it.
In my JavaScript code I used fetch('127.0.0.1:8000/<something>'), which I specified in my urls.py, and that way views.py read data from pymongo and returned it as a response.
After deploying my app to Heroku (and hardcoding the 127.0.0.1:8000 to <myappname>.heroku.com) the same fetch method seems to return [] instead of json containing values from MongoDB.
This is the most similar issue I found in a post; I hope I'm not off-topic.
I'm writing code in Python whose task is to suspend a user and transfer his data to another user.
Unfortunately, I discovered that this functionality doesn't work.
According to the documentation of the API
https://developers.google.com/admin-sdk/data-transfer/v1/reference/transfers/insert
I created an object in code which looks like this in Python:
{
    'oldOwnerUserId': 'XXXXXXXXXXXXXXXXXXXXX',
    'newOwnerUserId': 'YYYYYYYYYYYYYYYYYYYYY',
    'applicationDataTransfers': [
        {'applicationId': 'UUUUUUU'}
    ]
}
Where:
"XXXXXXXXXXXXXXXXXXXXX" is the old user's ID
"YYYYYYYYYYYYYYYYYYYYY" is the new user's ID
"UUUUUUU" is my Google Drive application ID
Then I run this code:
transfer_service.transfers().insert(body=transfer_data).execute()
While running the script there are no errors. After a few seconds I receive an email saying "Data transfer successful", but when I look into the newly created directory on Drive I see that it is empty. I tested this several times with the same result.
I'm sure that it isn't a problem with:
Credentials - I'm able to get the IDs of the users and the ID of Google Drive
User and Drive IDs - I have checked them via the web panel at the above link
I tried to do the same thing via https://admin.google.com and the result is the same.
What causes that problem?
After contacting Google, they told me to add the transfer parameters described at:
https://developers.google.com/admin-sdk/data-transfer/v1/parameters
The working object now looks like this:
{
    'oldOwnerUserId': 'XXXXXXXXXXXXXXXXXXXXX',
    'newOwnerUserId': 'YYYYYYYYYYYYYYYYYYYYY',
    'applicationDataTransfers': [
        {
            'applicationId': 'UUUUUUU',
            'applicationTransferParams': [
                {
                    'key': 'PRIVACY_LEVEL',
                    'value': ['PRIVATE', 'SHARED']
                }
            ]
        }
    ]
}
According to my conversation with Google, the script works without applicationTransferParams, but then it doesn't know what type of data it needs to transfer.
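Putting it together, a small helper can build the request body before passing it to insert(); the IDs below are placeholders, and transfer_service is assumed to be an authorized Admin SDK DataTransfer service object as in the question:

```python
def build_transfer_body(old_owner_id, new_owner_id, drive_app_id):
    """Build the body for transfers().insert(), including the
    PRIVACY_LEVEL parameter so Drive knows which files
    (private and/or shared) to transfer."""
    return {
        'oldOwnerUserId': old_owner_id,
        'newOwnerUserId': new_owner_id,
        'applicationDataTransfers': [
            {
                'applicationId': drive_app_id,
                'applicationTransferParams': [
                    {
                        'key': 'PRIVACY_LEVEL',
                        'value': ['PRIVATE', 'SHARED'],
                    }
                ],
            }
        ],
    }

transfer_data = build_transfer_body('old-user-id', 'new-user-id', 'drive-app-id')
# transfer_service.transfers().insert(body=transfer_data).execute()
```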
I am running a CherryPy application using Python 2.7 on Windows (using the CherryPy version from PyPI). The application runs on the intranet and is basically structured like the code below.
To monitor this application with New Relic I tried to wrap it as explained in the New Relic documentation. But it did not appear in the New Relic backend when started that way, although the CherryPy application worked.
I also tried the manual method, inserting the New Relic agent one line after def main():. This made the application appear in the New Relic backend, but it monitored nothing. All diagrams empty.
I have already searched the web for hours and asked some colleagues, without any progress.
From the New Relic documentation I suspect I have to choose a different structure or technology in my CherryPy application; they do not use quickstart. So my question is: how do I convert my application so that it fits the New Relic way of monitoring applications?
This is more or less the main file of the application:
# -*- coding: utf-8 -*-

def main():
    import cherrypy
    from auth import AuthController
    from my_routes import RouteOne, RouteTwo

    dispatcher = cherrypy.dispatch.RoutesDispatcher()
    dispatcher.explicit = False
    dc = dispatcher.connect
    dc(u'd_home', u'/', RouteOne().index_home)
    dc(u'd_content', u'/content/', RouteOne().index_content)
    dc(u'd_search', u'/search/:find', RouteTwo().index_search)

    conf = {
        u'/' : {
            u'request.dispatch' : dispatcher,
            u'tools.staticdir.root' : u'c:/app/src',
            u'tools.sessions.on' : True,
            u'tools.auth.on' : True,
            u'tools.sessions.storage_type' : u'file',
            u'tools.sessions.storage_path' : u'c:/app/sessions',
            u'tools.sessions.timeout' : 60,
            u'log.screen' : False,
            u'log.error_file' : u'c:/app/log/error.txt',
            u'log.access_file' : u'c:/app/log/access.txt',
        },
        u'/app/public' : {
            u'tools.staticdir.debug' : True,
            u'tools.staticdir.on' : True,
            u'tools.staticdir.dir' : u'public',
        },
    }

    # ... some additional initialisation left out ...

    cherrypy.tree.mount(None, u'/', config=conf)
    cherrypy.config.update({
        'server.socket_host': 'myhost.test.com',
        'server.socket_port': 8080,
    })

    from auth import check_auth
    cherrypy.tools.auth = cherrypy.Tool('before_handler', check_auth)

    cherrypy.quickstart(None, config=conf)

if __name__ == "__main__":
    main()
Please help me to structure the different parts (the config, the dispatch, the auth, and the routes) in a New Relic-compatible way, such as WSGI, so that I can monitor the application.
I am ready to do things differently where necessary; I know that with Python almost everything is possible.
So if this needs to be a WSGI application, how do I change it? I would prefer that over other methods (like paste).
I hope this can also help many other people, because I was not able to find anything specific about this, and I can imagine that many CherryPy applications out there are structured similarly. I spent a lot of time in the CherryPy docs but somehow was not able to put the different parts together.
The newrelic-admin wrapper script can be used for a CherryPy WSGI application which uses cherrypy.quickstart(). Once you have generated your agent configuration file, all you need to do is run:
NEW_RELIC_CONFIG_FILE=newrelic.ini newrelic-admin run-python app.py
where app.py is your script.
A simple example of an app.py script which works, including a route dispatcher is:
import cherrypy

class EndPoint(object):
    def index(self):
        return 'INDEX RESPONSE'

dispatcher = cherrypy.dispatch.RoutesDispatcher()
dispatcher.connect(action='index', name='endpoint', route='/endpoint',
                   controller=EndPoint())

conf = { '/': { 'request.dispatch': dispatcher } }

cherrypy.quickstart(None, config=conf)
You can use that example to verify that things work in your specific environment and with your package versions.
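If you do want an explicit WSGI object instead of quickstart (the question's stated preference), note that cherrypy.tree is itself a WSGI callable once the app is mounted. A sketch based on the same example (the `application` name is a convention many WSGI servers look for, not something CherryPy requires):

```python
import cherrypy

class EndPoint(object):
    def index(self):
        return 'INDEX RESPONSE'

dispatcher = cherrypy.dispatch.RoutesDispatcher()
dispatcher.connect(action='index', name='endpoint', route='/endpoint',
                   controller=EndPoint())
conf = {'/': {'request.dispatch': dispatcher}}

# Mount the app; cherrypy.tree is a WSGI callable, so any WSGI
# server (and New Relic's WSGI instrumentation) can serve it.
cherrypy.tree.mount(None, '/', config=conf)
application = cherrypy.tree
```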
I'm working with some friends to build a PostgreSQL/SQLAlchemy Python app and have the following line:
engine = create_engine('postgresql+pg8000://oldmba@localhost/helloworld')
Newbie question: instead of having to edit in "oldmba" (my username) every time I git pull someone else's code, what's the simple way to make that line equally applicable to all users so we don't have to constantly edit it? Thanks in advance!
Have a config file with your settings.
It can store the data in a Python dictionary or in variables.
The config file can import from a local_config.py file. This file can be ignored in your .gitignore. It can contain your individual settings: username, password, database URLs, pretty much anything that you need to configure and that may differ depending on your environment (production vs. development).
This is how settings in Django projects are usually handled. It allows multiple people to develop the same project with different settings. You might also want a 'database_url' field or similar, so that in production you can point at a different server while in development you use 'localhost'.
# config.py
database = {
    'username': 'production_username',
    'password': 'production_password'
}

try:
    from local_config import *
except ImportError:
    pass

# local_config.py
database = {
    'username': 'your_username',
    'password': 'your_password'
}

# your application code
from config import *

engine = create_engine('postgresql+pg8000://{0}@localhost/helloworld'.format(database['username']))