I'm using the New Relic addon on a Django/Python Heroku app and I would like to log deployments, but I can't figure out how to do it.
Heroku offers an HTTP POST deploy hook, but it seems too restrictive to meet the requirements of the New Relic REST API: that API requires an x-api-key header, and the parameter names don't match (see here for details).
I haven't been able to find any information about this anywhere. Am I missing something? Is there another way to do this?
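For reference, this is roughly the call the hook would need to make. It's a sketch against the legacy deployments endpoint, so double-check it against the current New Relic REST API docs; the app name, revision, and the `NEW_RELIC_API_KEY` environment-variable name are placeholders of my own:

```python
import os

NEWRELIC_DEPLOYMENTS_URL = "https://api.newrelic.com/deployments.xml"

def build_deployment_request(app_name, revision, description=""):
    """Assemble the headers and form fields for one deployment record.

    The API key is read from the environment so it never lands in the
    repo; the NEW_RELIC_API_KEY variable name is just a convention.
    """
    return {
        "headers": {"x-api-key": os.environ.get("NEW_RELIC_API_KEY", "")},
        "data": {
            "deployment[app_name]": app_name,
            "deployment[revision]": revision,
            "deployment[description]": description,
        },
    }

def record_deployment(app_name, revision, description=""):
    """POST the record to New Relic; returns the HTTP response."""
    import requests  # third-party; imported lazily so the sketch loads without it
    req = build_deployment_request(app_name, revision, description)
    return requests.post(NEWRELIC_DEPLOYMENTS_URL, **req)
```

Running something like `record_deployment("my-heroku-app", "v42")` from a post-deploy script works, but that's exactly the step I can't wire into Heroku's deploy hook.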
Thanks.
This should happen automatically, but New Relic's deployment tracking integration with Heroku has been broken since approximately Nov 1st.
I have a support ticket open on this issue and it should be fixed sometime in the next week or so.
EDIT (11/23/2013):
Heroku acknowledged this is a bug caused by an overhaul of the New Relic addon. Here's what they gave as the root cause on my support ticket:
I've got an update on this, but no resolution yet. To give you some context (given you've asked how this has happened 3 times): New Relic were the very first add-on in the Marketplace, and as a result there has been a lot of gnarly code very specific to their implementation. On their side they've also had to […]. And as you've gathered, unfortunately much of it was not well tested. We've been working with New Relic all year to finally fix that, and we've moved them across to the standard API that all other add-ons and most PaaS providers now adhere to. Any new customers since May have been on that new integration, so we've been testing it out for 6 months. The final part of that process was to remove customers from the legacy integration, and that occurred as part of the migration onto the new pricing we announced at the start of this month.
It's only after this migration that we realized there was no support for deploy notifications. New customers since May had never been exposed to the feature, so they didn't notice it was missing, and it appears none of the legacy customers we tested with in October noticed it was missing either. To rectify the situation we've had to try and build this feature out in the Add-ons API. That's been documented and deployed, and we're now working with New Relic to help their engineers implement it as soon as they possibly can.
I don't think you can view my support ticket, but you're welcome to reference it with Heroku if you file your own ticket:
https://help.heroku.com/tickets/102722
EDIT (01/06/2014):
New Relic and Heroku appear to have fixed their integration, and deploys are now being tracked successfully. This appears to have taken effect sometime on or before 1/2/2014.
Related
Having experience with other cloud hosting providers, I tried DigitalOcean for the first time to set up a Wagtail app (it should eventually become a staging/production environment with a pipeline). Following this tutorial (but deploying with a SQLite database rather than a full one), everything works fine. The app is recognized as a Python app when cloned from GitHub, and the default build process (via the Python buildpack) followed by running the Gunicorn server executes as expected – there is no Dockerfile provided. The frontend then works as expected on first opening. The admin panel lets me log in, but when I navigate to page editing it destroys the session and I'm faced with the login panel again – probably an automatic logout due to an expired session. The Django admin behaves the same way.
The tutorial uses get_random_secret_key. Maybe this is not accepted by DigitalOcean? Another possibly important detail: the set-cookie header initially contains an expiry date one year in the future (as configured), but after the session is destroyed it's set to 1970 (probably something like a null value). That's presumably just the indicator of the forcibly ended session.
Since it's not easy to tell whether this is caused by the code or by security measures, I didn't share code, but I can of course do so if needed. It's probably an issue not just for me, and a hint at the cause could help other developers struggling with this too.
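One suspicion I'd like to rule out: if settings.py calls get_random_secret_key() unconditionally, every restart or extra worker process gets a fresh SECRET_KEY, which invalidates all signed session cookies and looks exactly like a forced logout. A stdlib-only sketch of the fix (the DJANGO_SECRET_KEY variable name is just my convention, not something from the tutorial):

```python
# settings.py sketch – keep SECRET_KEY identical across restarts and workers.
# If the key is regenerated on every process start, each restart or extra
# Gunicorn worker invalidates every signed session cookie.
import os
import secrets

def resolve_secret_key(env=None):
    """Prefer a stable key from the environment; fall back to a throwaway
    random key, which is only acceptable for one-off local runs."""
    env = os.environ if env is None else env
    return env.get("DJANGO_SECRET_KEY") or secrets.token_urlsafe(50)

SECRET_KEY = resolve_secret_key()
```

In a real settings.py you'd typically use django.core.management.utils.get_random_secret_key() only once, to generate the value, then pin it in the platform's environment-variable settings (DigitalOcean's App Platform lets you set those per component).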
I'm tasked with creating a web app (I think?) for my job that will tracker something in our system. It'll be an internal tool that staff uses to keep track of the status of one of the things we do. It should look like trello, with cards that drag from step to step. That frontend exists, but my job is to make the system update when the cards are dragged. This requires using an API in Python and isn't that complicated to grab from/update. I have no idea how to put all of this together. My job is almost completely nontechnical and there's no one internally who knows what I'm doing except for me. I'm in so over my head here and have no idea where to begin. Is this something I should deploy on Elastic Beanstalk? EC2? How do I tie this together and put it somewhere?
Are you trying to pull in live data from Trello or from your company's own internal project management tool?
An EC2 instance might be useful, but honestly it may be completely unnecessary if your company has its own servers. EC2 is basically a collection of rental computers to help with scaling. I have never used Beanstalk, so my input would be useless there.
From what I can gather from the question, you could have a Python script running to pull from the API and make the changes without an EC2 instance.
First thing you should do is gather as much information about what the end product should look like. From your question, I have the feeling that you have only a vague idea of what the stakeholders want. Don't be afraid to ask more clarification about an unclear task. It's better to spend 30 minutes discussing and taking note than to show the end-product after a month and realizing that's not what your boss/team wanted.
Questions I would ask
Who is going to be using this app? (technical or non-technical person)
For what purpose is this being developed?
Does it need to be on the web or can it be used locally?
How many users need to have access to this application?
Are we handling sensitive information with this application?
Will this need to be augmented with other functionality at some point?
This is just a sample of what I would ask; during the conversation with the stakeholders, a lot more will pop up for sure.
What I think you have to do
You need to make a monitoring system for the tasks that your development team needs to do (like a Kanban board).
What I think you already have
A frontend with cards that are draggable into each bin. I also assume that you can create and delete cards in the frontend. The frontend is most likely written in React, Angular, or Vue.js. You might also have no frontend framework (a mix of jQuery and vanilla JS), but frontend developers usually end up picking a framework of some sort to help with development.
A backend API in Python (most likely in Flask or with Django REST framework) that communicates with a SQL database like PostgreSQL or a document database like MongoDB.
I'm making a lot of assumptions here, but your aim should be to understand the technology you will be working with in order to check which hosting would work best. For instance, if the database that is set up is a MySQL database, you might have trouble with certain hosting providers.
What I think you are missing
Currently the frontend and the backend don't communicate with each other. When you drag a card, the change won't persist if you refresh the page. Also, all of this is sitting on your computer and cannot be used by anyone on your staff. You first need to connect the frontend with the backend so that the application has persistence. Then you need to deploy the application somewhere so that it is reachable by your staff.
What I would do first is work locally to make sure the persistence layer is working. This implies having the API server, the frontend server, and the database server running simultaneously on your computer during development. You should then fetch data from the API to know which cards are in the database, and create them visually in your frontend in the right spots.
Dropping a card into a new spot after dragging it should trigger a POST request to your API server to update the status of that particular card (look at your API's documentation to check what you need to send).
If the POST request was successful, the server should send back an updated version of the card statuses, and your application should then redraw the cards in the right spots (it won't make a visible difference, since they are already in the right spots, and your frontend framework most likely won't act on the response since the state hasn't changed). That's all I would do for that part.
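That round trip can be sketched independently of any framework. Everything here (field names, status values, the in-memory dict standing in for the database) is invented for illustration; map it onto whatever your API actually stores:

```python
# Sketch of the persistence step behind a card drag. The statuses and
# card fields are made up; your real API defines the actual contract.

VALID_STATUSES = {"todo", "in_progress", "done"}

def move_card(cards, card_id, new_status):
    """Update one card's status in the 'database' (a dict here) and return
    the full updated card list – mirroring an API that responds with the
    new state so the frontend can redraw."""
    if new_status not in VALID_STATUSES:
        raise ValueError("unknown status: %s" % new_status)
    if card_id not in cards:
        raise KeyError("no such card: %s" % card_id)
    cards[card_id]["status"] = new_status
    return list(cards.values())

# Usage: simulate dragging card 2 into the "done" column.
cards = {
    1: {"id": 1, "title": "Write spec", "status": "done"},
    2: {"id": 2, "title": "Build backend", "status": "in_progress"},
}
updated = move_card(cards, 2, "done")
```

In the real app, the frontend's drop handler issues the POST, and a view function wraps something like move_card around your actual database.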
I would then move to the deployment phase to make sure that whatever you did locally still works online. I would use Heroku to start, instead of jumping directly to AWS. Heroku is a service built on top of AWS that manages a lot of AWS's complexity for you. This is great for prototyping, and it means that when your stuff is ready you can migrate to AWS easily, confident that a working setup exists for your app. You might also be tied to your company's servers, which is another thing I would ask the stakeholders about (i.e. where can I put this application and where can't I).
The flow for a frontend + API + database application on Heroku is usually as follows. You create a GitHub repo for your frontend (make it private) and create an app on Heroku that watches this repository for changes. Whenever it sees a change, it redeploys the application for you at a specific Heroku subdomain. You will need to configure a Procfile that tells Heroku what to do with a given application type. This is where you need to double-check which frontend you are using, since that might change the Procfile. It's most likely a Node.js-based frontend (React, Angular, or Vue), so head over here for the documentation on how to put that online.
You will also need to make a repo for the backend, separate from the frontend; these two entities are distinct and only communicate through HTTP requests (frontend to backend) and JSON (backend to frontend). Follow the same idea as with the frontend to deploy; head over here.
Once you have these two online, you need to create a database on Heroku. This is done by adding a datastore to your API; head over here. There is some framework-specific configuration needed to make the API talk to an online database; you will find it in your framework's documentation. The database could also already be up and living on your own server; if that's the case, you just need to configure your online backend to talk to that particular database at a particular address.
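On the configuration side, Heroku hands your app the database location as a single DATABASE_URL environment variable. In Django you'd normally let a helper like dj-database-url do the parsing; this stdlib-only sketch (with a made-up URL of the shape Heroku provides) shows what that translation amounts to:

```python
# Heroku exposes its Postgres add-on as one DATABASE_URL environment
# variable. Libraries like dj-database-url turn it into Django settings;
# this stdlib-only sketch shows roughly what that translation does.
from urllib.parse import urlparse

def database_config_from_url(url):
    """Split a postgres://user:pass@host:port/name URL into the pieces a
    Django DATABASES entry needs."""
    parts = urlparse(url)
    return {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": parts.path.lstrip("/"),
        "USER": parts.username or "",
        "PASSWORD": parts.password or "",
        "HOST": parts.hostname or "",
        "PORT": str(parts.port or 5432),
    }

# Usage with an invented URL:
cfg = database_config_from_url(
    "postgres://alice:s3cret@ec2-1-2-3-4.compute-1.amazonaws.com:5432/d5name"
)
```

In practice, just `pip install dj-database-url` and call its parser from settings.py rather than rolling your own.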
Once all of the above is done, re-test your application to check that you get the same behavior as before. This is a usable MVP; however, there is no security layer. Anyone with the right URL could fetch your frontend and start messing around with your data.
There is more engineering that needs to be done to make this a viable end product. That leads to my final remark: why are you not using a product like Trello, Jira, or even GitHub Projects? If it is to save money on a subscription, I think you should factor in the cost of developing, securing, and maintaining this application.
Hope it helps!
One simple option is Heroku for deploying both your API and your frontend application.
I am a web backend developer. In the past, I've used a lot of Python and specifically django to create custom APIs to serve data, in JSON for instance, to web frontends.
Now, I am facing the task of developing a mobile backend that needs to provide services such as push notifications, geolocation, etc. I am aware of the existing mBaaS providers, which could definitely address a lot of the issues with the task at hand; however, the project requires a lot of custom backend code, async tasks, algorithms to perform calculations on the data that in turn trigger additional behavior, as well as an extensive back office.
Looking at the features of the popular mBaaS providers, I feel they are not able to meet all my needs; however, it would be nice to use some of their features, such as push notifications, instead of developing my own. Am I completely mistaken about mBaaS providers? Is this sort of hybrid approach even possible?
Thanks!
There are a ton of options out there. Personally, I'm still looking for the holy grail of mBaaS providers. I've tried Parse, DreamFactory, and most recently Azure Mobile Services.
All three are great for getting from PoC to v1, but the devil is always in the details. A few details to watch out for:
You sacrifice control for simplicity. Stay in the lanes and things should work; the moment you want to do something else, complexity creeps in.
You are at the mercy of their infrastructure. Yes, even Amazon and Azure go down from time to time. (Note: DreamFactory is a self-hosted solution.)
You are locked into their platform. Any extra code customizations you make with their hooks (i.e. Parse's "CloudCode" and Azure's API scripts) will most likely not port to another platform.
Given the learning curve and tradeoffs involved, I think you should just play the strong hand you already have. Why not host a Django app on Heroku? Add Django REST framework and you can basically get an mBaaS up and running in less than a day.
Heroku has plenty of third-party providers for things like push notifications, authentication mechanisms, and even search engines (Elasticsearch).
All that is required is to drop the right "pip install" code into your project and you are off and running.
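As a sketch of how little wiring that takes (the "api" app name is invented, and the auth choices are just reasonable defaults for a mobile client, not the only option), the settings side of such a DRF backend is roughly:

```python
# settings.py fragment (sketch) – app name and auth choices are assumptions.
INSTALLED_APPS = [
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "rest_framework",             # pip install djangorestframework
    "rest_framework.authtoken",   # token auth for mobile clients
    "api",                        # your app exposing the mobile endpoints
]

REST_FRAMEWORK = {
    "DEFAULT_AUTHENTICATION_CLASSES": (
        "rest_framework.authentication.TokenAuthentication",
    ),
    "DEFAULT_PERMISSION_CLASSES": (
        "rest_framework.permissions.IsAuthenticated",
    ),
}
```

From there, each mBaaS-style feature (push, search) becomes a Heroku add-on plus a pip package, rather than something baked into a vendor's platform.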
I asked this question a few weeks ago. Today I have actually written and released a standard Django application, i.e. a fully functional relational-DB-backed app (and consequently a fully functional Django admin), enabled by Google Cloud SQL. The only time I had to deviate from doing things the standard Django way was to send email (I had to do it the GAE way). My setup is GAE 1.6.4, Python 2.7, Django 1.3, using the following in app.yaml:
libraries:
- name: django
version: "1.3"
However, I need you to suggest clear, actionable steps to improve the cold-start response time of this Django app. I have a simple webapp2 site on GAE which does not hit the DB, and its cold response time is 1.56 s. The Django one, when cold, hits the DB with two queries (two count(*) queries over tables containing fewer than 300 rows each), and the response time is 10.73 s! Not encouraging for a home page ;)
Things that come to mind are removing the middleware classes I don't need and other Django-specific optimisations. However, tips that improve things from a GAE standpoint would also be really useful.
N.B. I don't want this to become a discussion about the merits of going for Django on GAE. I can mention that my personal Django expertise, and resulting development productivity, did bear considerably in adopting Django as opposed to other frameworks. Moreover with CloudSQL, it's easy to move away from GAE (hopefully not!) as the Django code will work everywhere else with little (or no) modifications. Related discussions about such topic can be found here and here.
I don't have a full answer, but I'm contributing since I'd like to find a solution as well. I'm currently using a running cron job (I actually need the cron job, so it's not only there to keep my app alive).
I've seen it discussed on one of the GAE/Python/Django mailing lists that just the time required to load all the Django files is significant compared to webapp, so removing Django components you don't use from the deployment should improve your startup time as well. I've been able to shave off about 3 seconds by removing certain parts of the contrib folder; I exclude them in my app.yaml.
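For reference, those exclusions live under skip_files in app.yaml. This fragment is only a guess at the shape; which contrib packages you can safely drop depends entirely on your INSTALLED_APPS:

```yaml
# app.yaml fragment (sketch) – prune unused django.contrib packages from
# the upload; keep anything your INSTALLED_APPS actually references.
skip_files:
- ^(.*/)?\.git.*
- ^(.*/)?django/contrib/(admin|admindocs|gis|comments|databrowse)/.*
```

Note that defining skip_files replaces the default exclusion patterns, so re-add any of the defaults you still want.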
My startup time is still around 6 seconds (full app, Django-nonrel, HRD). It used to be more like 4 when my app was simpler.
My suspicion is that Django verifies all its models on startup, and that processing time is significant. If you have time to experiment with an app with absolutely zero models, I'd be curious whether it makes any impact.
I'm also curious whether your two initial queries have any significant impact.
When there is no instance running (for example after a version upgrade, or when there has been no request for 15 minutes), a request will trigger the loading of a new instance, which takes about 10 s. So what you are seeing is normal.
So if your app is idle for periods longer than 15 minutes, you will see this behavior. One workaround is to have a cron job ping your instance every 10 minutes (though I believe Google frowns on that).
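That keep-alive job is just a cron.yaml entry pointing at any cheap handler; the /ping path here is invented and should map to a view that returns quickly and touches nothing expensive:

```yaml
# cron.yaml (sketch) – the /ping URL is a made-up handler of your own.
cron:
- description: keep one instance warm
  url: /ping
  schedule: every 10 minutes
```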
Update:
You can avoid this by enabling billing, then in the GAE admin under "Application settings" setting minimum Idle Instances to 1. Note: the min setting is not available on free apps, only max.
I've decided to write some applications using Facebook and Django (or even Twisted, but it doesn't matter), and now I can't choose the appropriate tools. I see there are many API wrappers for Facebook written in Python:
the official, but seemingly no-longer-supported, Python-SDK
the new and actively developed, but perhaps too new, Django-facebook
the good old, but unmaintained, pyfacebook
the simple and well-maintained, but undocumented, Fandjango
some other very primitive tools
I saw some similar questions here, but I've noticed that Facebook periodically introduces big changes into their API, so those recommendations may already be outdated, or new libraries may have appeared.
I'd also like to know the most significant differences between those libraries. And of course good documentation and tutorials are welcome.
I think Django Facebook is a good choice for you, but my opinion is biased: I wrote it for my startup Fashiolista.com and we run it in production. (It's quite large, so most edge cases have been resolved.)
Django Facebook also includes OpenFacebook, a Python API client for the Open Graph protocol. It's the only Python client I know of that is fully up to date and actively maintained.
Have a look at:
https://github.com/tschellenbach/Django-facebook
PS.
I just released some new decorators which make it very simple to get started. These decorators are indeed very new and have caused some bugs in the past days. The project itself is already a year old (since the Open Graph API was released) and otherwise quite stable.
http://www.mellowmorning.com/
The answer really depends on what it is you want to achieve as those APIs are pretty different.
pyfacebook - is for the older legacy API.
python-sdk - is for the "new" Open Graph protocol (I wouldn't say it's no longer supported, as it's just a thin wrapper over the Facebook Open Graph protocol, so it instantly supports all the new features Facebook provides, without needing dev work on the lib).
django-facebook - is higher-level than python-sdk; it helps you add Facebook connection features to your site and also seems to pave the way to creating apps that live "inside" Facebook, rather than just helping sites that live outside Facebook access Facebook data.
I've never heard of Fandjango, and GitHub seems to be down at the moment, so I can't comment on it.
If you just want to add user-login using facebook then something like django-socialauth might work out well for you.
If you want to start exploring the social graph then python-sdk is the way to go.
I'd also check whether the functions you want are supported by the Open Graph protocol; it has improved over the last year, but there is the odd thing it frustratingly doesn't support that the legacy API does...
The best documentation is Facebook itself; check out the Graph API Explorer – it's pretty fascinating...
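If you do go the thin-wrapper route, a Graph API read is just an HTTP GET. This stdlib-only sketch builds the request URL (the token is a placeholder you'd obtain via OAuth):

```python
# Minimal Graph API URL builder – no wrapper library needed for reads.
from urllib.parse import urlencode

GRAPH_ROOT = "https://graph.facebook.com"

def graph_url(path, access_token, **params):
    """Build a Graph API URL such as /me or /me/friends; the access token
    value here is a placeholder you'd obtain via OAuth."""
    params["access_token"] = access_token
    return "%s/%s?%s" % (GRAPH_ROOT, path.lstrip("/"), urlencode(params))

# Usage: fetch the result with urllib.request.urlopen(url) or requests.get(url).
url = graph_url("me", "PLACEHOLDER_TOKEN", fields="id,name")
```

This is roughly all python-sdk does under the hood, which is why it tracks new Graph API features without needing library updates.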
It depends what you are trying to do. I had the same problem and ended up using django-social-auth, which lets you log in via Facebook and many other social networks. It also lets you extract the token from those networks and then use it.
For the Facebook-specific stuff I use facebook-sdk, but since you have something managing the tokens, you could really replace it with any library if yours becomes outdated in the coming years. It also means you can add more social networks later on.