We have a number of Python services, many of which use Nginx as a reverse proxy. Right now, we examine requests in real time by tailing the logs found in /var/log/nginx/access.log. I want to make these logs publicly readable in aggregate on a webserver so people don't have to SSH into individual machines.
Our current infrastructure has fluentd (a tool similar to Logstash, I'm told) sending logs up to a centralized stats server, which has Elasticsearch and Kibana installed, with the idea being that Kibana would serve as the frontend for viewing our logs.
I know nothing about these services. If I wanted to view our logs in realtime, would this stack even be feasible? Can Elasticsearch provide realtime data with a mere second's delay? Does Kibana have out-of-the-box functionality for automatically updating the page as new log data comes in (i.e., does it have a socket connection with Elasticsearch)? Or am I falling into the wrong toolset?
Kibana is just an interface on top of Elasticsearch. It talks directly to Elasticsearch, so the data in it is as realtime as the data you are feeding into Elasticsearch. In other words, it's only as good as your collectors (fluentd in your case).
It works by having you define time ranges, which it uses to query data from Elasticsearch; you can then have it continuously search for keywords and visualize that data.
If by "realtime" you mean that you want the graphs to move/animate - this is also possible (its called "streaming dashboards"); but that's not the real power of kibana - the real power is a very rich query engine, drill down into time series, do calculations (top x over period y).
If all you want is a nice visual/moving thing to put on the wall TV, this is possible with Kibana, but keep in mind you'll have to store everything in Elasticsearch, so unless you plan on doing some other analysis you'll have to adjust your configuration. For example, keep a really short TTL/retention for the messages so that once they are visualized they are no longer available, or filter fluentd to only send across those events that you want to plot. Otherwise you'll have a disk space problem.
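One hedged way to implement the short-retention idea, assuming the logs are written into one index per day: periodically delete indices older than a couple of days with elasticsearch-py (the index naming pattern and the cutoff are assumptions; a tool like Curator does the same job more robustly):

from datetime import datetime, timedelta
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])
cutoff = datetime.utcnow() - timedelta(days=2)

# Assumes daily indices named like fluentd-nginx-2015.06.01.
for name in es.indices.get(index="fluentd-nginx-*"):
    day = datetime.strptime(name.rsplit("-", 1)[-1], "%Y.%m.%d")
    if day < cutoff:
        es.indices.delete(index=name)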
If that is all that you want, it would be easier to grab some JavaScript charting library and use that in your portal.
I have the "access.log (or other logs) - logstash (or other ES indexer) - Kibana" pipeline setup for a number of services and logs and it works well. In our case it has more than a second of delay but that's because of buffering in logs or the ES indexer, not because of Kibana/ES itself.
You can setup Kibana to show only the last X minutes of data and refresh every Y seconds, which gives a decent real-time feel - and looks good on TVs ;)
Keep in mind that Kibana can sometimes issue pretty bad queries which can crash your ES cluster (although this seems to have vastly improved in more recent ES and Kibana versions), so do not rely on this as a permanent data store for your logs, and do not share the ES cluster you use for Kibana with apps that have stronger HA requirements.
As Burhan Khalid pointed out, this setup also gives us the ability to drill down and study specific patterns in detail, which is super useful ("What's this spike on this graph?" - zoom in, add a couple of filters, look at a few example log lines, filter again - mystery solved). I think saving us from having to dig somewhere else to get more details when we see something weird is actually the best part of this solution.
I’m trying to build a simple pick up and delivery app. I want a user to submit a pickup location and delivery location.
Then, as soon as the user submits, I want to be able to get the locations of the riders' apps, but I don't know how to go about it. It's just like a normal Uber-style app, for example, that searches the drivers' locations and calculates the nearest one.
Calculating the nearest one is not the issue, as I can do that with the Google Maps API, but how can I get the rider's app location from the backend? Thank you in advance.
There is a good plugin that does a lot of the heavy lifting here:
https://pub.dev/packages/location
The nice thing about this plugin is that it handles a lot of the permission stuff for you (although you still have to add NSLocationWhenInUseUsageDescription and NSLocationAlwaysUsageDescription to the Info.plist for iOS).
If, then, you want to read data from Device A on Device B, the data flowpath would look like this:
- Device A gets own location
- Device A uploads location to database (such as firebase)
- Device B queries, or directly receives data from firebase
- Device B reads location data from Device A.
I would recommend using Firebase for this transfer, since it's free to get it off the ground. A good place to start would be here: https://www.youtube.com/watch?v=m9uVcubnVfc
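As a rough illustration of that flow: the apps themselves would normally use the Firebase/FlutterFire SDKs, but the Realtime Database also exposes a plain REST API, which makes the data path easy to sketch in Python (the database URL, paths, rider ID, and open security rules below are all assumptions):

import requests

DB = "https://your-project-id.firebaseio.com"   # hypothetical Realtime Database URL

# Device A (a rider's app) periodically uploads its current position.
requests.put(DB + "/riders/rider123/location.json",
             json={"lat": 6.5244, "lng": 3.3792})

# Device B (the customer's app or your backend) reads the rider positions,
# then picks the nearest one, e.g. with the Google Maps APIs.
riders = requests.get(DB + "/riders.json").json()
print(riders)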
So I'm trying to get data from an API which has a max call limit of 60/min. If I go over this it will return a response [429] too many requests.
I thought maybe a better way was to keep requesting the API until I get a response [200] but I am unsure how to do this.
import json
import requests

r = requests.get("https://api.steampowered.com/IDOTA2Match_570/GetTopLiveGame/v1/?key=" + key + "&partner=0")
livematches = json.loads(r.text)['game_list']
So it usually runs, but if the API returns anything other than a response [200], my program will fail and anyone who uses the website won't see anything.
Generally with cases like this, I would suggest adding a layer of cache indirection. You definitely don't want to (and can't) solve this from the frontend like you suggested, since that's not what the frontend of your app is meant for. Sure, you can add the "wait" ability, but all someone has to do is pull up Chrome Developer Tools to grab your API key and then call the API as many times as they want. Think of it like this: say you have a chef who can only cook 10 things per hour. Someone can easily come into the restaurant, order 10 things, and then nobody else can order anything for the next hour, which isn't fair.
Instead, add a "cache" layer, which means that you only call the Steam API every couple of seconds. If 5 people request your site within, say, 5 seconds, you only go to the Steam API on the first call, then you save that response. To the other 4 (and anyone else who comes within those next few seconds), you return a "cached" version.
The two main reasons for adding a cache API layer are the following:
You would stop exposing the key from the frontend. You never want to expose your API key to the frontend directly like this, since anyone could just take your key, run many requests, and then bam, your site is now down (a denial of service attack becomes trivial). You would instead have users hit your own mysite.io/getLatestData endpoint, which doesn't need the key on the client side since it would be securely stored in your backend.
You won't run into the rate limiting issue. Essentially, if your cache only hits the API once every minute, you'll never hit a point where users can't access your site due to the API limit, since it'll return cached results to the user.
This may be a bit tricky to visualize, so here's how this works:
You write a little API in your favorite server-side language. Let's say NodeJS. There are lots of resources for learning the basics of ExpressJS. You'd write an endpoint like /getTopGame that is attached to a cache like redis. If there's a cached entry in the redis cache, return that to the user. If not, go to the steam API and get the latest data. Store that with an expiration of, say, 5 seconds. Boom, you're done!
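Since the question's code is Python, here's the same idea sketched with Flask and a tiny in-process cache instead of NodeJS + redis (the endpoint name, the 5-second expiry, and the key handling are assumptions, not a production setup):

import time
import requests
from flask import Flask, jsonify

app = Flask(__name__)
API_KEY = "stored-only-on-the-server"        # hypothetical; never shipped to the browser
CACHE = {"data": None, "fetched_at": 0.0}
CACHE_TTL = 5  # seconds before we bother the Steam API again

@app.route("/getTopGame")
def get_top_game():
    now = time.time()
    if CACHE["data"] is None or now - CACHE["fetched_at"] > CACHE_TTL:
        r = requests.get(
            "https://api.steampowered.com/IDOTA2Match_570/GetTopLiveGame/v1/",
            params={"key": API_KEY, "partner": 0},
        )
        CACHE["data"] = r.json()["game_list"]
        CACHE["fetched_at"] = now
    return jsonify(CACHE["data"])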
It seems a bit daunting at first, but since you said you're a college student, this is a great way to learn a lot about how backend development works.
I'm brand new to using the Elastic Stack, so excuse my lack of knowledge on the subject. I'm running the Elastic Stack on a Windows 10 corporate work computer. I have Git Bash installed for a bash CLI, and I can successfully launch the entire Elastic Stack. My task is to take log data that is stored in one of our databases and display it on a Kibana dashboard.
From what my team and I have reasoned, I don't need to use Logstash, because the database that the logs are sent to is effectively our 'log stash', so using the Logstash service would be redundant. I found a nifty diagram on freeCodeCamp, and from what I gather, Logstash is just the intermediary for retrieving logs from different services. So instead of using Logstash, since the log data is already in a database, I could just do something like this:
USER ---> KIBANA <---> ELASTICSEARCH <--- My Python Script <--- [DATABASE]
My Python script successfully calls our database and retrieves the data, and a function molds the data into a dict object (as I understand it, Elasticsearch takes data in JSON format).
Now I want to insert all of that data into Elasticsearch - I've been reading the Elastic docs, and there's a lot of talk about indexing that isn't really indexing, and I haven't found any API calls I can use to plug the data right into Elasticsearch. All of the documentation I've found so far concerns the use of Logstash, but since I'm not using Logstash, I'm kind of at a loss here.
If there's anyone who can help me out and point me in the right direction I'd appreciate it. Thanks
-Dan
You ingest data into Elasticsearch using the Index API; it is basically a request using the PUT method.
To do that with Python you can use elasticsearch-py, the official Python client for Elasticsearch.
But sometimes what you need is easier to do with Logstash, since it can extract the data from your database, format it using its many filters, and send it to Elasticsearch.
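A minimal sketch of the elasticsearch-py approach (the host, index name, and document shape are assumptions; depending on your client version the keyword for the document is body= or document=):

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch(["http://localhost:9200"])

# One of the dicts your script already builds from a database row.
doc = {"timestamp": "2019-06-01T12:00:00", "level": "INFO", "message": "user logged in"}

# Store ("index") the document; Elasticsearch creates the index if it doesn't exist yet.
es.index(index="app-logs", body=doc)

# For lots of documents, the bulk helper is much faster than indexing one at a time.
helpers.bulk(es, ({"_index": "app-logs", "_source": d} for d in [doc]))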
I deployed and ran my app on GAE a few hours ago. It still fails because it needs to order certain datastore items, and the index needed for that has still not been generated by GAE. So at the point of .order() it throws a NeedIndexError. How long is this going to take?
I've been doing this same procedure for ~10 GAE apps in the past and it has never, to my memory, been this slow. (OK, it has been slow...)
The page "Datastore Indexes" in the old console just says "You have not created indexes for this application.".
The new console says nothing. It just displays a "blue alert" as if I'm not waiting myself to death already. The message in the alert is:
Cloud Datastore queries are powered by indexes, scalable data structures that are updated in real time as property values change. Your project's datastore index configuration specifies the indexes it needs to support its queries. Cloud Datastore builds new indexes as needed when you deploy index configuration. You can inspect the ready state of your app's indexes using this console.
(i.e. a joke)
What am I supposed to do?
Update:
here's the index.yaml:
indexes:
# AUTOGENERATED
- kind: Mjquizinfo
  ancestor: yes
  properties:
  - name: version
    direction: desc
FWIW, in some cases (multi-module apps for example) the plain appcfg.py update used to deploy the app code might not update the index.yaml file.
Try specifically updating the index using appcfg.py update_indexes - you should be able to see the index info in the Developer Console right away (it may still take a while for indexing to be performed and become effective).
I'm not seeing much documentation on this. I'm trying to get an image uploaded onto the server from a URL. Ideally I'd like to keep things simple, but I'm in two minds as to whether using an ImageField is the best way, or whether it's simpler to just store the file on the server and display it as a static file. I'm not uploading any files, so I need to fetch them in. Can anyone suggest any decent code examples before I try to re-invent the wheel?
Given a URL, say http://www.xyx.com/image.jpg, I'd like to download that image to the server and put it into a suitable location after renaming it. My question is general, as I'm looking for examples of what people have already done. So far I only see examples relating to uploading images, but that doesn't apply. This should be a simple case, and I'm looking for a canonical example that might help.
This is for uploading an image from the user: Django: Image Upload to the Server
So are there any examples out there that just deal with the process of fetching an image and storing it on the server and/or in an ImageField?
Well, just fetching an image and storing it into a file is straightforward:
import urllib2

# open the destination in binary mode ('wb') since an image is binary data
with open('/path/to/storage/' + make_a_unique_name(), 'wb') as f:
    f.write(urllib2.urlopen(your_url).read())
Then you need to configure your Web server to serve files from that directory.
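If that directory is Django's media directory, here is a hedged sketch of the usual setup (these are the standard MEDIA_ROOT/MEDIA_URL settings; the static() helper serves them in development only, and in production you'd point Nginx/Apache at the directory instead):

# settings.py
MEDIA_ROOT = '/path/to/storage/'
MEDIA_URL = '/media/'

# urls.py (development only)
from django.conf import settings
from django.conf.urls.static import static

urlpatterns = [
    # ... your existing URL patterns ...
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)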
But this comes with security risks.
A malicious user could come along and type a URL that points nowhere. Or that points to their own evil server, which accepts your connection but never responds. This would be a typical denial of service attack.
A naive fix could be:
urllib2.urlopen(your_url, timeout=5)
But then the adversary could build a server that accepts a connection and writes out a line every second indefinitely, never stopping. The timeout doesn’t cover that.
So a proper solution is to run a task queue, also with timeouts, and a carefully chosen number of workers, all strictly independent of your Web-facing processes.
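For example, a rough sketch with Celery (assuming a Redis broker; the task name, time limits, and paths are arbitrary), so that a hung download kills a worker task instead of tying up a web process:

import urllib2
from celery import Celery

app = Celery("fetcher", broker="redis://localhost:6379/0")

@app.task(time_limit=15)  # hard limit: the worker aborts the task if the whole fetch hangs
def fetch_image(url, dest_path):
    data = urllib2.urlopen(url, timeout=5).read()  # connect timeout on top of the hard limit
    with open(dest_path, 'wb') as f:
        f.write(data)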
Another kind of attack is to point your server at something private. Suppose, for the sake of example, that you have an internal admin site that is running on port 8000, and it is not accessible to the outside world, but it is accessible to your own processes. Then I could type http://localhost:8000/path/to/secret/stats.png and see all your valuable secret graphs, or even modify something. This is known as server-side request forgery or SSRF, and it’s not trivial to defend against. You can try parsing the URL and checking the hostname against a blacklist, or explicitly resolving the hostname and making sure it doesn’t point to any of your machines or networks (including 127.0.0.0/8).
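A rough sketch of that resolve-and-check idea (written for Python 3; on its own this is not a complete SSRF defense - redirects, DNS rebinding and IPv6 all need handling too):

import socket
import ipaddress                    # stdlib on Python 3; a pip backport exists for Python 2
from urllib.parse import urlparse   # urlparse.urlparse on Python 2

def points_at_private_network(url):
    host = urlparse(url).hostname
    ip = ipaddress.ip_address(socket.gethostbyname(host))
    return ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved

# Refuse to fetch anything that resolves to your own machines/networks:
# if points_at_private_network(user_supplied_url): reject the request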
Then of course, there is the problem of validating that the file you receive is actually an image, not an HTML file or a Windows executable. But this is common to the upload scenario as well.