Recommendation system for frequently changing data in MongoDB - python

I have a website built with Node.js and MongoDB. Documents are structured something like this:
{
  price: 500,
  location: [40.23, 49.52],
  category: "A"
}
Now I want to create a recommendation system, so that when a user is viewing item "A" I can suggest similar items "B", "C" and "D".
The thing is, the collection of items changes relatively often. New items are created every hour and they only exist for about a month.
So my questions are:
What algorithm should I use? Cosine similarity seems to be the most suitable one.
Is there a way to create such a recommendation system with Node.js, or is it better to use Python/R?
When must the similarity score be calculated? Only once (when a new item is created), or should I recalculate it every time a user visits an item page?

What algorithm should I use? Cosine similarity seems to be the most suitable one.
No one can really answer this for you: what makes one product similar to another? This is 100% a product decision. It sounds like this is more of a pet side project, and in that case I'd say use whatever you'd like.
If this is not the case, I would assume the best recommendations would be based on purchase correlation, i.e. users who bought product "A" also bought (or looked at) product "B" the most, hence it should be the top recommendation. Obviously you can create a much more complex model in the future.
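If you do stick with cosine similarity over the fields in the document shown above (price, location, category), a minimal Python sketch of the scoring could look like the following. The sample values, the min-max scaling, and the one-hot category columns are illustrative assumptions, not a prescription:

import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical items pulled from MongoDB: price, [lat, lng], one-hot category (A, B, C)
items = np.array([
    [500, 40.23, 49.52, 1, 0, 0],
    [480, 40.25, 49.50, 1, 0, 0],
    [900, 41.10, 48.90, 0, 1, 0],
])

# Scale every feature to the same range so price does not dominate the score
scaled = MinMaxScaler().fit_transform(items)

# Pairwise cosine similarity; row i holds item i's scores against all items
scores = cosine_similarity(scaled)

# Indices of the most similar items to the first one, excluding itself
top = np.argsort(scores[0])[::-1][1:]
print(top, scores[0][top])

Since items only live for about a month, one reasonable pattern is to run something like this once when a new item is inserted and store the ids of its top matches on the document.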
Is there a way to create such a recommendation system with Node.js, or is it better to use Python/R?
If it's a basic rule-based system it can be done in Node with ease; for any more data-science-related approach it will be more natural to implement it in Python/R.
When must the similarity score be calculated? Only once (when a new item is created), or should I recalculate it every time a user visits an item page?
Again, it depends on what your score is, how many resources you can invest, what the scale is, etc.
As I mentioned before, it sounds like this is a personal project. If that is the case I would try to choose the simpler solution for all of these questions. Once you have the entire project up and running it will be easier to improve on.

Related

Trying to work out how to produce a synthetic data set using python or javascript in a repeatable way

I have a reasonably technical background and have done a fair bit of node development, but I’m a bit of a novice when it comes to statistics and a complete novice with python, so any advice on a synthetic data generation experiment I’m trying my hand at would be very welcome :)
I’ve set myself the problem of generating some realistic(ish) sales data for a bricks and mortar store (old school, I know).
I’ve got a smallish real-world transactional dataset (~500k rows) from the internet that I was planning on analysing with a tool of some sort, to provide the input to a PRNG.
Hopefully if I explain my thinking across a couple of broad problem domains, someone(s?!) can help me:
PROBLEM 1
I think I should be able to use the real data I have to either:
a) generate a probability distribution curve or
b) identify an ‘out of the box’ distribution that’s the closest match to the actual data
I’m assuming there’s a tool or library in Python or Node that will do one or both of those things if fed the data and, further, give me the right values to plug in to a PRNG to produce a series of data points that are not only distributed like the original data, but also fall within the same sort of ranges.
I suspect b) would be less expensive computationally and, also, better supported by tools - my need for absolute ‘realness’ here isn’t that high - it’s only an experiment :)
Which leads me to…
QUESTION 1: What tools could I use to do the analysis and generate the data points? As I said, my maths is ok, but my statistics isn't great (and the docs for the tools I’ve seen are a little dense and, to me at least, somewhat impenetrable), so some guidance on using the tool would also be welcome :)
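For what it's worth, one common combination here is pandas to load the data and scipy.stats to fit a candidate distribution (your option b) and then sample from it. A minimal sketch, assuming the source CSV has a UnitPrice column as in the sample further down; the file name, the choice of a log-normal distribution, and the sample size are illustrative assumptions:

import pandas as pd
from scipy import stats

df = pd.read_csv("transactions.csv")       # your ~500k-row source file
prices = df["UnitPrice"].dropna()
prices = prices[prices > 0]                # a log-normal fit needs strictly positive values

# Fit an "out of the box" distribution to the real data
shape, loc, scale = stats.lognorm.fit(prices, floc=0)

# Draw synthetic values from the fitted distribution; the seed keeps runs repeatable
synthetic = stats.lognorm.rvs(shape, loc=loc, scale=scale, size=10_000, random_state=42)
print(synthetic.min(), synthetic.mean(), synthetic.max())

The same fit-then-sample pattern would apply to whatever quantity you model for timestamps, e.g. the gaps between transactions.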
And then there’s my next, I think more fundamental, problem, which I’m not even sure how to approach…
PROBLEM 2
While I think the approach above will work well for generating timestamps for each row, I’m going round in circles a little bit on how to model what the transaction is actually for.
I’d like each transaction to be relatable to a specific product from a list of products.
Now the products don’t need to be ‘real’ (I reckon I can just use something like Faker to generate random words for the brand, product name etc), but ideally the distribution of what is being purchased should be a bit real-ey (if that’s a word).
My first thought was just to do the same analysis for price as I’m doing for timestamp and then ‘make up’ a product for each price that’s generated, but I discarded that for a couple of reasons: it might be consistent ‘within’ a produced dataset, but not ‘across’ data sets, and I imagine on largish sets it would double count quite a bit.
So my next thought was I would create some sort of lookup table with a set of pre-defined products that persists across generation jobs, but I’m struggling with two aspects of that:
I’d need to generate the list itself. I would imagine I could filter the original dataset to unique products (it has stock codes) and then use the spread of unit costs in that list to do the same thing as I would have done with the timestamp (i.e. generate a set of products that have a similar spread of unit cost to the original data and then Faker the rest of the data).
QUESTION 2: Is that a sensible approach? Is there something smarter I could do?
When generating the transactions, I would also need some way to work out which product to select. I thought maybe I could generate some sort of bucketed histogram to work out what the frequency of purchases was within a range of costs (say $0-1, $1-2, etc.). I could then use that frequency to define the probability that a given transaction's cost would fall within one of those ranges, and then randomly select a product whose cost falls within that range...
QUESTION 3: Again, is that a sensible approach? Is there a way I could do that lookup with a reasonably easy to understand tool (or at least one that’s documented in plain English :))
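As one hedged sketch of what that lookup could look like with fairly plain tools: bucket the real unit prices into $1-wide bins with numpy, turn the counts into probabilities, then for each synthetic transaction pick a bucket and a product whose cost falls inside it. The file names and the pre-generated products table are assumptions:

import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=0)

real_prices = pd.read_csv("transactions.csv")["UnitPrice"].dropna()
products = pd.read_csv("products.csv")     # assumed pre-generated lookup: product_id, item_cost

# Bucket the real prices ($0-1, $1-2, ...) and turn the counts into probabilities
bins = np.arange(0, float(real_prices.max()) + 1, 1.0)
counts, edges = np.histogram(real_prices, bins=bins)
probs = counts / counts.sum()

def pick_product():
    # Choose a price bucket with its observed frequency, then a product inside that bucket
    i = rng.choice(len(probs), p=probs)
    lo, hi = edges[i], edges[i + 1]
    candidates = products[(products.item_cost >= lo) & (products.item_cost < hi)]
    if candidates.empty:                   # fall back if no product lands in this bucket
        candidates = products
    return candidates.sample(1).iloc[0]

row = pick_product()
print(row.product_id, row.item_cost)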
This is all quite high level I know, but any help anyone could give me would be greatly appreciated as I’ve hit a wall with this.
Thanks in advance :)
The synthesised dataset would simply have timestamp, product_id and item_cost columns.
The source dataset looks like this:
InvoiceNo,StockCode,Description,Quantity,InvoiceDate,UnitPrice,CustomerID,Country
536365,85123A,WHITE HANGING HEART T-LIGHT HOLDER,6,12/1/2010 8:26,2.55,17850,United Kingdom
536365,71053,WHITE METAL LANTERN,6,12/1/2010 8:26,3.39,17850,United Kingdom
536365,84406B,CREAM CUPID HEARTS COAT HANGER,8,12/1/2010 8:26,2.75,17850,United Kingdom
536365,84029G,KNITTED UNION FLAG HOT WATER BOTTLE,6,12/1/2010 8:26,3.39,17850,United Kingdom
536365,84029E,RED WOOLLY HOTTIE WHITE HEART.,6,12/1/2010 8:26,3.39,17850,United Kingdom
536365,22752,SET 7 BABUSHKA NESTING BOXES,2,12/1/2010 8:26,7.65,17850,United Kingdom
536365,21730,GLASS STAR FROSTED T-LIGHT HOLDER,6,12/1/2010 8:26,4.25,17850,United Kingdom
536366,22633,HAND WARMER UNION JACK,6,12/1/2010 8:28,1.85,17850,United Kingdom

Python - Using pandas with reinforcement learning

I would like to create in Python some RL algorithm that would interact with a very big DataFrame representing stock prices. The algorithm would tell us: knowing all of the prices and price changes in the market, what would be the best places to buy/sell (minimizing loss, maximizing reward)? It has to look at the entire DataFrame each step (or else it wouldn't have the entire information from the market).
Is it possible to build such an algorithm (one that works relatively fast on a large df)? How should it be done? What should my environment look like, which algorithm (specifically) should I use for this type of RL, and which reward system? Where should I start?
I think you are a little confused here. What I think you want to do is check whether the stock price of a particular company will go up or not (or which company's stock price will shoot up), given that you already have a dataset regarding the problem statement.
About RL: it does not work on just any dataset; it is a technique that enables an agent to learn in an interactive environment by trial and error, using feedback from its own actions and experiences.
You can check this blog post for some explanation, so you don't get confused:
https://towardsdatascience.com/types-of-machine-learning-algorithms-you-should-know-953a08248861
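To make "interactive environment" a bit more concrete: such an environment typically exposes reset() and step(action) methods that the agent calls in a loop, and the reward comes back from each step rather than from a static table. A bare-bones, gym-style sketch over a price DataFrame might look like the following; the column name, the action encoding, and the toy reward are illustrative assumptions, not a working trading strategy:

import numpy as np
import pandas as pd

class PriceEnv:
    """Minimal illustrative environment that steps row by row through price data."""

    def __init__(self, df):
        self.prices = df["close"].to_numpy()   # assumed column name
        self.t = 0

    def reset(self):
        self.t = 0
        return self.prices[self.t]

    def step(self, action):
        # action: 0 = hold cash, 1 = hold a long position (illustrative encoding)
        prev = self.prices[self.t]
        self.t += 1
        price = self.prices[self.t]
        reward = (price - prev) if action == 1 else 0.0   # toy reward: gain while long
        done = self.t >= len(self.prices) - 1
        return price, reward, done

# Random agent over made-up prices, just to show the interaction loop
env = PriceEnv(pd.DataFrame({"close": np.linspace(100.0, 110.0, 50)}))
obs, done = env.reset(), False
while not done:
    obs, reward, done = env.step(np.random.randint(2))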

Using a Decision Tree to build a Recommendations Application

First of all, my apologies if I am not following some of the best practices of this site, as you will see, my home is mostly MSE (math stack exchange).
I am currently working on a project where I build a vacation recommendation system. The initial idea was somewhat akin to 20 questions: We ask the user certain questions, such as "Do you like museums?", "Do you like architecture", "Do you like nightlife" etc., and then based on these answers decide for the user their best vacation destination. We answer these questions based on keywords scraped from websites, and the decision tree we would implement would allow us to effectively determine the next question to ask a user. However, we are having some difficulties with the implementation. Some examples of our difficulties are as follows:
There are issues with granularity of questions. For example, to say that a city is good for "nature-lovers" is great, but this does not mean much. Nature could involve say, hot, sunny and wet vacations for some, whereas for others, nature could involve a brisk hike in cool woods. Fortunately, the API we are currently using provides us with a list of attractions in a city, down to a fairly granular level (for example, it distinguishes between different watersport activities such as jet skiing, or white water rafting). My question is: do we need to create some sort of hierarchy like:
nature-> (Ocean,Mountain,Plains) (Mountain->Hiking,Skiing,...)
or would it be best to simply include the bottom level results (the activities themselves) and just ask questions regarding those? I only ask because I am unfamiliar with exactly how the classification is done and the final output produced. Is there a better sort of structure that should be used?
Thank you very much for your help.
I think using a decision tree is a great idea for this problem. It might be an idea to group your granular activities, and for the "nature lovers" category list a number of different climate types: dry and sunny, coastal, forests, etc., and have subcategories within them.
For the activities, you could make a category called watersports, sightseeing, etc. It sounds like your dataset is more granular than you want your decision tree to be, but you can just keep dividing that granularity down into more categories on the tree until you reach a level you're happy with. It might be an idea to include images too, of each place and activity. Maybe even without descriptive text.
Bins and sub-bins are a good idea, as is the nature -> ocean_nature hierarchy.
I was thinking more about your problem last night; TripAdvisor would be a good source. What I would do is take the top 10 items on TripAdvisor and categorize them by type.
Or, maybe your tree narrows it down to 10 cities. You would rank those cities according to popularity or distance from the user.
I’m not sure how to decide which city would be best for watersports, etc. You could even have cities pay to be top of the list.
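To illustrate the decision-tree side in code, here is a tiny hedged sketch with scikit-learn over binary "does this destination offer X" features; the activities, destination labels, and sample values are made up purely for illustration, and the printed splits hint at which yes/no question is most informative to ask next:

import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Made-up training data: 1 = the destination offers the activity, 0 = it doesn't
X = pd.DataFrame({
    "hiking":     [1, 1, 0, 0],
    "jet_skiing": [0, 0, 1, 1],
    "museums":    [0, 1, 0, 1],
})
y = ["mountain", "mountain", "beach", "city"]

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# The learned splits suggest which question separates destinations best
print(export_text(tree, feature_names=list(X.columns)))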

Can python compare user input with variables in program?

This is my first post here; I hope it is easily readable and that my question gets answered! (:
First of all, to give you a bit of perspective, I wanted to create an efficiency calculator for the brand new game No Man's Sky. The economy in that game is pretty much the same as in real life: you can sell items at a much higher price than the components needed to create them. E.g. you can sell a rock for 5000€, or 30 rock parts for 30€ each. If you craft the rock from the rock parts your profit will be 5000 - 900 = 4100, right? (:
Here is the code.
What I want it to do is the following: the user enters a product, the program compares the price of the product if sold with the price of the components needed to craft it, and shows you the profit of doing so.
I have the following questions about it:
Is there a better way to save the data so I can use it later? (lines 1-16)
Is there any way to compare all the variables I will create (line 18), or do I have to create an if block for every product (lines 22-24)? What I mean is something like:
profit = products[input] - input_recipe
print profit
Since I want to check a lot of recipes, it would be a pain in the ass to write one for each, so I hope there's a better way to do it.
How you save the data and access it will really depend on how you want to handle your calculator. I would say the best way to handle this would be if there were an excel file or JSON file or something of the sort that is all inclusive of all materials and items of the game (you may have to be the one to make this or someone else may already have). In the event you have to put the list together yourself, it could be a long process and very annoying, so try to find a list somewhere you can download then open the file and parse the data as needed. You could put all the data in the code itself but that doesn't allow you to write code against the data with say a different language if you so desired.
As far as loops are concerned, I'm not sure what you mean by that? You have dictionaries for your data so there's no need to loop over every value right? Now if you are referring to taking in multiple user inputs, a loop wouldn't be a bad idea for command line:
continue_calculations = 'y'
while continue_calculations != 'n':
    # Do your logic here.
    continue_calculations = raw_input('Would you like to continue(y/n)?')
Of course if you are making a calculator you could look into GUI development, or web development if you want to make it into a site. PyQT is a handy little module to work in and there are some good tutorials for that: https://pythonprogramming.net/basic-gui-pyqt-tutorial/
Cheers,
About your first question: another way would be to use JSON format to store your data instead of three separate dictionaries. I mean something like:
data = {
    "elements": {"Th": 20.6, "Pu": 41.3},
    "alloys": {"Aronium": 1546.9, "Herox": 2877.5},
    "Products": {"Antimatter": 5232, "Warp Cell": 46750}
}
You could then look up, for example, the "Th" price by writing:
th_price = data['elements']['Th']
As for your second question, you could create a fourth dictionary containing the prices of all the possible recipes, which of course you have predefined - so you don't recompute them every time you need them and they're available for fast lookup. So you would write something like:
profit = products[input] - input_recipe[input]
print profit
where input_recipe would be your fourth dictionary with the recipe prices.
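Putting both suggestions together, a small Python 3 sketch could look like this (all prices and recipe costs below are made up, and input() replaces the Python 2 raw_input()):

# Nested dict of sale prices plus a dict of precomputed crafting costs (made-up numbers)
prices = {
    "elements": {"Th": 20.6, "Pu": 41.3},
    "products": {"Antimatter": 5232, "Warp Cell": 46750},
}
recipe_costs = {"Antimatter": 4100, "Warp Cell": 30000}

while True:
    name = input("Product (or 'q' to quit): ")
    if name == 'q':
        break
    if name not in prices["products"]:
        print("Unknown product")
        continue
    profit = prices["products"][name] - recipe_costs.get(name, 0)
    print("Profit for", name, ":", profit)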

Good algorithm to find themes in tweets ranked by follower counts?

I'm new to data mining and experimenting a bit.
Let's say I have N Twitter users, and what I want to find is the overall theme they're writing about (based on their tweets).
Then I want to give a higher weight to each theme if the user has more followers.
Then I want to merge themes if they're similar enough, but still retain the weighting by follower count.
So basically, a list of "important" themes ranked by authority (the user's follower count).
For instance, like news.google.com, but the ranking would be based on the Twitter followers of the users responsible for each theme.
I'd prefer something in python since that's the language I'm most familiar with.
Any ideas?
Thanks
EDIT:
Here's a good example of what I'm trying to do (but with diff data)
http://www.facebook.com/notes/facebook-data-team/whats-on-your-mind/477517358858
Basically, analyzing various data and their correlations to each other: work categories and each person's age, or word categories and friend count, as in this example.
Where would I begin to solve this and generate such graphs?
Generally speaking: R has some packages specifically directed at text mining and data mining, offering a wide range of techniques. I have no knowledge of that kind of package in Python, but that doesn't mean they don't exist. I just wouldn't implement it all myself; it's a bit more complicated than it looks at first sight.
Some things you have to consider :
define "theme" : Is that the tags they use? Do you group tags? Do you have a small list with a limited set, or is the set unlimited?
define "general theme" : Is that the most used theme? How do you deal with ties? If a user writes about 10 themes about as much, what then?
define "weight" : Is that equivalent to the number of users? The square root? Some category?
If you have a general idea about this, you can start using the tm package for extracting all the information in a workable format. The package is based on matrices, and metadata objects. These allow you to get weighted frequencies for the different themes, provided you have defined what you consider a theme. You can also use different weighting functions to obtain what you want. The manual is here. But please also visit crossvalidated.com for extra guidance if you're not sure about what you're doing. This is actually more a question about data mining than it is about programming.
I have no specific code, but I believe the methodology you want to employ is TF-IDF. It is explained here: http://en.wikipedia.org/wiki/Tf%E2%80%93idf and is used quite often to classify text.
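As a rough illustration of that idea in Python (the tweets, follower counts, and the choice to weight each tweet's TF-IDF scores by its author's follower count are all made up for the example), scikit-learn's TfidfVectorizer could get you started:

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Made-up tweets and follower counts, purely to show the mechanics
tweets = [
    "new python release is out",
    "python data mining is fun",
    "big election news tonight",
]
followers = np.array([15000, 300, 90000])

vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(tweets)          # rows = tweets, columns = terms

# Weight each tweet's term scores by its author's follower count, then sum per term
weighted = tfidf.multiply(followers[:, None]).sum(axis=0)
terms = vec.get_feature_names_out()
ranking = sorted(zip(terms, np.asarray(weighted).ravel()), key=lambda p: -p[1])
print(ranking[:5])                         # highest-weighted terms, a rough proxy for "themes"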
