I just got enrolled in a program that starts with research on new network protocols, and my first assignment is to learn about discrete-event simulation.
Two books were recommended:
Simulating Computer Systems: Techniques and Tools by Myron H. MacDougall
Simulation Model Design and Execution: Building Digital Worlds by Paul Fishwick
Both books use tools that I won't be using in particular, but I was told they are good for learning the basics of discrete-event simulation.
As it happens, MacDougall's book isn't really available in any store except Amazon.com, and it would take 2 months to deliver to my address. And Fishwick's book would cost a fortune that I'm not willing to spend right now.
So I'm asking: which books used today to learn discrete-event simulation are similar to those?
P.S.: I will be using the SimPy simulation tool based on Python.
There are many excellent resources for learning about discrete-event simulation. Probably the top selling book of the last 35 years has been "Simulation Modeling and Analysis" by Law. Other fine choices include "Discrete-Event System Simulation" by Banks, Carson, & Nelson, "Principles of Discrete-Event Simulation" by Fishman, "Discrete-Event Simulation: A First Course" by Leemis & Park, or "Graphical Simulation Modeling and Analysis Using SIGMA for Windows" by Schruben, to name a few.
Another alternative would be to go to the Winter Simulation Conference Archive, pick any year, and check out the articles in either the introductory or advanced tutorials. I'd particularly recommend Schriber's Inside Discrete-Event Simulation Software: How It Works and Why It Matters series, and Fundamentals of Simulation Modeling by Sanchez.
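While you track down the books, the core mechanism every one of them describes fits in a few lines of Python: a simulation clock plus a priority queue of pending events. This is an illustrative sketch (the class and event names are made up, not from any particular text) of what SimPy and similar tools do under the hood:

```python
import heapq

class Simulator:
    """Minimal discrete-event simulation core: a clock plus a
    priority queue of (time, seq, callback) events."""
    def __init__(self):
        self.now = 0.0
        self._events = []
        self._seq = 0  # tie-breaker so equal-time events keep insertion order

    def schedule(self, delay, callback):
        heapq.heappush(self._events, (self.now + delay, self._seq, callback))
        self._seq += 1

    def run(self):
        # repeatedly advance the clock to the next event and fire it
        while self._events:
            self.now, _, callback = heapq.heappop(self._events)
            callback()

# Toy model: packets arriving every 2 time units, each taking 3 to process
log = []
sim = Simulator()

def arrival(i):
    log.append(("arrive", i, sim.now))
    sim.schedule(3.0, lambda: log.append(("done", i, sim.now)))
    if i < 2:
        sim.schedule(2.0, lambda: arrival(i + 1))

sim.schedule(0.0, lambda: arrival(0))
sim.run()
# log now holds arrivals and completions interleaved in time order
```

SimPy wraps this same event loop in a process/generator API, so understanding the bare version makes the library much easier to follow.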
For those of you who still stumble upon this question while looking for resources, apart from all the books mentioned in pjs's answer (out of which I found "Discrete-Event System Simulation" by Banks, Carson, Nelson & Nicol and "Simulation Modeling and Analysis" by Law & Kelton to be most helpful) 'Simulation' by Ross is another very useful resource.
I'd like to recommend my own book, which focuses on designing and implementing simulation models, unlike most classical "Discrete-Event System Simulation" textbooks, which focus on statistics and operations research.
Discrete Event Simulation Engineering - How to design discrete event simulations with DPMN and implement them with OESjs, Simio and AnyLogic
It's still a draft version, the HTML version is Open Access (free).
I would like to create in Python an RL algorithm that interacts with a very big DataFrame representing stock prices. The algorithm would tell us: knowing all of the prices and price changes in the market, what would be the best places to buy/sell (minimizing loss, maximizing reward)? It has to look at the entire DataFrame each step (or else it wouldn't have the complete information from the market).
Is it possible to build such an algorithm (one that works relatively fast on a large DataFrame)? How should it be done? What should my environment look like, which algorithm (specifically) should I use for this type of RL, and which reward system? Where should I start?
I think you are a little confused here. What I think you want to do is check whether the stock price of a particular company will go up, or which company's stock price will shoot up, given a dataset for that problem.
As for RL: it does not work on a static dataset. It's a technique that enables an agent to learn in an interactive environment by trial and error, using feedback from its own actions and experiences.
You can check this blog post for an explanation, so you don't get confused:
https://towardsdatascience.com/types-of-machine-learning-algorithms-you-should-know-953a08248861
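To make the distinction concrete: instead of handing an agent the full DataFrame at once, RL work typically wraps the price series in an environment the agent interacts with step by step. This is a toy sketch with a made-up reward scheme (not a recommended trading setup); the class and action encoding are invented for illustration:

```python
import numpy as np

class PriceEnv:
    """Toy trading environment: the agent sees one price at a time
    and earns the price change while it holds a position."""
    def __init__(self, prices):
        self.prices = np.asarray(prices, dtype=float)
        self.t = 0
        self.position = 0  # 0 = flat, 1 = long

    def reset(self):
        self.t = 0
        self.position = 0
        return self.prices[0]

    def step(self, action):
        # action: 0 = hold, 1 = flip position (buy if flat, sell if long)
        if action == 1:
            self.position = 1 - self.position
        reward = self.position * (self.prices[self.t + 1] - self.prices[self.t])
        self.t += 1
        done = self.t == len(self.prices) - 1
        return self.prices[self.t], reward, done

env = PriceEnv([100, 101, 103, 102])
obs = env.reset()
obs, r, done = env.step(1)  # buy at 100; price rises to 101, reward 1.0
```

An agent (e.g. tabular Q-learning or a DQN) would then learn from many episodes of these step interactions, rather than from the table itself.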
Lately I am doing research with the purpose of unsupervised clustering of a huge text database. First I tried bag-of-words and then several clustering algorithms, which gave me a good result, but now I am trying to move to a doc2vec representation and it doesn't seem to be working for me: I cannot load a prepared model and work with it, and training my own doesn't produce any useful result.
I tried to train my model on 10k texts
model = gensim.models.doc2vec.Doc2Vec(vector_size=500, min_count=2, epochs=100,workers=8)
(around 20-50 words each) but the similarity score which is proposed by gensim like
sims = model.docvecs.most_similar([inferred_vector], topn=len(model.docvecs))
is working much worse than the same query with bag-of-words on my model.
By much worse I mean that identical or almost identical texts have similarity scores comparable to texts that don't have any connection I can think of. So I decided to use a model from "Is there pre-trained doc2vec model?" to get a pretrained model which might have more connections between words. Sorry for the somewhat long preamble, but the question is: how do I plug it in? Can someone provide some ideas how, using a loaded gensim model from https://github.com/jhlau/doc2vec, I can convert my own dataset of texts into vectors of the same length? My data is preprocessed (stemmed, no punctuation, lowercase, no nltk.corpus stopwords) and I can deliver it from a list, DataFrame, or file if needed. The code question is how to pass my own data to a pretrained model. Any help would be appreciated.
UPD: outputs that make me feel bad
Train Document (6134): «use medium paper examination medium habit one
week must chart daily use medium radio television newspaper magazine
film video etc wake radio alarm listen traffic report commuting get
news watch sport soap opera watch tv use internet work home read book
see movie use data collect journal basis analysis examining
information using us gratification model discussed textbook us
gratification article provided perhaps carrying small notebook day
inputting material evening help stay organized smartphone use note app
track medium need turn diary trust tell tell immediately paper whether
actually kept one begin medium diary soon possible order give ample
time complete journal write paper completed diary need write page
paper use medium functional analysis theory say something best
understood understanding used us gratification model provides
framework individual use medium basis analysis especially category
discussed posted dominick article apply concept medium usage expected
le medium use cognitive social utility affiliation withdrawal must
draw conclusion use analyzing habit within framework idea discussed
text article concept must clearly included articulated paper common
mistake student make assignment tell medium habit fail analyze habit
within context us gratification model must include idea paper»
Similar Document (6130, 0.6926988363265991): «use medium paper examination medium habit one week must chart daily use medium radio
television newspaper magazine film video etc wake radio alarm listen
traffic report commuting get news watch sport soap opera watch tv use
internet work home read book see movie use data collect journal basis
analysis examining information using us gratification model discussed
textbook us gratification article provided perhaps carrying small
notebook day inputting material evening help stay organized smartphone
use note app track medium need turn diary trust tell tell immediately
paper whether actually kept one begin medium diary soon possible order
give ample time complete journal write paper completed diary need
write page paper use medium functional analysis theory say something
best understood understanding used us gratification model provides
framework individual use medium basis analysis especially category
discussed posted dominick article apply concept medium usage expected
le medium use cognitive social utility affiliation withdrawal must
draw conclusion use analyzing habit within framework idea discussed
text article concept must clearly included articulated paper common
mistake student make assignment tell medium habit fail analyze habit
within context us gratification model must include idea paper»
This looks perfectly OK, but looking at other outputs:
Train Document (1185): «photography garry winogrand would like paper
life work garry winogrand famous street photographer also influenced
street photography aim towards thoughtful imaginative treatment detail
referencescite research material academic essay university level»
Similar Document (3449, 0.6901006698608398): «tang dynasty write page
essay tang dynasty essay discus buddhism tang dynasty name artifact
tang dynasty discus them history put heading paragraph information
tang dynasty discussed essay»
shows us that the similarity score between two practically identical texts (the most similar pair in the system) and two completely distinct texts is almost the same, which makes it hard to do anything useful with the data.
To get the most similar documents I use
sims = model.docvecs.most_similar([inferred_vector], topn=len(model.docvecs))
The models from https://github.com/jhlau/doc2vec are based on a custom fork of an older version of gensim, so you'd have to find/use that to make them usable.
Models from a generic dataset (like Wikipedia) may not understand the domain-specific words you need, and even where words are shared, the effective senses of those words may vary. Also, to use another model to infer vectors on your data, you should ensure you're preprocessing/tokenizing your text in the same way as the training data was processed.
Thus, it's best to use a model you've trained yourself – so you fully understand it – on domain-relevant data.
10k documents of 20-50 words each is a bit small compared to published Doc2Vec work, but might work. Trying to get 500-dimensional vectors from a smaller dataset could be a problem. (With less data, fewer vector dimensions and more training iterations may be necessary.)
If your results on your self-trained model are unsatisfactory, there could be other problems in your training and inference code (not yet shown in your question). It would also help to see more concrete examples/details of how your results are unsatisfactory, compared to a baseline (like the bag-of-words representations you mention). If you add these details to your question, it might be possible to offer other suggestions.
Context: the idea is to apply different loads to a sample/specimen made of woven materials/fabrics and simulate its behavior computationally.
Execution: there are a number of tasks I am required to do. At this point I am only looking for suggestions, since I am a beginner in Python (not in programming). I want to create a woven pattern used in textile manufacturing (atlas weaving), and this pattern is to be described using lines. Each line is to have material properties based on the material/matrix ratio of the fibres.
Recommendation: what Python libraries should I look at to accomplish this?
I am looking for datasets and tutorials which specifically target business data analysis issues. I know about Kaggle, but its main focus is on machine learning and associated problems. It would be great to know a blog or data dump regarding data analysis issues, or maybe a good read/book.
The correct answer to this all depends on how comfortable you currently are with machine learning. Business data analysis and prediction are so closely aligned with machine learning that most developers consider them a subset that more general ML skills will cover. So I will suggest two things. If you have no experience in ML, launch into the Data Science (Python) career track of DataCamp - it is excellent! This will help you get to grips with the overall ideas of cleaning and processing your data, as well as supervised and unsupervised learning.
If you are already comfortable with all that, I would suggest looking at pbpython.com - this site covers Python for business analysis use entirely and suggests a plethora of books specialized for certain topics, as well as covering individual topics itself very well.
I have a project proposal for music lovers who have no knowledge of audio processing. I think the project is interesting, but I don't have a clear picture of how to implement it.
The project proposal: some people like singing, but they cannot find appropriate musical accompaniment (background music). People who can play guitar may sing while playing guitar (the rhythm provided by the guitar is the background music). The project is to achieve a result similar to playing guitar for people who are singing.
I think to implement this project, the following components are required:
Musical knowledge (how a guitar plays as background music; maybe a simple pattern will work)
Signal/audio processing
Key detection
Beat detection
Chord matching
Is there any other component I missed to achieve my purpose? Are there any libraries that can help me? The project is supposed to be completed in 1.5 months. Is that possible? (I just expect it to work like a guitar beginner playing background music.) For development languages, I will not use C/C++. Currently my favorite is Python, but I could use another programming language as long as it helps simplify the implementation process.
I have no musical background and have only studied very basic audio processing. Any suggestions or comments are appreciated.
Edited Information:
I tried searching for auto accompaniment, and there is some software for it, but I didn't find any open-source project. I want to know the details of how it processes audio information. If you know any open-source project about it, please share your knowledge; thank you.
You might start by considering what a guitarist would have to do to successfully accompany a singer in a situation where they have no prior knowledge of the key, chord progression, or rhythm of the song (not to mention its structure, style, etc.).
Doing this in real-time in a situation where the accompanist (human or computer) has not heard the song before will be difficult, as it will take some time to analyse what's being sung in order to make appropriate musical choices about the accompaniment. A guitarist or other musician having this ability in the real world would be considered highly skilled.
It sounds like a very challenging project for 1.5 months if you have no musical background. 'maybe simple pattern will work' - maybe, but there are a huge number of simple patterns possible!
Less ambitious projects might be:
- record a whole song and analyse it, then render a backing (still a lot of work!)
- create a single harmony line or part, in the same way that vocal harmoniser effects do
- generate a backing based on a chord progression input by the user
Edit in reply to your first comment:
If you want to generate a full accompaniment, you will need to (as you say) deal with both the key and chord progression, and the timing (including the time signature and detecting which beat of the bar is 'beat 1').
Getting this level of timing information may be difficult, as beat detection from voice alone is not going to be possible using the standard techniques used to extract the beat from a song (looking for amplitude peaks in certain frequency ranges).
You might still get good results by not calculating timing at all, and simply playing your chords in time with the start of the sung notes (or a subset of them).
All you would then need to do is:
- detect the notes. This post is about detecting pitch in Python: Python frequency detection. Amplitude detection is more straightforward.
- come up with an algorithm for working out the root note of the piece (and - more ambitiously - places where it changes). In some cases it may be hard to discern from the melody alone. You could start by assuming that the first note or most common note is the root.
- come up with an algorithm for generating a chord progression (do a web search for 'harmonising a melody'). Obviously there is no objectively right or wrong way to do this and you will likely only be able to do this convincingly for a limited range of styles. You might want to start by assuming a limited subset of chords, e.g. I, IV, V. These should work on most simple 'nursery rhyme' style tunes.
Of course if you limit yourself to simple tunes that start on beat one, you might have an easier time working out the time signature. In general I think your route to success will be to try to deal with the easy cases first and then build on that.
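To give a feel for the note-detection step, here is a very rough autocorrelation-based pitch detector in NumPy. This is a simplified sketch on a pure test tone; real singing would additionally need windowing, voiced/unvoiced detection, and octave-error correction:

```python
import numpy as np

def detect_pitch(signal, sample_rate):
    """Very rough fundamental-frequency estimate via autocorrelation:
    find the first strong peak after the zero-lag maximum."""
    signal = signal - np.mean(signal)
    corr = np.correlate(signal, signal, mode="full")
    corr = corr[len(corr) // 2:]          # keep non-negative lags only
    rising = np.diff(corr) > 0
    start = np.argmax(rising)             # first lag past the zero-lag peak
    peak = start + np.argmax(corr[start:])
    return sample_rate / peak

# A pure 440 Hz tone (A4) as a stand-in for a sung note
sr = 22050
t = np.arange(4096) / sr
tone = np.sin(2 * np.pi * 440.0 * t)
print(detect_pitch(tone, sr))  # close to 440 (lag quantisation shifts it slightly)
```

The detected lag is an integer number of samples, so the estimate is quantised; parabolic interpolation around the peak is the usual refinement.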