I'm going to develop a 3D game in which the player walks through a maze in first-person perspective, collects things, and escapes from a monster. The game itself is very simple, but since it is not for entertainment but for a biological experiment, it has some specific features:
We will project the graphics onto a spherical screen with 3 projectors, so the graphics should have a fisheye transformation applied, and be easily further transformable (to deal with the blending between projectors).
There should be functionality to record data, like the path of the player and the time points when the monster appears. All events should be recordable.
The game program needs to interact with external devices via USB. For example, whenever the player presses a certain key, the program tells an Arduino board to do something (a sketch of this is below).
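For what it's worth, features 2 and 3 should be achievable in plain Python regardless of the engine chosen. Here is a minimal sketch, assuming the pyserial package; the serial port, event names, and CSV log format are my own illustrative choices:

    import csv, time
    import serial  # pyserial

    ser = serial.Serial('/dev/ttyUSB0', 9600)   # hypothetical Arduino port
    log = csv.writer(open('events.csv', 'w', newline=''))

    def record_event(name, **data):
        # timestamped event record, e.g. record_event('monster_appeared')
        log.writerow([time.time(), name, data])

    def on_key_pressed():
        record_event('key_pressed')
        ser.write(b'1')                          # trigger the Arduino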
From my investigation, I found three candidate tool chains for developing such a game:
Develop a mod on the Quake 3 engine + Fisheye Quake. The problem, I think, is that Quake 3 mods run in a virtual machine, so is it even possible to implement features 2 and 3 above?
Panda3D + FisheyeLens API
PyOpenGL. This is the most flexible way, but with the greatest workload, I think.
I'm quite familiar with C/C++/Python, but this is my first time developing a 3D game. My question is: which tool chain is the best fit for this project (or is there any other good option)? What problems would I encounter?
As there's no answer yet, let me give my own thoughts here. I have no experience in 3D development and don't know if these technologies would work. For some side reasons, I prefer Panda3D.
Please note that I'm still open to accepting other answers.
Why Panda3D?
It is well-documented. This is always an important factor when choosing a technology.
It's in Python, which is a plus for me.
It's more general-purpose (than Quake 3). Panda3D can also be used for robotics simulators etc., which would be useful for me. A small taste of the API is sketched below.
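As a taste of the API, here is a minimal Panda3D sketch that records the player's (camera) position every frame, touching on feature 2; the task name and log structure are my own assumptions:

    from direct.showbase.ShowBase import ShowBase

    class MazeApp(ShowBase):
        def __init__(self):
            ShowBase.__init__(self)
            self.path = []                              # recorded player path
            self.taskMgr.add(self.record, 'record-path')

        def record(self, task):
            # log (time, position) once per frame
            self.path.append((task.time, self.camera.getPos()))
            return task.cont

    MazeApp().run()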
Why not Quake3?
It runs in a virtual machine (QVM). I'm afraid it would be hard to record data or access external devices.
At first I thought of Quake 3 because its code is at least a classic worth reading, but I have since realized that Quake 3 cannot do much beyond FPS-like games. If it's worth reading, I can just read it; I don't have to work with it.
It's not that flexible for transforming the rendered graphics. It's a game engine built for a purpose, not a general graphics processing library.
Why not OpenGL?
At this moment I think Panda3D is low-level enough. If it proves not to be as flexible as I need, I will turn to OpenGL then.
As a mechanical engineer, how do I start learning Python, and what do I need to learn in Python for machine learning in the mechanical field? Can anyone suggest the best resources?
Well, there are plenty of free courses on YouTube. Personally, I am not in the machine learning field.
However, to get started in Python, I would suggest the following Python video: https://www.youtube.com/watch?v=rfscVS0vtbw
This is the same video I used when I first got started in Python.
Once you have the basics down, I would suggest you do some beginner projects just to brush up on the skills you have learned. Some of the ones I recommend for total beginners are programs like Tic-Tac-Toe, Hangman, and random code generators, or even a program that takes a string and turns it into a simple code (like making every letter "a" into "b" and "b" into "c"; see the sketch below). Needless to say, there are many, many simple beginner projects. Do a couple of these (around 3-4). This will get you used to Python (remember, you are just scratching the surface).
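As an illustration, the letter-shifting project can be as small as this (a minimal sketch; the function name is my own):

    def shift_letters(text):
        # turn 'a' into 'b', 'b' into 'c', ..., with 'z' wrapping to 'a'
        out = []
        for ch in text:
            if ch.isalpha():
                base = 'a' if ch.islower() else 'A'
                out.append(chr((ord(ch) - ord(base) + 1) % 26 + ord(base)))
            else:
                out.append(ch)
        return ''.join(out)

    print(shift_letters('abz'))  # -> 'bca'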
Next, I would suggest you take a dive into the numpy and pandas modules in Python. These two modules are very important for handling data (and that is critical for machine learning). I am not proficient in numpy or pandas, but I did look into them at one point. For numpy, I watched the following video: https://www.youtube.com/watch?v=QUT1VHiLmmI
and for pandas I watched videos by the YouTube channel named Corey Schafer that were about pandas. A small taste of both modules is sketched below.
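To give a feel for what these modules do, here is a minimal sketch (the column names are invented for illustration):

    import numpy as np
    import pandas as pd

    a = np.array([1.0, 2.0, 3.0])
    print(a * 10)                        # elementwise math: [10. 20. 30.]

    df = pd.DataFrame({'rpm': [1000, 2000], 'torque': [50, 45]})
    print(df['torque'].mean())           # column statistics: 47.5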
Now, if you do not have knowledge of the math behind machine learning, you have one of two choices. Some of the machine learning tutorials I am suggesting will let you make progress in learning machine learning, but you will definitely not understand the underlying concepts (you will know what each line of code does and what each block of code is meant to be, but you will not understand the why). You can choose to skip this next step, but it is important to know that understanding the underlying mechanisms of something, rather than just the surface, is critical for success (as with everything else). So, I would strongly suggest you also look into machine learning courses that teach theory rather than just practice: https://www.youtube.com/watch?v=GwIo3gDZCVQ
It is important for me to point out that these resources more than likely will not be enough; as you watch these videos, you will end up having questions that they may not go over, so you will have to do a little bit of research on the internet yourself as well.
After you get those down, you can then actually look into Machine Learning with Python.
I believe there are two main ways you can practice machine learning: TensorFlow and PyTorch. I am not aware of the primary differences between the two, but I do know that they are quite popular and have plenty of documentation.
Again, same technique as before: go on YouTube and look for courses about Machine Learning. Here are a few:
A very, very long tutorial (~25 hrs) on PyTorch: https://www.youtube.com/watch?v=V_xro1bcAuA
A shorter tutorial on PyTorch (~5 hrs): https://www.youtube.com/watch?v=c36lUUr864M
A 2-part TensorFlow Course: https://www.youtube.com/watch?v=tpCFfeUEGs8 and https://www.youtube.com/watch?v=ZUKz4125WNI&t=0s
Also, I would like to add that along with PyTorch and TensorFlow there is also Keras, but I believe it may not be as popular and/or well-documented as the other two. To make the frameworks less abstract, a tiny PyTorch sketch follows.
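Here is a minimal PyTorch sketch that fits a straight line; the data is made up purely for illustration:

    import torch

    # made-up data: y = 2x + 1 plus a bit of noise
    x = torch.linspace(0, 1, 100).unsqueeze(1)
    y = 2 * x + 1 + 0.05 * torch.randn_like(x)

    model = torch.nn.Linear(1, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = torch.nn.MSELoss()

    for _ in range(500):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

    print(model.weight.item(), model.bias.item())  # roughly 2 and 1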
Also, these are just my suggestions for getting started, from the little knowledge I have about machine learning.
The common thing about beginning anything is that, no matter what it is, it generally starts with scratching the surface of the basics and sitting down and watching tutorials for long hours until you get the hang of it.
I have a project written in IDL (Interactive Data Language) that is used to produce a near-real-time model of the ionosphere by assimilating a heap of different data inputs. IDL is not a great language for this, but the project is written in it mainly because of legacy code. It is written in OO style despite the relatively limited object environment in IDL.
The scope of the next generation of this project is much larger and will require much more computing grunt. IDL has limited support for multi-threading and no support for parallel execution on distributed-memory systems. The original plan was to write the next-generation code in C++ using MPI to parallelise it; however, I have recently started learning Python and am very impressed with the ease of use and the ability to rapidly develop and maintain code. I am now considering writing the high-level parts of this project in Python and using C extensions when/if required to improve the optimisation of the core number-crunching parts.
Since I'm new to Python, it won't be immediately obvious to me where Python is likely to be slow compared to a C version (and I'll also probably do things sub-optimally in Python until I learn its idiosyncrasies). So I'm thinking of basically planning out the whole project as if it were to be done entirely in Python, writing the code, profiling and optimising repeatedly until I can't make any more improvements, and then looking to replace the slowest parts with C extensions.
Is this a good approach? Does anyone have any tips for developing this kind of project? I'll be looking to utilise as many existing well-optimised libraries as possible (such as ScaLAPACK), which may also reduce the need to roll my own C-based extensions for the number crunching.
Python is especially slow when you do a lot of looping, especially nested loops:

    for i in x:
        for j in y:
            ...
When it comes to computationally intensive problems, 99% of them can be solved by doing vectorised calculations with numpy instead of looping, e.g.:
    import numpy as np

    x = np.arange(1000)        # numbers from 0 to 999
    y = np.arange(1000, 2000)  # numbers from 1000 to 1999

    # slow:
    for i in range(len(x)):
        y[i] += x[i]

    # fast:
    y += x
For many scientific problems there are binary libraries, written in Fortran or C(++), that are available from Python. This makes life really easy.
If you come to a point where this is not possible, I'd turn to Cython to easily implement the core parts in C without writing C.
I am a machine learning beginner. I'd like to learn the basics by teaching computers to play checkers. Actually, the games I want to learn are Domineering and Hex. My language of choice is Python.
These games are pretty easy to store, and their rules are much simpler than chess's, but there aren't too many people who play them. If I can get this idea off the ground, it would be great for experimenting with combinatorial game theory, to see if a computer can find the optimal move.
I found an old paper on checkers from the 1960s by a researcher at IBM. Originally I had asked about neural networks, but the answers say they are the wrong tool.
EDIT: It could be that machine learning is not the right strategy. In that case, what goes wrong, and what is a better way?
You might want to take a look at the following: Chinook, Upper Confidence Trees, Reinforcement Learning, and Alpha-Beta pruning. I personally like to combine Alpha-Beta pruning and Upper Confidence Trees (UCT) for perfect-information games where each player has fewer than 10 reasonable moves. You can use Temporal Difference Learning to create a position evaluation function. Game AI is probably the most fun way to learn machine learning.
For links to all of these topics, click on
http://artent.net/blog/2012/09/26/checkers-and-machine-learning/
(I was not able to include more links because the stack overflow software considers me a newbie!)
Get the book "Machine Learning" by Tom Mitchell (McGraw-Hill) and read the first chapter. It's extremely well written, and the first chapter will teach you enough to make a program that plays checkers. Personally, I made a program that plays five-in-a-row on miniclip.com, also in Python.
http://www.amazon.com/Learning-McGraw-Hill-International-Editions-Computer/dp/0071154671
When playing checkers, you seek to gain an advantage over your opponent by taking his or her pieces and crowning your own. Losing your pieces and allowing your opponent to crown his or her pieces is not desirable, so you avoid doing it.
Board game engines usually revolve around a position evaluation function. For checkers, my first guess would be something like this:
    score = number of allies - number of opponents
            + 3 * number of crowned allies - 3 * number of crowned opponents
Given a board, this function will return the score of the board. The higher the score, the better your position. The lower the score, the worse your position.
To make a naive checkers "engine", all you need to do is find the best move given a board position, which is just searching through all immediate legal moves and finding the one that maximizes your score.
Your engine won't think ahead more than one move, but it will be able to play against you somewhat.
The next step would be to give your engine the ability to plan ahead, which essentially means predicting your opponent's responses. To do that, just find your opponent's best move (here comes recursion) and subtract its score from yours. A small sketch of this recursion is below.
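That recursion is essentially negamax. Here is a minimal Python sketch; `legal_moves`, `apply_move`, and `evaluate` are hypothetical helpers you would write for your game, and `evaluate` must score the board from the point of view of the player to move:

    def best_score(board, depth):
        # score `board` for the player to move, looking `depth` plies ahead
        moves = legal_moves(board)                 # hypothetical helper
        if depth == 0 or not moves:
            return evaluate(board)                 # the evaluation function above
        # your best move minimizes the opponent's best reply, hence the minus
        return max(-best_score(apply_move(board, m), depth - 1) for m in moves)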
For a paper, I want to argue why I used Python for the implementation of my algorithm. Besides the typical arguments that it is fast (using suitable libraries) and that the algorithm is easy to implement with it, I thought maybe there are some big HPC projects that use it.
Does anyone know of a famous project that uses Python for large parallel calculations, ideally with a paper I can cite?
To be honest, as great a language as Python is, it wouldn't be a suitable environment for scientific computing, and in particular high-performance computing, if those libraries weren't available. So you can see Python as one piece of a larger puzzle, much as MATLAB can be.
The two key reasons to use Python for scientific or high-performance computing, then, are the convenient interfaces to software packages written in other languages, and the need for fast turnaround on a project. Commonly, both issues arise at the same time.
The classic example of this is the paper "Feeding a Large-scale Physics Application to Python" by David M. Beazley, which combines performance-intensive C++ with Python using SWIG.
If you're looking for something very current, there is a new paper, "A New Modelling System for Seasonal Streamflow Forecasting Service of the Bureau of Meteorology, Australia", by Daehyok Shin et al., that is due to be presented at MODSIM2011. I saw the first author speak at the Melbourne Python Users Group about how IPython was being used as a mechanism for bridging high-performance Fortran models and HDF5 data in such a way that even non-programmers could make effective contributions to a larger scientific program.
Check out the Python success stories page on Python.org.
Blender is scriptable in Python (much of its interface and tooling is built on it), which is quite impressive given what it can do. If you're not impressed by testing it, you should watch some of the shorts people have made using it. Less impressive, but still notable: Ubuntu Software Center and BitTorrent are written in Python, and Battlefield 2 uses a good chunk of Python.
I need to build a heavy-duty molecular dynamics simulator. I am wondering if Python+numpy is a good choice. This will be used in production, so I wanted to start with a good language. I am also wondering if I should rather start with a functional language like e.g. Scala. Do we have enough library support for scientific computation in Scala? Or is there any other language/paradigm combination you think is good, and why? If you have actually built something in the past and are talking from experience, please mention it, as it will help me collect data points.
Thanks much!
The high-performing MD implementations tend to be decidedly imperative (as opposed to functional), with big arrays of data trumping object-oriented design. I've worked with LAMMPS, and while it has its warts, it does get the job done. A perhaps more appealing option is HOOMD, which has been optimized from the beginning for Nvidia GPUs with CUDA. HOOMD doesn't have all the features of LAMMPS, but the interface seems a bit nicer (it's scriptable from Python) and it's very high performance.
I've actually implemented my own MD code a couple times (Java and Scala) using a high level object oriented design, and have found disappointing performance compared to the popular MD implementations that are heavily tuned and use C++/CUDA. These days, it seems few scientists write their own MD implementations, but it is useful to be able to modify existing ones.
I believe that most highly performant MD codes are written in native languages like Fortran, C or C++. Modern GPU programming techniques have also been finding favour recently.
A language like Python allows for much more rapid development than native code. The flip side is that the performance is typically worse than for compiled native code.
A question for you: why are you writing your own MD code? There are many, many libraries out there. Can't you find one to suit your needs?
Why would you do this? There are many good, freely available molecular dynamics packages out there you could use: LAMMPS, Gromacs, NAMD, and HALMD all come immediately to mind (along with less freely available ones like CHARMM, AMBER, etc.). Modifying any of these to suit your purpose is going to be vastly easier than writing your own, and any of these packages, with thousands of users and dozens of contributors, is going to be better than whatever you'd write yourself.
Python+numpy is going to be fine for prototyping (see the sketch below), but it's going to be vastly slower (yes, even with numpy linked against fast libraries) than C/C++/Fortran, which is what all the others use. Unless you're using a GPU, in which case all the hard work is done in kernels written in C/C++ anyway.
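For scale, prototyping in numpy looks something like this: a minimal, illustrative Lennard-Jones energy for N particles, vectorised over all pairs (O(N^2) memory, so only for small systems; the function and parameter names are my own):

    import numpy as np

    def lj_energy(pos, eps=1.0, sigma=1.0):
        # total Lennard-Jones energy for an (N, 3) array of positions
        diff = pos[:, None, :] - pos[None, :, :]   # (N, N, 3) pairwise displacements
        r2 = (diff ** 2).sum(axis=-1)              # squared pair distances
        i, j = np.triu_indices(len(pos), k=1)      # count each pair once
        inv6 = (sigma ** 2 / r2[i, j]) ** 3        # (sigma/r)^6
        return np.sum(4.0 * eps * (inv6 ** 2 - inv6))

    pos = np.random.rand(100, 3) * 10.0            # 100 particles in a 10x10x10 box
    print(lj_energy(pos))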
Another alternative if you want to use Python is to take a look at OpenMM:
https://simtk.org/home/openmm
It's a molecular dynamics API that has many of the basic elements you need (integrators, thermostats, barostats, etc.) and supports running on the CPU via OpenCL and on the GPU via CUDA and OpenCL. It has a Python wrapper that I've used before, which basically mimics the underlying C API calls. It's been incorporated into Gromacs and MDLab, so you have some examples of how to integrate it if you're really dead set on building something from (semi-)scratch.
However, as others have said, I highly recommend taking a look at NAMD, Gromacs, HOOMD, LAMMPS, DL_POLY, etc. to see if one fits your needs before you embark on re-inventing the wheel.