I have a lot of automated test cases in Robot Framework and, consequently, more and more keywords. It's becoming difficult for me to keep them organized.
My question is whether I can collect my keywords into a library. If this is possible, how can I do it?
Thank you.
Marta
This is how you create libraries - Creating test libraries.
However, moving keywords into a library will not bring order to your system. You will only move the disorder to another place.
Keeping your test scripts maintainable is, for the most part, a matter of giving your work a clear structure. This applies to Robot Framework as much as it does to any other language.
In Robot Framework we use Resource Files to store keywords that we want to reuse across multiple Test Case files. Through these links you should be able to learn more about how to do this. You can also import Resource Files into other Resource Files, chaining them together.
As for what to put in these files, that is often a matter of personal preference. However, typically adhering to development principles like DRY, Separation of Concerns and, most importantly, Common Sense works best.
I'd advise adhering to principles instead of to a fixed structure. Separate Data from Process logic, abstract the UI away from your Process logic, and model your process logic as closely to your Business Processes as possible.
As for converting keywords to Python code: if the logic in your resource files means you need a lot of keywords to automate a specific feature, then perhaps this makes sense. But keep in mind that, for maintainability, you will then depend more heavily on the Python skills in your organisation.
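To illustrate what such a library can look like, here is a minimal sketch of a Python keyword library for Robot Framework (the class and keyword names are made up for this example); each public method becomes a keyword.

```python
# calculator_keywords.py -- a minimal, hypothetical Robot Framework keyword library.
class CalculatorKeywords:
    ROBOT_LIBRARY_SCOPE = "TEST CASE"  # a fresh instance for every test case

    def __init__(self):
        self._result = 0

    def add_numbers(self, *numbers):
        """Becomes the keyword `Add Numbers`: sums the arguments and stores the result."""
        self._result = sum(float(n) for n in numbers)
        return self._result

    def result_should_be(self, expected):
        """Becomes the keyword `Result Should Be`: fails if the stored result differs."""
        if self._result != float(expected):
            raise AssertionError(f"{self._result} != {expected}")
```

In a test case file you would then import it with `Library    calculator_keywords.CalculatorKeywords` and call `Add Numbers` and `Result Should Be` like any other keyword.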
# Context -- skip if you want to get right to the point
I've been building a rather complex web application in Python (Bottle/gevent/MongoDB). It is an RSVP system which allows several independent front-end instances with registration forms, as well as back-end access with granular user permissions (those users are our clients). I now need to implement a flexible map-reduce engine to collect statistics on the registration data. A one-size-fits-all solution is impossible, since the data gathered varies from instance to instance. I also want to keep this open to our more technically inclined clients.
# End of context
So I need to execute arbitrary strings of code (some kind of ad-hoc plugin - language doesn't matter) entered through a web interface. I've already learned that it's virtually impossible to properly sandbox Python, so that's no option.
As of now I've looked into Lua and found Lupa, Lunatic Python and Lupy, but all three of them allow access to parts of the Python runtime.
There's also PyExecJS and its various runtimes (V8, Node, SpiderMonkey), but I have no idea whether it poses any security risks.
Questions:
1. Does anyone know of another (more fitting) option?
2. To those familiar with any of the Lua bindings: Is it possible to make them completely safe without too much hassle?
3. To those familiar with PyExecJS: How secure is it? Also, what kind of performance should I expect for, say, calling a short mapping function 1000 times and then iterating over a 1000-item list?
Here are a few ways you can run untrusted code:
a Docker container that runs the code (see the sketch after this list); I would suggest checking out codecube.io, which does exactly what you want, and you can learn more about the process here
using the libsandbox libraries, although at the present time the documentation is pretty bad
PyPy’s sandboxing
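As a rough sketch of the container approach (this is not codecube.io's actual setup; the image name and resource limits are just illustrative), you can shell out to `docker run` with the network disabled and resource caps applied:

```python
# run_untrusted.py -- minimal sketch: run an untrusted snippet in a throwaway container.
import subprocess

def run_untrusted(code: str, timeout: int = 5) -> str:
    cmd = [
        "docker", "run", "--rm",
        "--network", "none",       # no network access from inside the container
        "--memory", "64m",         # cap memory
        "--cpus", "0.5",           # cap CPU
        "python:3-alpine", "python", "-c", code,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    return result.stdout

if __name__ == "__main__":
    print(run_untrusted("print(sum(range(10)))"))  # -> 45
```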
Sneklang is a strict subset of Python that is safely evaluated in a scope you provide.
It is limited by scope size and by the number of node evaluation steps, and it protects against infinite loops, stack overflows, and excessive memory usage.
There is an online sandbox as well: https://sneklang.functup.com
I've made this project specifically because I had the same requirements.
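To give a feel for the general technique behind restricted evaluators like this (this is not Sneklang's API, just a bare-bones illustration), you can whitelist AST node types before evaluating an expression against a caller-supplied scope:

```python
# restricted_eval.py -- toy illustration only; a real sandbox also needs step and memory limits.
import ast

ALLOWED_NODES = (
    ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
    ast.Add, ast.Sub, ast.Mult, ast.Div, ast.USub, ast.Name, ast.Load,
)

def restricted_eval(expr: str, scope: dict):
    tree = ast.parse(expr, mode="eval")
    for node in ast.walk(tree):
        if not isinstance(node, ALLOWED_NODES):
            raise ValueError(f"disallowed syntax: {type(node).__name__}")
    # empty __builtins__ so only the names in `scope` are reachable
    return eval(compile(tree, "<restricted>", "eval"), {"__builtins__": {}}, scope)

print(restricted_eval("a * 2 + b", {"a": 3, "b": 4}))  # -> 10
```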
I've been reading about good practices regarding Django project management. As I understand it, it is good to:
Split the project into multiple small applications with specific responsibilities.
Always code with redistributable components in mind.
The second point has become quite important to me since I usually work on more than one project. So whenever I can, I modularize my components into installable packages which I can later reuse.
The question is... to what extent is this a good practice? How should I handle very simple components which are also highly reusable by other applications?
An example would be a simple reusable templatetag, which may be 40-60 lines of code plus tests. If it doesn't do any project-specific operations, I don't see it fitting in any of my project's apps, but I also find it too small to be an application on its own. Is it?
I've been doing Django projects for 4 years, and all I've reused across projects is a few ContextProcessors.
If it doesn't do any project-specific operations, I don't see it fitting in any of my project's apps, but I also find it too small to be an application on its own. Is it?
Just treat everything as project-specific until you actually need it in another project. So my answer is:
Write what you want and how you want. If you need something in another place, then you can separate it out of the project. Don't optimize prematurely.
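For what it's worth, a templatetag of the kind described is small enough to live inside an existing app; here is a hypothetical example (Django only requires that the `templatetags` package sits inside an installed app):

```python
# someapp/templatetags/text_extras.py -- hypothetical reusable filter with no
# project-specific logic; use it in a template as {{ user.get_full_name|initials }}.
from django import template

register = template.Library()

@register.filter
def initials(value):
    """Return the upper-case initials of a name: "ada lovelace" -> "AL"."""
    return "".join(word[0].upper() for word in str(value).split())
```

Only when a second project needs the same filter does it become worth extracting into its own installable package, which matches the "don't optimize prematurely" advice above.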
I am going to write a configuration tool for my Ubuntu-based system. Next I would like to write frontends (text, GUI and web). But it is the most complicated project I have wanted to write, and I am not sure about the general architecture I should use.
At the moment I have functions and classes for changing the system config, but these will probably grow and change. #Abki gave me advice on how to write an interface for the frontends. I am going to make base classes for this interface, but I don't know how to connect it to the backend and then to the frontends. Probably I should use design patterns like facade, wrapper or something else.
It looks like (without interface_to_backend layer):
I don't care about the UI or the functions that change the system config right now. But I don't know how to write the middle layer so that it would be easy to connect it to the rest and extend the functionality in the future.
I need general ideas, design patterns, and advice on how to implement this in Python.
I'm not sure this is entirely appropriate for SO, but I'm intrigued and so I'll bite. As a Rubyist I can't help much with the Python, but here is some opinion on patterns from my experience.
My initial suggestion is that you should review a few of the contenders out there. Specifically I'd be looking at cfengine, Chef and bcfg2. They each tell a different story, but if I were to summarise I'd say:
Chef has a lovely DSL syntax but is let down by a complicated architecture
bcfg2 is written in Python but seems to have an annoying tendency to use XML :(
cfengine has the strongest theoretical underpinnings in promise theory (which is very interesting, BTW) but is C-based.
Wikipedia also provides a pretty impressive list of configuration management tools that you will find useful.
In regard to designing your own tool I'd suggest there are three principles you want to pursue:
Simplicity: the simpler you make this, the better. Simplicity in terms of scope, configuration and use all matter.
You'll need a single way to store data; you need to be able to trace choices as they are made and not trample other people's changes (especially in a team environment).
Security: most configuration management tools need root privileges at some point, so you need to make sure that users can trust the code they're running.
You could use Fabric with Python as described in the article Ubuntu Server Setup with Python Fabric
The Wikipedia article at Comparison of open source configuration management software has several other tools that use Python to do this.
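The article linked above uses the older Fabric 1 API; with a current Fabric release the same server-setup idea looks roughly like this (the host name, config file and packages are placeholders, not taken from the article):

```python
# setup_server.py -- hedged sketch using Fabric 2's Connection API.
from fabric import Connection

def setup_server(host="ubuntu-box.example.com"):
    c = Connection(host)
    c.sudo("apt-get update")
    c.sudo("apt-get install -y nginx")
    c.put("nginx.conf", "/tmp/nginx.conf")      # upload a local config file
    c.sudo("mv /tmp/nginx.conf /etc/nginx/nginx.conf")
    c.sudo("systemctl restart nginx")

if __name__ == "__main__":
    setup_server()
```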
I like the approach taken by SALT.
If you write the GUI, text/CLI, and Web interfaces using Python, they can all use the same Python module. That way a change in one interface transparently affects the others. Plus all of those are in Python's area of strength.
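A minimal sketch of that shared-module idea (all names here are made up, not from the question's code): a small facade module owns the configuration logic, and each frontend only imports and calls it.

```python
# backend.py -- hypothetical facade that the CLI, GUI and web frontends all import.
class ConfigBackend:
    """Single entry point for reading and changing system configuration."""

    def __init__(self, store=None):
        self._store = store if store is not None else {}

    def get(self, key):
        return self._store.get(key)

    def set(self, key, value):
        # validation, privilege checks and persistence would live here,
        # hidden from every frontend behind this one interface
        self._store[key] = value
        return value

    def keys(self):
        return sorted(self._store)

# cli.py -- one of the frontends; it never touches the low-level config code directly.
if __name__ == "__main__":
    backend = ConfigBackend()
    backend.set("hostname", "myserver")
    print(backend.get("hostname"))
```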
There comes a point in a relatively large project where one needs to think about splitting the functionality into various functions, then various modules, and then various packages, sometimes across different source distributions (e.g., extracting a common utility, such as optparser, into a separate project).
The question: how does one decide which parts to put in the same module and which parts to put in separate modules? The same question applies to packages.
There's a classic paper by David Parnas called "On the criteria to be used in decomposing systems into modules". It's a classic (and has a certain age, so it can be a little outdated).
Maybe you can start from there; a PDF is available here:
http://www.cs.umd.edu/class/spring2003/cmsc838p/Design/criteria.pdf
Take out a pen and piece of paper. Try to draw how your software interacts on a high level. Draw the different layers of the software etc. Group items by functionality and purpose, maybe even by what sort of technology they use. If your software has multiple abstraction layers, I would say to group them by that. On a high level, the elements of a specific layer all share the same general purpose. Now that you have your software in layers, you can divide these layers into different projects based on specific functionality or specialization.
As for a certain stage that you reach in which you should do this? I'd say when you have multiple people working on the code base or if you want to keep your project as modular as possible. Hopefully your code is modular enough to do this with. If you are unable to break apart your software on a high level, then your software is probably spaghetti code and you should look at refactoring it.
Hopefully that will give you something to work with.
See How many Python classes should I put in one file?
Sketch your overall set of class definitions.
Partition these class definitions into "modules".
Implement and test the modules separately from each other.
Knit the modules together to create your final application.
Note. It's almost impossible to decompose a working application that evolved organically. So don't do that.
Decompose your design early and often. Build separate modules. Integrate to build an application.
IMHO this should probably be one of the things you do earlier in the development process. I have never worked on a large-scale project, but it would make sense to make a roadmap of what's going to be done and where. (Not trying to rib you for asking about it like you made a mistake :D )
Modules are generally grouped somehow, by purpose or functionality. You could try grouping by each implementation of an interface, or by other connections.
I sympathize with you. You are suffering from self-doubt. Don't worry. If you can speak any language, including your mother tongue, you are qualified to do modularization on your own. For evidence, you may read "The Language Instinct," or "The Math Instinct."
Look around at other projects and frameworks, but not too much. You can learn a lot from them, but you can pick up many bad things from them too.
Some projects/frameworks get a lot of hype, yet some of their groupings of functionality, and even the names given to modules, are misleading. They don't "reveal the intention" of the programmers. They fail the "high cohesiveness" test.
Books are no better. Please apply the 80/20 rule to your book selection. Even a good, very complete, well-researched book like Capers Jones' 2010 "Software Engineering Best Practices" is clueless. It says a 10-man Agile/XP team would take 12 years to do Windows Vista, or 25 years to do an ERP package! It says there was no method until 2009 for segmentation, its term for modularization. I don't think it will help you.
My point is: you must pick your model/reference/source of examples very carefully. Don't overestimate famous names or underestimate yourself.
Here is my help, proven in my experience.
It is a lot like deciding what attributes go to which DB table, what properties/methods go to which class/object etc? On a deeper level, it is a lot like arranging furniture at home, or books in a shelf. You have done such things already. Software is the same, no big deal!
Worry about "cohesion" first. For example, a group of books (Leo Tolstoy, James Joyce, D. H. Lawrence) is cohesive; (HTML, CSS, John Keats, jQuery, TinyMCE) is not. And there are many ways to arrange things; even taxonomists are still in serious feuds over this.
Then worry about "coupling." Be "shy". "Don't talk to strangers." Don't be over-friendly. Try to make your package/DB table/class/object/module/bookshelf as self-contained and as independent as possible. Joel has talked about his admiration for the Excel team, which abhorred all external dependencies and even built their own compiler.
Actually it varies for each project you create, but here is an example:
The core package contains modules that your project can't live without; it may contain the main functionality of your application.
The ui package contains modules that deal with the user interface, if you split the UI from your console.
This is just an example (a sketch of such a layout follows), and it is really you who decides what goes where.
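As a concrete, entirely hypothetical layout of that split:

```
myapp/
    core/            # functionality the project can't live without
        __init__.py
        models.py
        services.py
    ui/              # user-facing code, kept apart from core
        __init__.py
        cli.py
        gui.py
    utils/           # small helpers shared by core and ui
        __init__.py
```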
We are maintaining a web application that is built on Classic ASP using VBScript as the primary language. We are in agreement that our backend (framework, if you will) is outdated and doesn't provide us with the proper tools to move forward in a quick manner. We have pretty much embraced the current web MVC pattern that is all over the place, and we cannot do it, in a reasonable manner, with the current technology. The big missing features are proper dispatching and templating with inheritance, amongst others.
Currently there are two paths being discussed:
Port the existing application to Classic ASP using JScript, which will hopefully allow us to go from there to .NET MSJScript without too much trouble, and eventually end up on the .NET platform (preferably the MVC stuff will be done by then; ASP.NET isn't much better than where we are now, in our opinion). This has been argued as the safer path with less risk than the next option, albeit one that might take slightly longer.
Completely rewrite the application using some other technology; right now the leader of the pack is Python WSGI with a custom framework, ORM, and a good templating solution. There is wiggle room here for even Django and other pre-built solutions. This method would hopefully be the quickest, as we would probably run a beta beside the actual product, but it does have the potential to be a big waste of time if we can't or don't get it right.
This does not mean that our logic is gone, as what we have built over the years is fairly stable, just difficult to deal with, as noted. It is built on SQL Server 2005 with heavy use of stored procedures and published on IIS 6, just for a little more background.
Now, the question. Has anyone taken either of the two paths above? If so, was it successful, how could it have been better, etc. We aren't looking to deviate much from doing one of those two things, but some suggestions or other solutions would potentially be helpful.
Don't throw away your code!
It's the single worst mistake you can make (on a large codebase). See Things You Should Never Do, Part 1.
You've invested a lot of effort into that old code and worked out many bugs. Throwing it away is a classic developer mistake (and one I've done many times). It makes you feel "better", like a spring cleaning. But you don't need to buy a new apartment and all new furniture to outfit your house. You can work on one room at a time... and maybe some things just need a new paintjob. Hence, this is where refactoring comes in.
For new functionality in your app, write it in C# and call it from your classic ASP. You'll be forced to be modular when you rewrite this new code. When you have time, refactor parts of your old code into C# as well, and work out the bugs as you go. Eventually, you'll have replaced your app with all new code.
You could also write your own compiler. We wrote one for our classic ASP app a long time ago to allow us to output PHP. It's called Wasabi and I think it's the reason Jeff Atwood thought Joel Spolsky went off his rocker. Actually, maybe we should just ship it, and then you could use that.
It allowed us to switch our entire codebase to .NET for the next release while only rewriting a very small portion of our source. It also caused a bunch of people to call us crazy, but writing a compiler is not that complicated, and it gave us a lot of flexibility.
Also, if this is an internal only app, just leave it. Don't rewrite it - you are the only customer and if the requirement is you need to run it as classic asp, you can meet that requirement.
Use this as an opportunity to remove unused features! Definitely go with the new language. Call it 2.0. It will be a lot less work to rebuild the 80% of it that you really need.
Start by wiping your brain clean of the whole application. Sit down with a list of its overall goals, then decide which features are needed based on which ones are used. Then redesign it with those features in mind, and build.
(I love to delete code.)
It works out better than you'd believe.
Recently I did a large reverse-engineering job on a hideous old collection of C code. Function by function I reallocated the features that were still relevant into classes, wrote unit tests for the classes, and built up what looked like a replacement application. It had some of the original "logic flow" through the classes, and some classes were poorly designed [Mostly this was because of a subset of the global variables that was too hard to tease apart.]
It passed unit tests at the class level and at the overall application level. The legacy source was mostly used as a kind of "specification in C" to ferret out the really obscure business rules.
Last year, I wrote a project plan for replacing 30-year old COBOL. The customer was leaning toward Java. I prototyped the revised data model in Python using Django as part of the planning effort. I could demo the core transactions before I was done planning.
Note: It was quicker to build the model and admin interface in Django than to plan the project as a whole.
Because of the "we need to use Java" mentality, the resulting project will be larger and more expensive than finishing the Django demo. With no real value to balance that cost.
Also, I did the same basic "prototype in Django" for a VB desktop application that needed to become a web application. I built the model in Django, loaded legacy data, and was up and running in a few weeks. I used that working prototype to specify the rest of the conversion effort.
Note: I had a working Django implementation (model and admin pages only) that I used to plan the rest of the effort.
The best part about doing this kind of prototyping in Django is that you can mess around with the model, unit tests and admin pages until you get it right. Once the model's right, you can spend the rest of your time fiddling around with the user interface until everyone's happy.
Whatever you do, see if you can manage to follow a plan where you do not have to port the application all in one big bang. It is tempting to throw it all away and start from scratch, but if you can manage to do it gradually, the mistakes you make will not cost so much and will not cause so much panic.
Half a year ago I took over a large web application (fortunately already in Python) which had some major architectural deficiencies (templates and code mixed, code duplication, you name it...).
My plan is to eventually have the system respond to WSGI, but I am not there yet. I have found the best way to do it is in small steps. Over the last six months, code reuse has gone up and progress has accelerated.
General principles which have worked for me:
Throw away code which is not used or commented out
Throw away all comments which are not useful
Define a layer hierarchy (models, business logic, view/controller logic, display logic, etc.) for your application. This does not have to be a very clear-cut architecture, but it should help you think about the various parts of your application and better categorize your code.
If something grossly violates this hierarchy, change the offending code: move the code around, re-implement it somewhere else, and so on. At the same time, adjust the rest of your application to use the new code instead of the old one. Throw the old code away once it is no longer used.
Keep your APIs simple!
Progress can be painstakingly slow, but should be worth it.
I would not recommend JScript as that is definitely the road less traveled.
ASP.NET MVC is rapidly maturing, and I think that you could begin a migration to it, simultaneously ramping up on the ASP.NET MVC framework as its finalization comes through.
Another option would be to use something like ASP.NET w/Subsonic or NHibernate.
Don't try to go 2.0 (more features than currently exist or are scheduled); instead, build your new platform with the intent of resolving the current issues with the code base (maintainability/speed/WTFs) and go from there.
A good place to begin if you're considering the move to Python is to rewrite your administrator interface in Django. This will help you get some of the kinks worked out in terms of getting Python up and running with IIS (or to migrate it to Apache). Speaking of which, I recommend isapi-wsgi. It's by far the easiest way to get up and running with IIS.
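For context, the piece you deploy behind isapi-wsgi (or any other WSGI server) is just a standard WSGI callable; wiring it into IIS then follows the isapi-wsgi documentation. A trivial example of such a callable:

```python
# app.py -- minimal WSGI application; the isapi-wsgi glue is left to that project's docs.
def application(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from Python behind IIS"]
```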
I agree with Michael Pryor and Joel that it's almost always a better idea to continue evolving your existing code base rather than re-writing from scratch. There are typically opportunities to just re-write or re-factor certain components for performance or flexibility.