I've been taught that while both camelCase and under_scores are acceptable variable names, I need to be consistent in my approach.
How consistent is consistent, though? Is it acceptable and Pythonic to use both under certain circumstances?
E.g. could I use camelCase for variables in my main code and under_scores for those in my functions? Or perhaps one style for variables whose values are derived from my own functions and another style for all other variables?
Both of these could be done in a way that makes it easy for readers to understand and that follows simple, systematic rules. Does that alone make it okay to use both, or am I expected to stick with a single naming convention?
Example of using under_scores for variables whose value is derived from a user-defined function and camelCase for other variables:
# My function.
def reverse(variableCalledA):
    # Reverse the digits of the number, e.g. 532 -> 235.
    variableNamedB = int(str(variableCalledA)[::-1])
    return variableNamedB

# Main code.
variableCalledA = 532
reversed_variable_called_b = reverse(variableCalledA)
answer = variableCalledA - reversed_variable_called_b
print(answer)  # 532 - 235 = 297
P.S. If this is appropriate, is it something I should mention in a comment so other users know to look out for it?
P.P.S. Please alert me to any ways I could update/improve this question and future questions.
Naming conventions are there to increase code clarity and make it easier for many developers to work on the same code base.
As such, the answer to your question really depends on the situation. If you are working in a professional setting you should adhere to whatever naming convention the company uses. If there is no existing naming convention, you should push for one. Generally, any new Python project should adhere to the PEP8 style guide, unless there's a good reason not to (for example: years of legacy code that uses a different style guide).
No matter if you are working on a new project or on legacy code, my personal opinion is that mixing camelCase and under_scores is just not a good idea. The example you provided sounds reasonable, but it is not a convention that other developers would know about unless it was explained in comments.
To be consistent, you can use under_scores instead of camelCase, as the former is generally considered more readable than the latter. You can see one of the posts on naming conventions here.
Is it acceptable and Pythonic to use both under certain circumstances?
Yes, it is acceptable and Pythonic to use both, but only under certain circumstances. You can check the PEP 8 guide for details.
For function names
Function names should be lowercase, with words separated by underscores as necessary to improve readability.
Certain circumstances include backward compatibility
mixedCase is allowed only in contexts where that's already the prevailing style (e.g. threading.py), to retain backwards compatibility.
For Class names
Class names should normally use the CapWords convention.
The naming convention for functions may be used instead in cases where the interface is documented and used primarily as a callable.
In conclusion, if you are building on top of some library, it is better to follow that library's style. For Pythonic conventions, PEP 8 is there to guide developers.
Is the code for yourself, or for a project you are working on with others? If it's a team project, it is best to follow the style standards of the team, so that your team can follow your code.
When you write for yourself, use the system that makes sense to you, so that when you read the code a year from now you don't have to struggle.
PEP8 makes sense. CapWords for classes, UPPER_CASE for constants and names_with_underscores for everything else.
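To make that concrete, here is a minimal, made-up sketch of what those conventions look like side by side (the class and all the names are invented for illustration):
# Constants: UPPER_CASE
MAX_RETRIES = 3

# Classes: CapWords
class ReportGenerator:
    def __init__(self, source_path):
        # Attributes and variables: names_with_underscores
        self.source_path = source_path

    # Functions and methods: names_with_underscores
    def generate_summary(self):
        with open(self.source_path) as handle:
            line_count = sum(1 for _ in handle)
        return f"{self.source_path}: {line_count} lines"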
I put excessive comments in my code. Even with variables fully spelled out, comments still help.
For people like me with CRS (Can't Remember Shit), long variable names help me remember what a variable holds when I go back and look at the code.
As the title suggests, I'm interested in the best (perhaps the most Pythonic way) to structure a program which uses many global variables.
First of all, by "many", I mean some 30 variables (which may be dictionaries, floats or strings) which every module of my program needs to access. Now, there seem to be two ways to do this:
define the "global" variables in seperate modules
use an object oriented approach
The advantage of using an object oriented approach is that I can have many instances of some main class initialized, and perhaps compare different values (results of some analysis, for example) later on.
I already have a program written, but basically it breaks down to one class with some 30 or so attributes. Although it works fine, I'm aware this is a pretty messy way to do this.
So, basically, if I use an OOP approach, I would perhaps need to break my main class down into a few subclasses, each of which stores specific, logically related variables.
Any suggestions are welcome.
P.S. Just to be concrete about what I'm trying to do: I have a FEM-solver which needs to store structure info, element and node data, analysis result data, etc. So, I'm dealing with a lot of data types most of which are connected in some way.
Unfortunately, as was hinted at in the comments, there is no "Pythonic" way to do this. Having a large number of global constants is just fine - many programs and libraries do this. But in the comments, you've specified that all of your globals are being modified.
You need to take your program's architecture back to the drawing board. Rethink the relationships between your program's entities (functions, classes, modules, etc). There has to be a better way to organize it.
And by the way, it also sounds like you're getting close to the God Object antipattern. Use some of the advice in this SO question to refactor your massive class that has its fingers all over your program.
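For illustration only (the class and attribute names below are invented, not taken from your code), one way to start that refactor is to group logically related state into small data classes and compose them, instead of keeping 30 attributes on one class:
# Hypothetical sketch of splitting a 30-attribute "god" class into small groups.
from dataclasses import dataclass, field

@dataclass
class MeshData:
    nodes: dict = field(default_factory=dict)      # node id -> coordinates
    elements: dict = field(default_factory=dict)   # element id -> node ids

@dataclass
class AnalysisResults:
    displacements: dict = field(default_factory=dict)
    stresses: dict = field(default_factory=dict)

@dataclass
class Solver:
    mesh: MeshData = field(default_factory=MeshData)
    results: AnalysisResults = field(default_factory=AnalysisResults)
    tolerance: float = 1e-6

# Several independent analyses can then coexist and be compared later:
run_a = Solver()
run_b = Solver(tolerance=1e-8)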
I am sure this question has already been answered a couple of times, and I will soon close this topic, but I couldn't find it.
Is there a recommended way of arranging member functions and variables within a class?
I am pretty sure not everything needs rules, but I wanted to know if there are thoughts on this topic that go beyond sorting by access level.
Example class in pseudocode:
Class
    amethod()
    bmethod()
    cvariable
    avariable
    bvariable
There are some recommendations, but they are rather vague. The only common point is, quoting the Google Java Style Guide, § 3.4.2,
that each class order its members in some logical order, which its
maintainer could explain if asked. For example, new methods are not just
habitually added to the end of the class, as that would yield "chronological
by date added" ordering, which is not a logical ordering.
The Google C++ Style Guide recommends to order by visibility (which is obvious for C++), then by type:
Typedefs and Enums
Constants (static const data members)
Constructors
Destructor
Methods, including static methods
Data Members (except static const data members)
Oracle’s Code Conventions for Java, § 3.1.3, recommend to order by type, then by visibility (for variables) or functionality (for methods):
Class (static) variables (ordered by visibility)
Instance variables (ordered by visibility)
Constructors
Methods (grouped by functionality rather than by scope or accessibility)
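Translating the spirit of those guidelines into Python (the class below is entirely made up), one such "logical order" might look like this: constants first, then the constructor, then public methods grouped by functionality, then private helpers.
class Account:
    MAX_OVERDRAFT = 100              # class-level constants

    def __init__(self, owner):       # constructor
        self.owner = owner
        self.balance = 0

    def deposit(self, amount):       # public methods, grouped by functionality
        self.balance += amount

    def withdraw(self, amount):
        if self._can_withdraw(amount):
            self.balance -= amount

    def _can_withdraw(self, amount): # private helpers last
        return self.balance - amount >= -self.MAX_OVERDRAFT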
I'm not aware of any generally or widely accepted convention, other than "use something that makes sense" (e.g. grouping variables together instead of writing a wild mix of variables and functions).
However, it's not only a style question:
In C++ (and C), the declaration order maps directly to the memory layout, so different orders can lead to different object sizes because of alignment and padding. Additionally, if you serialize something in a binary format, the position of each value in the data obviously matters (although serializing that way is not exactly good, precisely because it depends on the memory layout).
And, as #huu noted in the comments, the declaration order determines the initialization order; this matters when a member variable is initialized with the value of another member variable of the same object. A mismatch between declaration order and initializer order will typically trigger a compiler warning.
Sometimes, when looking at Python code examples, I'll come across one where the whole program is contained within its own class, and almost every function of the program is actually a method of that class apart from a 'main' function.
Because it's a fairly new concept to me, I can't easily find an example even though I've seen it before, so I hope someone understands what I am referring to.
I know how classes can be used outside of the rest of a program's functions, but what is the advantage of using them in this way compared with having functions on their own?
Also, can/should a separate module with no function calls be structured using a class in this way?
A module is preferred when it is a collection of pure functions, i.e. there is no shared state such as module-level variables. A big class is often used when there are multiple functions operating on shared state.
In Python scripts, you will often see the pattern where the main function is just the instantiation of a class and a call to one of its methods (e.g. youtube-dl). This is done for various reasons:
You can instantiate multiple objects without mixing state, which also makes it easier to write thread-safe code.
Classes can be inherited from or composed (e.g. see BaseHTTPRequestHandler).
Classes have more features than modules, such as constructors, iteration support, etc.
In general, classes offer more power at the cost of slightly added complexity. Some people prefer plain functions for simplicity, especially for one-off scripts. The trade-off is up to the developer, and both are valid options in Python.
A program often has to maintain state and share resources between functions (command-line options, a DB connection, etc.). When that's the case, a class is usually a better solution (with respect to readability, testability and overall maintainability) than having to pass the whole context to every function or (worse) using global state.
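As a rough, hypothetical sketch of that pattern (the option names and resources below are invented), the shared state lives on the class and main() only wires things together:
# Shared state on the class, tiny main() that instantiates it and runs.
import argparse
import sqlite3

class App:
    def __init__(self, options):
        self.options = options                       # shared command-line options
        self.db = sqlite3.connect(options.database)  # shared resource

    def run(self):
        if self.options.verbose:
            print(f"Using database {self.options.database}")
        # ... actual work using self.db and self.options ...

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--database", default=":memory:")
    parser.add_argument("--verbose", action="store_true")
    App(parser.parse_args()).run()

if __name__ == "__main__":
    main()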
As Is it necessary or useful to inherit from python's object in Python 3.x? and Python class inherits object make clear, it is no longer necessary to inherit from object when defining a class in Python 3.
As a corollary to this which isn't directly addressed by either of the linked questions: should I prefer either style over the other when writing new Python 3 code? Is it better to drop the object base class in the interest of cleaner class definitions, or leave it in in order to (potentially) make future ports to Python 2 easier?
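For concreteness, here is a small sketch of the two styles being compared; in Python 3 both produce ordinary new-style classes with the same method resolution order:
# Both forms define a new-style class in Python 3; the explicit base only
# matters if the code also has to run under Python 2.
class Plain:
    pass

class Explicit(object):
    pass

print(Plain.__mro__)     # (<class '__main__.Plain'>, <class 'object'>)
print(Explicit.__mro__)  # (<class '__main__.Explicit'>, <class 'object'>)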
Programming is converting abstract ideas into a formal form that is later used to produce an executable. There is no need to make the process more complex than necessary. The machines were created to help us; it does not work the other way around.
In some sense, Python 3 is a new language. Then the question becomes: "Should I force myself to use the new language in a way that makes it look like what the old tools and programmers were used to?"
It's time again to read the Zen of Python one by one:
>>> import this
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
pylint appears to still warn about not inheriting from object when defining a class in 3.x, so I also add it, even for 3.x-only code. I write a lot of Python that runs on both 2.x and 3.x; there it's a given that you want it.
But you make reasonable arguments for both sides, really. What do you think?
I have a solid understanding of OOP and its idioms in Java.
Now I am coding in Python, and I am in a situation where multiple inheritance may be useful. However (and this may be due to years of Java code), I am reluctant to do it, and I am considering using composition instead of inheritance to avoid potential conflicts between identically named methods.
The question is: am I being too strict or too Java-focused about this, or is using multiple inheritance in Python not only possible but also encouraged?
Thanks for your time :)
The question of "inheritance vs. composition" comes down to an attempt to solve the problem of reusable code. You don't want duplicated code all over your code base, since that's neither clean nor efficient. Inheritance solves this problem by creating a mechanism for you to have implied features in base classes. Composition solves it by giving you modules and the ability to simply call functions in other classes.
If both solutions solve the problem of reuse, then which one is appropriate in which situations? The answer is incredibly subjective, but I'll give you my three guidelines for when to do which:
Avoid multiple inheritance at all costs, as it's too complex to be useful reliably. If you're stuck with it, then be prepared to know the class hierarchy and spend time finding where everything is coming from.
Use composition to package up code into modules that are used in many different, unrelated places and situations.
Use inheritance only when there are clearly related reusable pieces of code that fit under a single common concept, or if you have to because of something you're using.
However, do not be a slave to these rules. The thing to remember about object oriented programming is that it is entirely a social convention programmers have created to package and share code. Because it's a social convention, but one that's codified in Python, you may be forced to avoid these rules because of the people you work with. In that case, find out how they use things and then just adapt to the situation.
More details can be found on: http://learnpythonthehardway.org/book/ex44.html
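To make the "equal method names" concern from the question concrete, here is an invented example showing how Python's MRO resolves such a conflict (left-to-right across the base classes):
# When two bases define the same method, the MRO decides which one wins.
class JsonSerializer:
    def serialize(self):
        return "json"

class XmlSerializer:
    def serialize(self):
        return "xml"

class Report(JsonSerializer, XmlSerializer):
    pass

print(Report().serialize())   # "json" -- JsonSerializer is listed first
print(Report.__mro__)         # Report, JsonSerializer, XmlSerializer, object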
I would still prefer composition to inheritance, whether multiple or single. Really getting into duck typing is a bit like having loads of implicit interfaces everywhere, so you don't even need inheritance (or abstract classes) very much at all in Python. But that's "prefer composition", not "never use inheritance". If inheritance (even multiple) is a good fit and composition isn't, then use inheritance.
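As a hypothetical sketch of that composition-plus-duck-typing style (the Report class is invented), the object simply delegates to whatever serializer it is handed, with no inheritance involved:
# Report works with anything that has a dumps() method -- an implicit interface.
import json

class Report:
    def __init__(self, serializer):
        self.serializer = serializer      # composed, not inherited

    def export(self, data):
        return self.serializer.dumps(data)

print(Report(json).export({"total": 42}))   # the json module itself quacks like a serializer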