By Pawal Jain
Every now and again, when I learn about a new feature in Python, or notice that others are unaware of one, I make a note of it.
Over the last few weeks there have been a few interesting features that I learned about myself or saw others discovering on Stack Overflow.
Here are ten neat Python development tricks, some of which I'm sure you haven't seen before, with a quick look at each feature and a rundown of how to use it.
Note: Code is shown as images in this story. A GitHub Readme link is provided at the end so you can experiment further 🤗
01. How to view source code at runtime?
To look at the source code of a function, we usually rely on an IDE.
For example, in PyCharm you can use Ctrl + click to jump to the source code of a function.
But what if there is no IDE?
- When we want to use a function, how do we know which parameters it needs to receive?
- When we run into problems using a function, how do we troubleshoot by reading its source code?
In these situations, we can use the inspect module instead of an IDE to accomplish these things — see the sketch after the list below.

inspect.getsource: Return the text of the source code for an object.

The inspect module provides several useful functions to help get information about live objects such as modules, classes, methods, functions, tracebacks, frame objects, and code objects.
Four main kinds of services are provided by this module:
- Type checking,
- Getting source code,
- Inspecting classes and functions
- Examining the interpreter stack.
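The code in the original image isn't reproduced here, but a minimal sketch of using inspect might look like this (the add function is just a placeholder of my own):

```python
import inspect

def add(x, y):
    """Return the sum of two numbers."""
    return x + y

# getsource returns the text of the object's source code
print(inspect.getsource(add))

# signature tells us which parameters the function receives
print(inspect.signature(add))
```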
02. The fastest way to view the package path
When you use import to load a package or module, Python searches a set of directories in a priority order; normally we inspect that search path with sys.path.

Is there a faster way?
Here I want to introduce a more convenient method than the above: the whole thing can be done with a one-line command.
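The one-liner in the original image is presumably python3 -m site, which prints the module search path along with the user-environment directories:

```
# Prints the sys.path entries plus USER_BASE and USER_SITE
python3 -m site
```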

From the output you can see that this listing is more complete than sys.path: it also contains the directories of the user environment.
03. Write nested for loops as a single line
We often write nested for loop code like the following.

Here there are just three for loops; in real code there may be more levels.
Such code is hard to read and nobody enjoys writing it, but there is a better way.
A commonly used approach is the itertools library, which makes the code more elegant and readable, as in the sketch below.
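The original snippet is an image; here is a small sketch of the idea with itertools.product (the three lists are my own examples):

```python
from itertools import product

list1 = [1, 2, 3]
list2 = ["a", "b"]
list3 = [True, False]

# One loop over the Cartesian product replaces three nested for loops
for x, y, z in product(list1, list2, list3):
    print(x, y, z)
```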

04. How to use print to output a log
Many people like to use print to debug code and to record how a program runs.
However, print only writes to the terminal; the output is not persisted to a log file, which makes troubleshooting harder later.
If you are keen to use print to debug code (although this is not best practice) and to record the program's progress, the print usage described below may be useful to you.
In Python 3, print is a function, and because it accepts more parameters it is also more powerful. The code is shown below:
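The original code is an image; a minimal sketch, assuming a log file called job.log, might look like this:

```python
# print() accepts a `file` argument, so output can be persisted to a file
# instead of only being written to the terminal.
with open("job.log", mode="a", encoding="utf-8") as log_file:
    print("something is happening ...", file=log_file, flush=True)
    print("something is happening ...")  # this one still goes to the terminal
```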

05. How to quickly calculate the function running time
To calculate the running time of a function, you might do it like this:
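The original snippet is an image; a typical hand-rolled version (the run function is my own placeholder) looks roughly like this:

```python
import time

def run():
    time.sleep(2)
    print(2)

start = time.time()
run()
end = time.time()
print("running time:", end - start)
```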

You can see that it takes several lines of code just to time the function.
Is there a more convenient way to measure the running time? Yes: use the built-in module called timeit.
It takes just one line of code:
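The original one-liner is an image; judging from the results shown below, it probably ran a two-second function five times, something like:

```python
import time
import timeit

def run():
    time.sleep(2)
    print(2)

# One line: call run() five times and print the total elapsed time
print(timeit.timeit(run, number=5))
```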

The results are as follows
2
2
2
2
2
10.020059824
06. Use the built-in caching mechanism to improve efficiency
Caching is a way of storing data so that later requests for the same data can be served faster.
Generating the data may involve computation, normalization, remote requests, and so on. If the same piece of data needs to be used multiple times, regenerating it every time is a waste of time.
So, if data obtained by computation or remote requests is cached, subsequent requests for that data will be much faster.
To meet this need, Python 3.2+ provides a mechanism that is very easy to use and does not require you to write any such caching logic yourself.
This mechanism is the lru_cache decorator in the functools module.

Parameter interpretation:
- maxsize: the maximum number of call results this function can cache. If None, there is no limit; performance is best when it is set to a power of 2.
- typed: if True, calls with arguments of different types are cached separately.
For example:
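The example in the original image is not shown; reconstructing it from the output below, it was probably something like:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def add(x, y):
    print("calculating: %s + %s" % (x, y))
    return x + y

print(add(1, 2))
print(add(1, 2))   # same arguments: the cached result is returned directly
print(add(2, 3))
```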

The output is as follows; you can see that the second call does not execute the function body but returns the cached result directly:
calculating: 1 + 2
3
3
calculating: 2 + 3
5
The following is the classic Fibonacci sequence; when you specify a larger n, there are a lot of repeated calculations:
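The original code is an image; here is a sketch of the naive version plus a timing harness (the exact n and repetition count behind the 31-second figure mentioned below are not shown, so these values are assumptions):

```python
import timeit

def fib(n):
    # Naive recursion: the same sub-problems are recomputed over and over
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

# Timing harness; adjust n and number to reproduce the measurement
print(timeit.timeit(lambda: fib(30), number=10))
```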

The timeit module introduced in trick 05 can now be used to test how much efficiency improves.
Without lru_cache, the running time for n = 30 is about 31 seconds.

After adding lru_cache, the function runs so fast that I increased n from 30 to 500, and even then the running time is only about 0.0004 seconds. The speed-up is very significant.
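A sketch of the cached version; the timing harness is again an assumption:

```python
import timeit
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

# Each fib(k) is now computed only once, so even fib(500) returns almost instantly
print(timeit.timeit(lambda: fib(500), number=1))
```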

07. Tips for executing code before the program exits
Using the built-in module atexit, you can easily register exit functions.
No matter where in the code the program exits (including after an unhandled exception), the functions you have registered will be executed. An example is shown below.
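The example image is missing; a minimal sketch, where the clean function body is my own placeholder, looks like this:

```python
import atexit

@atexit.register
def clean():
    print("cleaning up before exit ...")

print("main program running ...")
# clean() runs when the interpreter exits, even after an unhandled exception
```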

The results are as follows:

If the clean() function takes parameters, you can skip the decorator and call atexit.register(clean_1, argument1, argument2, argument3='xxx') instead.
Maybe you have other ways of handling this kind of requirement, but they are unlikely to be as elegant and convenient as atexit, and atexit is easy to extend.
But using atexit still has some limitations, such as:
- If the program is killed by a system signal that you have not processed, the registered function cannot be executed normally.
- If a serious Python internal error occurs, the function you registered cannot be executed normally.
- If you call os._exit() manually, the function you registered cannot be executed normally.
08. How to turn off the exception association context?
When you are handling one exception and, due to improper handling or some other problem, another exception is raised, the new exception that propagates out also carries the original exception's information.
Read the example below and it will become clear 🙂
Just like this:
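The original snippet is an image; a minimal sketch of the situation might be:

```python
try:
    print(1 / 0)
except Exception:
    # Raising inside an except block implicitly chains the original exception
    raise RuntimeError("something went wrong while handling the error")
```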

You can see two exception messages from the output

If an exception is raised inside an exception handler or a finally block, by default the exception machinery implicitly attaches the previous exception to the new one as its __context__ attribute.
This is the automatic exception context association that Python enables by default.
If you want to control this context yourself, you can add the from keyword (with the limitation that the expression after from must be another exception class or instance) to indicate which exception directly caused your new exception.
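A sketch of explicit chaining with from (the error messages are my own):

```python
try:
    print(1 / 0)
except Exception as exc:
    # Explicitly mark the original exception as the direct cause of the new one
    raise RuntimeError("something went wrong") from exc
```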

The output is as follows

Of course, you can also use the with_traceback() method to attach the original traceback (the __traceback__ attribute) to a new exception, which also displays the exception information more clearly in the traceback.
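A sketch of the with_traceback() variant:

```python
try:
    print(1 / 0)
except Exception as exc:
    # Re-raise a new exception that carries the original traceback with it
    raise RuntimeError("something went wrong").with_traceback(exc.__traceback__)
```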

Finally, what if we want to turn off this automatic association of exception contexts completely?
You can use raise ... from None; as the following example shows, the original exception no longer appears in the output.
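A sketch of suppressing the context with from None:

```python
try:
    print(1 / 0)
except Exception:
    # `from None` suppresses the implicit context; only RuntimeError is reported
    raise RuntimeError("something went wrong") from None
```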

09. Implement defer-like delayed calls
There is a mechanism for delaying calls in Golang; the keyword is defer, as shown below.

The call to myfunc is only completed just before the enclosing function returns, even though defer myfunc() is written on the first line of the function; this is the delayed call. The output is as follows:
A
B
So is there such a mechanism in Python?
Of course there is, but it is not as simple as in Golang.
We can use a Python context manager to achieve the same effect:
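The original code is an image; one way to sketch it, using contextlib.contextmanager (the defer helper name is my own), is:

```python
from contextlib import contextmanager

def myfunc():
    print("B")

@contextmanager
def defer(func):
    try:
        yield
    finally:
        func()   # runs when the with-block is left, like Go's defer

def main():
    with defer(myfunc):
        print("A")

main()
```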

The output is as follows
A
B
10. How to stream read large files
Using with ... open ... we can read data from a file; this is an operation every Python developer is familiar with.
But if you use it improperly, it can cause big problems.
For example, when you call the read() method, Python loads the entire contents of the file into memory at once. If the file is 10 GB or larger, the memory your machine consumes is enormous.
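The snippet in the original image probably looked roughly like this (the file name is a placeholder):

```python
with open("big_file.txt") as fp:
    content = fp.read()   # loads the entire file into memory at once
```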

For this problem, you might think of using readline to build a generator that returns the file line by line.
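A sketch of the line-by-line generator (again, the file name is a placeholder):

```python
def read_by_line(file_path):
    with open(file_path) as fp:
        while True:
            line = fp.readline()
            if not line:        # empty string means end of file
                return
            yield line

for line in read_by_line("big_file.txt"):
    print(line, end="")
```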

But if the whole content of the file sits on a single line of 10 GB, you will still read it all at once.
The most elegant solution is to use the read method and specify that only a fixed amount of content is read at a time; for example, in the following code only 8 KB is returned per call.
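A sketch of reading in fixed-size blocks (8 KB here, as described above):

```python
def read_in_chunks(file_path, block_size=1024 * 8):
    with open(file_path) as fp:
        while True:
            chunk = fp.read(block_size)   # at most 8 KB per call
            if not chunk:
                return
            yield chunk

for chunk in read_in_chunks("big_file.txt"):
    print(chunk, end="")
```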

The code above works fine, but it still looks a bit bloated.
With the partial function and the iter function, you can optimize it like this:
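A sketch of the tidied-up version with partial and iter:

```python
from functools import partial

def read_in_chunks(file_path, block_size=1024 * 8):
    with open(file_path) as fp:
        # iter(callable, sentinel) keeps calling fp.read(block_size)
        # until it returns the empty string "" at end of file
        for chunk in iter(partial(fp.read, block_size), ""):
            yield chunk
```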

To Sum up
- We can use inspect to view source code at runtime
- itertools.product can be used instead of nested loops
- Use the timeit module to calculate the running time of a function or piece of code
- Use functools.lru_cache to speed up your code; it is designed to speed up data acquisition
- Use the atexit module to register functions that will be executed when the program exits, even if it crashes
- Read a large file by breaking it into fixed-size blocks
That’s it! Did you learn anything new? Or do you have another trick that you want to share? Please let me know in the comments!
Here is the link to the GitHub Readme, where you can view and experiment with each trick.
Yeah! Now enjoy, just like the minions 😋, we made it to the end. I hope you learned something new and have a basic idea of these efficient development tricks.

Thanks for reading. We hope you've fueled up your Python development skills and knowledge. To see more tutorials like this, support us by dropping comments and likes, and by sharing with friends.
Stay connected and see you around 👋🏻