Boost Your Productivity with These Python Tricks
How many times have you stared at your Python code, knowing there's a faster way, but not knowing what it is? You're not alone. Even experienced developers waste hours repeating the same slow patterns because they never learned the small tricks that make Python fly. The truth is, you don't need to rewrite your whole app to get a real productivity boost. A few smart changes can cut your runtime, shrink your code, and make your scripts run smoother on your laptop or server.
Use List Comprehensions Instead of Loops
Every Python developer learns for loops early. But after a while, you start noticing how much boilerplate they add. Take this common pattern:
squares = []
for i in range(10):
    squares.append(i ** 2)
That’s three lines of code to do one simple thing. Here’s the Python way:
squares = [i ** 2 for i in range(10)]
One line. Same result. And it’s faster. List comprehensions are optimized at the C level in CPython, so they run up to 30% quicker than equivalent loops. They’re also easier to read once you get used to them. Try replacing any for loop that builds a list with a comprehension. You’ll save time and reduce bugs.
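If you'd rather verify the speedup on your own machine than take the 30% figure on faith, the standard timeit module makes the comparison easy. A minimal sketch; the list size and repeat count below are arbitrary:

```python
import timeit

def squares_loop():
    # Build the list with an explicit loop and append calls
    result = []
    for i in range(1000):
        result.append(i ** 2)
    return result

def squares_comp():
    # Same list built with a comprehension
    return [i ** 2 for i in range(1000)]

# Both produce identical output; compare wall time over many runs
loop_time = timeit.timeit(squares_loop, number=1000)
comp_time = timeit.timeit(squares_comp, number=1000)
print(f"loop: {loop_time:.3f}s  comprehension: {comp_time:.3f}s")
```

The exact ratio depends on your interpreter and hardware, which is why measuring beats quoting a number.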
And don’t forget dictionary comprehensions:
names = ['Alice', 'Bob', 'Charlie']
name_lengths = {name: len(name) for name in names}
That’s cleaner than initializing an empty dict and looping. And it’s less error-prone.
Use enumerate() When You Need Indexes
How often do you write code like this?
items = ['apple', 'banana', 'cherry']
for i in range(len(items)):
    print(i, items[i])
That’s a red flag. You’re manually managing indexes when Python gives you a better tool: enumerate().
for index, item in enumerate(items):
    print(index, item)
It’s cleaner, safer, and more readable. And if you need to start counting from 1 instead of 0? Just pass the start parameter:
for index, item in enumerate(items, start=1):
    print(index, item)
This is especially useful when processing files, logs, or user input where line numbers matter. You’ll avoid off-by-one errors and make your code more maintainable.
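Here's a small sketch of that log-processing pattern. The log text is made up, but the start=1 trick gives you editor-style line numbers for free:

```python
# Hypothetical log excerpt; in practice this would come from a file
log_text = """INFO boot ok
ERROR disk full
INFO retrying
ERROR disk full"""

# start=1 makes reported line numbers match what an editor shows
error_lines = []
for lineno, line in enumerate(log_text.splitlines(), start=1):
    if line.startswith("ERROR"):
        error_lines.append(lineno)

print(error_lines)  # line numbers of the ERROR entries
```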
Use set() for Fast Membership Tests
Checking if something exists in a list? That’s slow. Like, really slow. Here’s why:
fruits = ['apple', 'banana', 'cherry', 'date', 'elderberry']
if 'cherry' in fruits:  # O(n) search
    print('Found it!')
Python has to check every item until it finds a match. With 10,000 items? That’s 10,000 comparisons. But if you turn that list into a set:
fruits = {'apple', 'banana', 'cherry', 'date', 'elderberry'}
if 'cherry' in fruits:  # O(1) lookup
    print('Found it!')
Suddenly, it’s constant time on average, no matter how big the set is. This matters when you’re filtering data, validating user input, or checking for duplicates. If you’re doing more than a few lookups, convert your list to a set first.
And yes, sets are unordered. But if you don’t care about order and just need speed? That’s the trade-off.
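A quick sketch of the convert-once, look-up-many pattern, using a made-up allow-list:

```python
# Hypothetical allow-list; building it as a set makes repeated checks cheap
allowed_users = {"alice", "bob", "charlie"}

requests_log = ["alice", "mallory", "bob", "alice", "eve"]

# One average-O(1) lookup per request instead of scanning a list each time
granted = [user for user in requests_log if user in allowed_users]
print(granted)
```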
Use pathlib for File Operations
Still using os.path.join() and open() with string paths? You’re working in 2010. Python 3.4 introduced pathlib, a modern, object-oriented way to handle files and directories.
import os
from pathlib import Path

# Old way
file_path = os.path.join('data', 'logs', 'app.log')
with open(file_path, 'r') as f:
    content = f.read()

# New way
file_path = Path('data') / 'logs' / 'app.log'
content = file_path.read_text()
It’s shorter, more readable, and handles cross-platform paths automatically. No more worrying about backslashes on Windows or forward slashes on macOS. You can also check if a file exists:
if file_path.exists():
    print("File is there")
Or list all .txt files in a folder:
for txt_file in Path('docs').glob('*.txt'):
    print(txt_file.name)
It’s not just convenient, it’s less error-prone. And it ships with Python: no third-party packages, just from pathlib import Path.
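A few more pathlib conveniences worth knowing, sketched against a temporary directory so it's safe to run anywhere (the folder and file names are arbitrary):

```python
import tempfile
from pathlib import Path

# Work inside a throwaway directory so this sketch touches nothing real
base = Path(tempfile.mkdtemp())

# parents=True creates intermediate folders; exist_ok avoids errors on reruns
log_dir = base / "data" / "logs"
log_dir.mkdir(parents=True, exist_ok=True)

log_file = log_dir / "app.log"
log_file.write_text("started\n")

print(log_file.read_text())  # round-trips the content
print(log_file.suffix)       # '.log'
print(log_file.stem)         # 'app'
```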
Use collections.defaultdict Instead of Key-Existence Checks
Counting things? Grouping data? You’ve probably written code like this:
counts = {}
for word in words:
    if word in counts:
        counts[word] += 1
    else:
        counts[word] = 1
It works. But it’s noisy. Here’s the cleaner version:
from collections import defaultdict

counts = defaultdict(int)
for word in words:
    counts[word] += 1
No checks. No conditionals. Just add. defaultdict automatically creates a default value (like 0 for int) when a key doesn’t exist. You can use it for lists too:
groups = defaultdict(list)
for item in items:
    groups[item.category].append(item)
That’s a common pattern in data processing. With defaultdict, you avoid cluttering your code with boilerplate checks.
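For the counting case specifically, the standard library goes one step further: collections.Counter does the whole loop for you and adds helpers like most_common():

```python
from collections import Counter

words = ["spam", "eggs", "spam", "spam", "ham"]

# Counter builds the word -> count mapping in one call
counts = Counter(words)
print(counts["spam"])         # 3
print(counts.most_common(1))  # [('spam', 3)]
```

defaultdict(int) is still the right tool when counting is mixed with other per-key work; for pure tallies, Counter is hard to beat.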
Use itertools for Complex Iterations
Need to loop over two lists at once? Group items in chunks? Skip every other item? Python’s itertools module has tools for these exact cases.
Want to combine two lists into pairs?
from itertools import zip_longest

names = ['Alice', 'Bob']
ages = [25, 30, 35]
for name, age in zip_longest(names, ages, fillvalue='Unknown'):
    print(f'{name} is {age}')
# Output:
# Alice is 25
# Bob is 30
# Unknown is 35
Need to group a list into chunks of 3?
from itertools import islice

def chunked(iterable, n):
    iterator = iter(iterable)
    while chunk := list(islice(iterator, n)):
        yield chunk

numbers = list(range(10))
for group in chunked(numbers, 3):
    print(group)
# Output:
# [0, 1, 2]
# [3, 4, 5]
# [6, 7, 8]
# [9]
These aren’t just neat tricks; they’re memory-efficient. Unlike creating new lists, itertools functions return iterators. That means you can process millions of items without running out of RAM.
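To see that laziness in action, here's a sketch using chain and islice. The second source below is absurdly large, but only a handful of items are ever produced:

```python
from itertools import chain, islice

# A generator over a huge range; nothing is computed until items are requested
evens = (n for n in range(10**9) if n % 2 == 0)
header = ["start"]

# chain joins the sources lazily; no combined list is ever built
stream = chain(header, evens)

# islice pulls just the first five items, so the billion-element range
# is only consumed as far as needed
first_five = list(islice(stream, 5))
print(first_five)
```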
Use functools.lru_cache for Expensive Functions
Have a function that does heavy math, makes API calls, or reads files? If it’s called multiple times with the same inputs, you’re wasting time.
import requests

def get_weather(city):
    response = requests.get(f"https://api.weather.com/{city}")
    return response.json()
Every call hits the server. Even if you’re asking for the same city 10 times. That’s slow and rude to the API.
Add a decorator:
from functools import lru_cache

@lru_cache(maxsize=128)
def get_weather(city):
    response = requests.get(f"https://api.weather.com/{city}")
    return response.json()
Now, the first call runs normally. Every later call with the same argument returns the cached result. No extra network requests. No extra processing. It’s built-in memoization. Great for recursive functions, database queries, or any slow operation with repeated inputs. Just remember the cache lives as long as the process, so skip it for data that goes stale quickly, like live weather.
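The recursive case is the easiest to see end to end. This sketch memoizes a naive Fibonacci; cache_info() (a method lru_cache attaches to the wrapped function) shows the cache doing its job:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Naive recursion is exponential without a cache;
    # with one, each value of n is computed exactly once
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040
info = fib.cache_info()
print(info)     # one miss per distinct argument, the rest are hits
```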
Use __slots__ for Memory-Efficient Classes
Creating thousands of objects? Say you’re building a simulation with 10,000 user profiles. Each one has a name, age, and email. By default, Python gives each instance a dictionary to store attributes. That’s flexible, but expensive.
Here’s the old way:
class User:
    def __init__(self, name, age, email):
        self.name = name
        self.age = age
        self.email = email
Each User object uses about 400 bytes of memory. Now, add __slots__:
class User:
    __slots__ = ['name', 'age', 'email']

    def __init__(self, name, age, email):
        self.name = name
        self.age = age
        self.email = email
Memory drops to about 80 bytes per object. That’s an 80% reduction. And attribute access gets faster too. You can’t add new attributes dynamically anymore, but if you know your structure upfront (and you usually do), this is a huge win.
Use it for data-heavy applications: simulations, game objects, logging systems, or any place where you’re creating tons of small objects.
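Two quick checks you can run to see the difference: slotted instances carry no per-instance __dict__, and typos on attribute names fail loudly instead of silently creating new attributes (the class and field names here are just for illustration):

```python
class PlainUser:
    def __init__(self, name, age):
        self.name = name
        self.age = age

class SlimUser:
    __slots__ = ("name", "age")

    def __init__(self, name, age):
        self.name = name
        self.age = age

plain = PlainUser("Ada", 36)
slim = SlimUser("Ada", 36)

# Slotted instances drop the per-instance attribute dict entirely
print(hasattr(plain, "__dict__"))  # True
print(hasattr(slim, "__dict__"))   # False

# A typo fails immediately instead of silently adding a new attribute
try:
    slim.agee = 37
except AttributeError as exc:
    print("caught:", exc)
```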
Use type hints and mypy to Catch Bugs Early
Type hints aren’t just for big teams or enterprise apps. They help you write better code, even if you’re solo.
def calculate_tax(income: float, rate: float = 0.2) -> float:
    return income * rate
Now you know exactly what this function expects and returns. Tools like mypy can scan your code and warn you if you pass a string to a function that expects a number. No runtime errors. No debugging nightmares.
It’s not magic. But it’s like having a quiet partner who catches your typos before you even run the code. And it makes your code way easier to read for others-or for your future self.
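One thing worth knowing: Python stores the hints but does not enforce them at runtime; checkers like mypy read them before the code ever runs. A small sketch (the tax function mirrors the one above):

```python
from typing import get_type_hints

def calculate_tax(income: float, rate: float = 0.2) -> float:
    # Hints document intent; the interpreter itself ignores them at runtime
    return income * rate

# Tools inspect the annotations to check call sites statically
hints = get_type_hints(calculate_tax)
print(hints)

# Running `mypy yourscript.py` would flag a call like:
#     calculate_tax("50000")   # incompatible type "str"; expected "float"
print(calculate_tax(50_000.0))  # 10000.0
```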
Use logging Instead of print()
How many times have you left a print() statement in production code? We’ve all done it. But print() is a debugging tool-not a logging tool.
Switch to Python’s built-in logging module:
import logging

logging.basicConfig(level=logging.INFO)
logging.info("User logged in: [email protected]")
logging.warning("Low disk space")
logging.error("Failed to connect to database")
You can control output levels. Turn off debug messages in production. Send logs to files. Pipe them to monitoring tools. It’s built into Python. No extra packages. Just better habits.
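Here's a self-contained sketch of level filtering and formatting. It logs to an in-memory stream only so the example can show its own output; a real app would use basicConfig or a FileHandler instead:

```python
import io
import logging

# Route this demo logger to an in-memory stream to keep the sketch portable
stream = io.StringIO()
logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)
logger.propagate = False  # keep output off the root logger

handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s: %(message)s"))
logger.addHandler(handler)

logger.debug("hidden: below INFO level")   # filtered out
logger.info("user logged in")
logger.error("failed to connect")

print(stream.getvalue())
```

Swapping the StreamHandler for logging.FileHandler("app.log") sends the same records to a file with no other changes.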
Use contextlib.suppress() to Clean Up Exception Handling
Ever written this?
import os

try:
    os.remove('temp.txt')
except FileNotFoundError:
    pass
It’s fine. But it’s verbose. Here’s a cleaner way:
from contextlib import suppress

with suppress(FileNotFoundError):
    os.remove('temp.txt')
No try, no except, no clutter. Just say: "I don’t care if this fails." Great for cleanup code, optional files, or when you’re okay with things not existing.
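suppress also accepts several exception types at once. A sketch, pointed at a temporary path that's guaranteed not to exist:

```python
import os
import tempfile
from contextlib import suppress

# A file inside a fresh temp directory, so removal would normally
# raise FileNotFoundError
missing = os.path.join(tempfile.mkdtemp(), "temp.txt")

# Multiple exception types can be suppressed in one statement
with suppress(FileNotFoundError, PermissionError):
    os.remove(missing)

print("continued without a try/except")
```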
Final Tip: Profile Before You Optimize
Don’t guess where your code is slow. Measure it. Python has a built-in profiler:
import cProfile

def my_function():
    # your code here
    pass

cProfile.run('my_function()')
It tells you exactly which functions take the most time. You’ll often find that 20% of your code is responsible for 80% of the delay. Fix that, and you’ll see real gains. Don’t optimize the wrong thing.
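For more control over the report, pair cProfile.Profile with the pstats module. This sketch profiles a deliberately slow helper (the workload is arbitrary) and sorts the output by cumulative time:

```python
import cProfile
import io
import pstats

def busy_work():
    # A deliberately slow helper so it shows up in the profile
    return sum(i * i for i in range(50_000))

def my_function():
    return [busy_work() for _ in range(5)]

profiler = cProfile.Profile()
profiler.enable()
my_function()
profiler.disable()

# Sort by cumulative time to see which call chains dominate
out = io.StringIO()
stats = pstats.Stats(profiler, stream=out).sort_stats("cumulative")
stats.print_stats(10)  # top 10 entries only
print(out.getvalue())
```

The report lists each function with its call count, total time, and cumulative time, which is usually enough to spot the hot 20%.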
These tricks aren’t magic. They’re just better ways to use Python the way it was meant to be used. Start with one. Then another. Soon, you’ll stop writing slow, clunky code, and start writing Python that feels fast, clean, and powerful.