Python Best Practices - Comprehensive Guide

Table of Contents
1. Code Style and Conventions
2. Type Hints and Static Analysis
3. Error Handling and Exceptions
4. Testing Strategies
5. Performance Optimization
6. Async Programming
7. Database Operations
8. Security Best Practices
9. Package Management
10. Documentation Standards

1. Code Style and Conventions


PEP 8 Compliance
Python Enhancement Proposal 8 defines the style guide for Python code. Following PEP 8 ensures code consistency across projects and teams. Key conventions include: 4 spaces for indentation (never tabs), maximum line length of 79 characters for code and 72 for comments, blank lines to separate functions and classes, and imports at the top of files.
Naming conventions are crucial for readability. Variables and functions
use snake_case: user_count, calculate_total(). Classes use PascalCase:
UserManager, PaymentProcessor. Constants use UPPER_SNAKE_CASE:
MAX_CONNECTIONS, DEFAULT_TIMEOUT. Private attributes prefix with underscore:
_internal_cache.
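A minimal sketch pulling these naming conventions together (the names themselves are invented for illustration):

```python
# Illustrative module following PEP 8 naming conventions.
MAX_CONNECTIONS = 10  # constant: UPPER_SNAKE_CASE

class ConnectionManager:  # class: PascalCase
    def __init__(self):
        self._pool = []  # private attribute: leading underscore

    def connection_count(self):  # method: snake_case
        return len(self._pool)

manager = ConnectionManager()
print(manager.connection_count())  # 0
```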

Import Organization
Imports should be grouped and ordered: standard library imports first, related
third-party imports second, local application imports third. Within each group,
imports are alphabetically sorted. Absolute imports are preferred over relative
imports for clarity.
Example import structure:
import os
import sys
from typing import Dict, List, Optional

import numpy as np
import pandas as pd
from fastapi import FastAPI, HTTPException

from myapp.models import User
from myapp.services import UserService

Code Structure
Modules should have a logical structure: module docstring first, then imports,
then constants, then functions and classes. Related functionality is grouped
together. Each module focuses on a single responsibility.
Functions should be small and focused, doing one thing well. Function length
is typically under 50 lines. Long functions are refactored into smaller helper
functions. Each function has a clear purpose expressible in a single sentence.
Classes encapsulate related data and behavior. Methods are similarly focused.
Property decorators provide controlled attribute access. Class methods are used
for alternative constructors. Static methods contain utility functions that don’t
need instance or class data.
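A sketch of these three method kinds in one place (the Temperature class is invented for illustration):

```python
class Temperature:
    def __init__(self, celsius: float):
        self._celsius = celsius  # private storage behind a property

    @property
    def celsius(self) -> float:
        """Controlled attribute access via a property."""
        return self._celsius

    @classmethod
    def from_fahrenheit(cls, fahrenheit: float) -> "Temperature":
        """Alternative constructor as a class method."""
        return cls((fahrenheit - 32) * 5 / 9)

    @staticmethod
    def is_freezing(celsius: float) -> bool:
        """Utility that needs no instance or class data."""
        return celsius <= 0.0

t = Temperature.from_fahrenheit(212.0)
print(t.celsius)                      # 100.0
print(Temperature.is_freezing(-5.0))  # True
```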

Code Comments and Docstrings


Comments explain why code does something, not what it does (the code itself
shows what). Comments are complete sentences with proper capitalization and
punctuation. Inline comments are used sparingly for particularly tricky code.
Docstrings document all public modules, functions, classes, and methods. We follow Google or NumPy docstring conventions for consistency. Docstrings include: summary line, detailed description, parameters with types, return value with type, raised exceptions, and usage examples.
Example docstring:
def calculate_discount(price: float, discount_percent: float) -> float:
    """Calculate final price after applying discount.

    Args:
        price: Original price before discount
        discount_percent: Discount percentage (0-100)

    Returns:
        Final price after discount applied

    Raises:
        ValueError: If discount_percent is not between 0 and 100

    Example:
        >>> calculate_discount(100.0, 20.0)
        80.0
    """
    if not 0 <= discount_percent <= 100:
        raise ValueError("Discount percent must be between 0 and 100")
    return price * (1 - discount_percent / 100)

List Comprehensions and Generator Expressions


List comprehensions provide concise syntax for creating lists: [x*2 for x in range(10)]. They’re more readable than equivalent for loops for simple transformations. Comprehensions can include filtering: [x for x in data if x > 0].
Generator expressions are similar but create iterators instead of lists: (x*2
for x in range(10)). They’re memory-efficient for large datasets since they
generate values lazily rather than creating entire lists in memory.
Dictionary and set comprehensions follow similar patterns: {key: value for
key, value in items}, {x for x in data if x > 0}. These are preferred
over loops with append/add operations for building collections.
However, comprehensions should remain readable. Complex multi-line comprehensions with multiple conditions are better expressed as explicit loops. Clarity trumps brevity.
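The memory difference between a list comprehension and a generator expression is easy to observe; exact sizes are CPython implementation details, so only the comparison is meaningful:

```python
import sys

squares_list = [x * x for x in range(1_000_000)]  # built eagerly in memory
squares_gen = (x * x for x in range(1_000_000))   # lazy iterator

# The list holds a million results; the generator holds only its state.
print(sys.getsizeof(squares_list) > sys.getsizeof(squares_gen))  # True
print(sum(squares_gen) == sum(squares_list))  # True: same values, produced lazily
```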

Context Managers
Context managers handle resource acquisition and release reliably. The with
statement ensures cleanup code runs even if exceptions occur. This is essential
for files, database connections, locks, and other resources requiring cleanup.
with open('data.txt', 'r') as f:
    content = f.read()
# File automatically closed here, even if exception occurred
Custom context managers are created using __enter__ and __exit__ methods or the contextlib.contextmanager decorator. They encapsulate setup and teardown logic.
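A minimal sketch using contextlib.contextmanager; the log list is a stand-in for real setup and teardown work:

```python
import contextlib

@contextlib.contextmanager
def managed_resource(log: list):
    """Custom context manager sketch: setup before the yield, teardown after."""
    log.append("acquired")  # setup runs on entry
    try:
        yield "resource"
    finally:
        log.append("released")  # teardown runs even if an exception occurred

events = []
with managed_resource(events) as r:
    events.append(f"using {r}")
print(events)  # ['acquired', 'using resource', 'released']
```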

2. Type Hints and Static Analysis


Type Hint Basics
Type hints specify expected types for variables, parameters, and return values.
They improve code documentation, enable better IDE support, and allow static
type checking. Type hints are optional but highly recommended for public APIs.
def greet(name: str) -> str:
    return f"Hello, {name}!"

user_count: int = 0
prices: List[float] = [10.99, 20.50, 15.00]

Generic Types
The typing module provides generic types for collections: List[int] for
list of integers, Dict[str, int] for dictionary mapping strings to integers,
Optional[str] for string or None, Union[int, float] for int or float.
from typing import Dict, List, Optional, Union

def process_users(users: List[Dict[str, Union[str, int]]]) -> Optional[str]:
    if not users:
        return None
    return users[0]['name']

Type Aliases
Complex types can be aliased for readability and reusability. Type aliases are
simple assignments, conventionally using PascalCase.
UserId = int
UserDict = Dict[str, Union[str, int]]
UserList = List[UserDict]

def get_user(user_id: UserId) -> UserDict:
    ...

Protocol and TypedDict


Protocols define structural subtyping (duck typing with type checking). Classes
implementing protocol methods satisfy the type without explicit inheritance.
from typing import Protocol

class Drawable(Protocol):
    def draw(self) -> None:
        ...

def render(obj: Drawable) -> None:
    obj.draw()

TypedDict provides type hints for dictionaries with specific keys:
from typing import TypedDict

class User(TypedDict):
    id: int
    name: str
    email: str

def create_user(user: User) -> None:
    ...

Mypy Static Type Checking


Mypy is a static type checker that analyzes code without running it. It catches
type errors before runtime. Mypy is integrated into development workflows,
running on every commit.
Configuration in mypy.ini or pyproject.toml controls checking strictness. Strict mode catches more issues but requires more complete type annotations. Incremental mode checks only changed files for faster feedback.
Type stubs provide type information for untyped libraries. The typeshed project maintains stubs for the standard library and popular packages. Custom stubs can be created for proprietary libraries.

Gradual Typing
Type hints can be added incrementally to existing codebases. Start with public APIs and gradually add types to internal functions. The # type: ignore comment suppresses type checking for specific lines when necessary.
The Any type opts out of type checking for specific values. It’s used sparingly, typically for dynamic data or when types are truly unknown. Overusing Any defeats the purpose of type hints.

3. Error Handling and Exceptions


Exception Hierarchy
Python’s exception hierarchy starts with BaseException. Most custom exceptions inherit from Exception. Never catch BaseException, as it includes SystemExit and KeyboardInterrupt, which shouldn’t be silenced.
Built-in exceptions cover common errors: ValueError for invalid values, TypeError for wrong types, KeyError for missing dictionary keys, AttributeError for missing attributes, OSError for system errors.

Custom Exceptions
Custom exceptions communicate domain-specific errors. They inherit from appropriate base exceptions and add relevant attributes.
class InsufficientFundsError(Exception):
    def __init__(self, balance: float, amount: float):
        self.balance = balance
        self.amount = amount
        super().__init__(f"Insufficient funds: balance={balance}, amount={amount}")
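Callers can then catch the domain error and use its attributes; a usage sketch (the withdraw function is invented for illustration):

```python
class InsufficientFundsError(Exception):
    def __init__(self, balance: float, amount: float):
        self.balance = balance
        self.amount = amount
        super().__init__(f"Insufficient funds: balance={balance}, amount={amount}")

def withdraw(balance: float, amount: float) -> float:
    """Illustrative helper that raises the domain-specific error."""
    if amount > balance:
        raise InsufficientFundsError(balance, amount)
    return balance - amount

try:
    withdraw(50.0, 80.0)
except InsufficientFundsError as e:
    # The attributes carry structured context beyond the message string.
    print(f"Short by {e.amount - e.balance}")  # Short by 30.0
```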

Exception Handling Patterns


Catch specific exceptions rather than bare except:. Handle only exceptions
you can meaningfully recover from. Let unexpected exceptions propagate to be
logged at higher levels.
try:
    result = risky_operation()
except ValueError as e:
    logger.warning(f"Invalid value: {e}")
    result = default_value
except KeyError as e:
    logger.error(f"Missing key: {e}")
    raise
The else clause runs if no exception occurred. It’s cleaner than putting success
logic after the try block. The finally clause always runs for cleanup, even if
exceptions occur or the function returns early.
try:
    f = open('data.txt')
except FileNotFoundError:
    logger.error("File not found")
else:
    content = f.read()
finally:
    if 'f' in locals():
        f.close()

Raising Exceptions
Raise exceptions for error conditions that callers should handle. Include descriptive error messages with relevant context. Chain exceptions to preserve the original error context using raise NewException() from original_exception.
def withdraw(account: Account, amount: float) -> None:
    try:
        if account.balance < amount:
            raise InsufficientFundsError(account.balance, amount)
        account.balance -= amount
    except DatabaseError as e:
        raise WithdrawalError("Failed to process withdrawal") from e
EAFP vs LBYL
Python favors EAFP (Easier to Ask for Forgiveness than Permission): try the
operation and handle exceptions if they occur. This is more Pythonic than
LBYL (Look Before You Leap): checking conditions before operations.
EAFP:
try:
    value = dictionary[key]
except KeyError:
    value = default
LBYL:
if key in dictionary:
    value = dictionary[key]
else:
    value = default
EAFP is often cleaner and avoids race conditions where checked conditions
change before use.
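For this specific dictionary lookup, the stdlib also offers a third idiom that sidesteps the EAFP/LBYL choice entirely:

```python
config = {"host": "localhost"}

# dict.get combines the lookup and the default in a single call,
# with no exception handling and no race between check and use.
host = config.get("host", "0.0.0.0")
port = config.get("port", 8080)
print(host, port)  # localhost 8080
```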

Logging Exceptions
Exceptions are logged with full context for debugging. The logger.exception() method automatically includes the exception traceback. Sensitive data is redacted from exception messages and logs.
try:
    process_payment(user_id, amount)
except PaymentError:
    logger.exception(f"Payment failed for user {user_id}")
    raise

4. Testing Strategies
Unit Testing
Unit tests verify individual functions and methods in isolation. Dependencies
are mocked to focus tests on the unit under test. Tests are fast, running in
milliseconds.
import unittest
from unittest.mock import Mock, patch

class TestUserService(unittest.TestCase):
    def setUp(self):
        self.mock_db = Mock()
        self.service = UserService(self.mock_db)

    def test_get_user_success(self):
        self.mock_db.get_user.return_value = {'id': 1, 'name': 'Alice'}
        user = self.service.get_user(1)
        self.assertEqual(user['name'], 'Alice')

    def test_get_user_not_found(self):
        self.mock_db.get_user.return_value = None
        with self.assertRaises(UserNotFoundError):
            self.service.get_user(999)

Pytest Framework
Pytest provides a more Pythonic testing framework than unittest. Tests are
simple functions, assertions use plain assert statements, and fixtures provide
test dependencies.
import pytest

@pytest.fixture
def user_service(mocker):
    db = mocker.Mock()
    return UserService(db)

def test_get_user(user_service):
    user_service.db.get_user.return_value = {'id': 1, 'name': 'Alice'}
    user = user_service.get_user(1)
    assert user['name'] == 'Alice'
Pytest marks categorize tests: @pytest.mark.slow for slow tests, @pytest.mark.integration for integration tests. Marks enable running test subsets: pytest -m "not slow".

Test Coverage
Code coverage measures what percentage of code is executed by tests. Coverage
tools like [Link] identify untested code. We aim for 80%+ coverage for
critical code paths.
coverage run -m pytest
coverage report
coverage html # Generate HTML report
Coverage isn’t a perfect metric: 100% coverage doesn’t guarantee correct behav-
ior, but low coverage definitely indicates insufficient testing. Focus on testing
important behaviors rather than hitting coverage targets.

Mocking and Patching
Mocks replace dependencies with controllable test doubles. This isolates units
and avoids external dependencies (databases, APIs, file systems) in unit tests.
from unittest.mock import Mock, patch

def test_send_email():
    with patch('myapp.email.smtp_client') as mock_smtp:
        send_welcome_email('user@example.com')
        mock_smtp.send.assert_called_once()
Mocks can be configured to return specific values, raise exceptions, or record
calls for assertions. Over-mocking makes tests brittle, testing implementation
rather than behavior. Mock only external boundaries.

Parametrized Tests
Parametrized tests run the same test logic with different inputs. This thoroughly
tests edge cases and boundary conditions without duplicating test code.
@pytest.mark.parametrize("input,expected", [
    (0, 0),
    (1, 1),
    (2, 4),
    (3, 9),
    (-1, 1),
])
def test_square(input, expected):
    assert square(input) == expected

Integration Testing
Integration tests verify components working together. They use real dependencies (test databases, queues) rather than mocks. They’re slower than unit tests but catch integration issues.
Integration tests run in isolated environments: test databases are created, populated, tested, and destroyed for each test run. This ensures clean state and test independence.

Test Data Management


Test fixtures provide consistent test data. Fixtures can be functions returning
data, classes, or database records. They’re defined centrally and reused across
tests.
Factory patterns generate test data with sensible defaults and customizable
fields:

def create_user(name='Test User', email='test@example.com', **kwargs):
    return User(name=name, email=email, **kwargs)

def test_user_creation():
    user = create_user(name='Alice')
    assert user.name == 'Alice'
    assert user.email == 'test@example.com'

Continuous Testing
Tests run automatically on every commit via CI/CD pipelines. Fast unit tests
run first, providing quick feedback. Slower integration tests run after unit tests
pass. Tests must pass before merging code.

5. Performance Optimization
Profiling
Profiling identifies performance bottlenecks. The cProfile module profiles function calls, showing time spent in each function. Profile data is analyzed to find hot spots.
import cProfile
import pstats

profiler = cProfile.Profile()
profiler.enable()
expensive_operation()
profiler.disable()

stats = pstats.Stats(profiler)
stats.sort_stats('cumulative')
stats.print_stats(10)  # Top 10 functions
Line profilers like line_profiler show time spent on individual lines within functions, identifying exact bottlenecks.

Algorithmic Optimization
The biggest performance gains come from algorithmic improvements. Reducing
complexity from O(n²) to O(n log n) dramatically improves performance for
large inputs.
Data structure choice impacts performance: lists for sequential access, sets for
membership testing, dictionaries for key-value lookups. Using appropriate data
structures avoids unnecessary computation.
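The membership-testing difference is easy to demonstrate; timings vary by machine, so this sketch checks only the relative order:

```python
import timeit

data_list = list(range(100_000))
data_set = set(data_list)

# Membership in a list is O(n); in a set it is O(1) on average.
list_time = timeit.timeit(lambda: 99_999 in data_list, number=100)
set_time = timeit.timeit(lambda: 99_999 in data_set, number=100)
print(set_time < list_time)  # True: the set lookup is dramatically faster
```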

Caching
Caching stores computation results to avoid repeating expensive operations.
The functools.lru_cache decorator implements Least Recently Used caching:
from functools import lru_cache

@lru_cache(maxsize=128)
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n-1) + fibonacci(n-2)
Cache size is tuned based on memory constraints and hit rates. Caches are
invalidated when underlying data changes.
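lru_cache also exposes hit-rate introspection and manual invalidation, which supports both of those tuning tasks:

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def fibonacci(n: int) -> int:
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

fibonacci(30)
info = fibonacci.cache_info()          # hits, misses, maxsize, currsize
print(info.hits > 0)                   # True: recursive calls hit the cache
fibonacci.cache_clear()                # invalidate when underlying data changes
print(fibonacci.cache_info().currsize) # 0
```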

Generator Functions
Generators produce values lazily, one at a time, rather than creating entire
sequences in memory. They’re memory-efficient for large or infinite sequences.
def read_large_file(file_path):
    with open(file_path) as f:
        for line in f:
            yield line.strip()

# Processes file line by line without loading it entirely into memory
for line in read_large_file('data.txt'):
    process(line)

String Concatenation
Repeated string concatenation using + is inefficient since strings are immutable.
Each concatenation creates a new string, copying all previous characters.
# Inefficient
result = ""
for item in items:
    result += str(item)

# Efficient
result = "".join(str(item) for item in items)

Database Query Optimization


Database queries are often performance bottlenecks. N+1 query problems occur
when querying in loops. Bulk operations and eager loading prevent this.
# N+1 queries - inefficient
users = session.query(User).all()
for user in users:
    print(user.posts)  # Separate query per user

# Eager loading - efficient
users = session.query(User).options(joinedload(User.posts)).all()
for user in users:
    print(user.posts)  # Loaded in a single query
Indexes on frequently queried columns dramatically improve query performance.
Query execution plans identify missing indexes and optimization opportunities.

Multiprocessing
The Global Interpreter Lock (GIL) prevents true parallelism with threads for
CPU-bound tasks. Multiprocessing spawns separate processes, each with its
own GIL, enabling parallel execution.
from multiprocessing import Pool

def process_item(item):
    return expensive_computation(item)

with Pool(processes=4) as pool:
    results = pool.map(process_item, items)
Multiprocessing has overhead: spawning processes and passing data between
them. It’s beneficial for CPU-intensive tasks but not for I/O-bound tasks where
asyncio is better.

6. Async Programming
Async/Await Basics
Async programming enables concurrent I/O operations without threading overhead. Async functions are defined with async def and called with await.
import asyncio
import aiohttp

async def fetch_data(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return await response.json()

async def main():
    data = await fetch_data('https://api.example.com/data')
    print(data)

asyncio.run(main())

Event Loop
The event loop manages async task execution. It schedules tasks, handles I/O,
and switches between tasks when they await. Each program has one event loop
per thread.
Tasks run concurrently within the event loop. When a task awaits an I/O
operation, the loop switches to another task. This achieves concurrency without
parallelism.
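Concurrency without parallelism can be observed directly: three awaited sleeps overlap, so total time tracks the longest one rather than the sum:

```python
import asyncio
import time

async def work(delay: float) -> float:
    await asyncio.sleep(delay)  # yields control back to the event loop
    return delay

async def main() -> float:
    start = time.perf_counter()
    await asyncio.gather(work(0.1), work(0.1), work(0.1))
    return time.perf_counter() - start

elapsed = asyncio.run(main())
print(elapsed < 0.3)  # True: roughly 0.1s total, not 0.3s
```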

Asyncio Patterns
Multiple async operations run concurrently using asyncio.gather():
results = await asyncio.gather(
    fetch_data(url1),
    fetch_data(url2),
    fetch_data(url3),
)
Timeouts prevent operations from hanging indefinitely:
try:
    result = await asyncio.wait_for(slow_operation(), timeout=5.0)
except asyncio.TimeoutError:
    print("Operation timed out")

Async Context Managers


Async context managers handle async resource acquisition and cleanup. They
use async with and implement __aenter__ and __aexit__.
class AsyncDatabaseConnection:
    async def __aenter__(self):
        self.conn = await connect_to_database()
        return self.conn

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        await self.conn.close()

async with AsyncDatabaseConnection() as conn:
    data = await conn.execute("SELECT * FROM users")

Async Generators
Async generators yield values asynchronously. They’re iterated with async for.

async def fetch_pages(url_list):
    for url in url_list:
        data = await fetch_data(url)
        yield data

async for page in fetch_pages(urls):
    process(page)

Mixing Sync and Async


Blocking synchronous code in async functions blocks the entire event loop. CPU-
intensive or blocking I/O operations run in thread or process pools:
import asyncio
from concurrent.futures import ThreadPoolExecutor

async def async_wrapper():
    loop = asyncio.get_event_loop()
    with ThreadPoolExecutor() as pool:
        result = await loop.run_in_executor(pool, blocking_function)
    return result
Async libraries exist for most I/O operations: aiohttp for HTTP, aiopg for
PostgreSQL, aioredis for Redis. Using sync libraries in async code negates
benefits.

7. Database Operations
Database Connection Pooling
Connection pools maintain reusable database connections. Creating connections
is expensive; pooling amortizes this cost across many requests.
from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

engine = create_engine(
    'postgresql://user:pass@localhost/db',
    poolclass=QueuePool,
    pool_size=10,
    max_overflow=20,
)
Pool size is tuned based on concurrency: too small causes contention, too large
exhausts database connections. Connections are validated before use to detect
failures.

ORM Best Practices
Object-Relational Mapping libraries like SQLAlchemy map database tables to Python classes. They provide abstraction over SQL, handle connection management, and prevent SQL injection.
Models are defined as classes:
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'

    id = Column(Integer, primary_key=True)
    name = Column(String(100), nullable=False)
    email = Column(String(100), unique=True, nullable=False)
Queries use the ORM query API:
# Select
users = session.query(User).filter(User.name.like('A%')).all()

# Insert
new_user = User(name='Alice', email='alice@example.com')
session.add(new_user)
session.commit()

# Update
user = session.query(User).filter_by(id=1).first()
user.email = 'newemail@example.com'
session.commit()

# Delete
session.query(User).filter_by(id=1).delete()
session.commit()

Raw SQL Queries


ORMs add overhead and sometimes produce suboptimal queries. For
performance-critical paths, raw SQL provides full control.
from sqlalchemy import text

result = session.execute(
    text("SELECT * FROM users WHERE email = :email"),
    {'email': user_email},
)
users = result.fetchall()

Parameterized queries prevent SQL injection. Never use string formatting to
construct SQL with user input.

Database Migrations
Schema changes are managed through migrations: versioned scripts that modify
database schema. Tools like Alembic track applied migrations and apply new
ones.
# Migration: Add email column
def upgrade():
    op.add_column('users', sa.Column('email', sa.String(100)))

def downgrade():
    op.drop_column('users', 'email')
Migrations are tested in development, applied to staging for validation, then
applied to production. Rollback procedures handle migration failures.

Transaction Management
Transactions group operations into atomic units: all succeed or all fail. This
maintains data consistency.
with session.begin():
    user = User(name='Alice')
    session.add(user)
    account = Account(user_id=user.id, balance=100)
    session.add(account)
# Transaction commits here if no exception, rolls back if exception
Transactions use appropriate isolation levels. Read Committed prevents dirty
reads. Repeatable Read prevents non-repeatable reads. Serializable prevents all
anomalies but has performance cost.
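Atomicity can be demonstrated with the stdlib sqlite3 module, used here as a stand-in for the application database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
conn.execute("INSERT INTO accounts (balance) VALUES (100.0)")
conn.commit()

try:
    with conn:  # connection as context manager: commit on success, rollback on error
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE id = 1")
        raise RuntimeError("simulated failure mid-transaction")
except RuntimeError:
    pass

balance = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
print(balance)  # 100.0: the update was rolled back with the failed transaction
```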

Database Indexing
Indexes dramatically speed up queries on indexed columns. They’re created on
columns frequently used in WHERE clauses, JOIN conditions, and ORDER BY
clauses.
from sqlalchemy import Index

Index('idx_user_email', User.email)
Indexes have costs: storage overhead and slower writes (indexes must be updated). Unused indexes should be removed. Index usage is monitored through database analytics.

8. Security Best Practices
Input Validation
All user input is validated before processing. Never trust client-side validation
alone; always validate server-side. Use allowlists (accepting known-good input)
rather than denylists (rejecting known-bad input).
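An allowlist check in plain Python (the field names are illustrative):

```python
ALLOWED_SORT_FIELDS = {"name", "created_at", "email"}  # known-good values only

def validate_sort_field(field: str) -> str:
    """Accept only values on the allowlist; reject everything else."""
    if field not in ALLOWED_SORT_FIELDS:
        raise ValueError(f"Unsupported sort field: {field!r}")
    return field

print(validate_sort_field("name"))  # name
# validate_sort_field("1; DROP TABLE users") would raise ValueError
```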
from pydantic import BaseModel, EmailStr, Field, constr

class UserCreate(BaseModel):
    name: constr(min_length=1, max_length=100)
    email: EmailStr
    age: int = Field(gt=0, lt=150)
Pydantic validates input against schemas automatically, raising exceptions for
invalid data.

SQL Injection Prevention


Use parameterized queries exclusively. Never concatenate user input into SQL
strings.
# VULNERABLE - Never do this
query = f"SELECT * FROM users WHERE email = '{user_email}'"

# SAFE - Always do this
query = "SELECT * FROM users WHERE email = :email"
result = session.execute(text(query), {'email': user_email})
ORMs automatically use parameterized queries, providing built-in SQL injection protection.

Password Security
Never store passwords in plain text. Hash passwords using strong algorithms
like bcrypt or Argon2. Hashes are one-way: you can’t recover the password,
only verify if a provided password matches.
from passlib.context import CryptContext

pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")

# Hashing
hashed = pwd_context.hash("user_password")

# Verification
is_valid = pwd_context.verify("user_password", hashed)

Password policies enforce minimum complexity: length requirements, character
variety, no common passwords. Failed login attempts are rate-limited to prevent
brute force attacks.

Secrets Management
Never hardcode secrets (API keys, passwords, tokens) in code. Store secrets in
environment variables or dedicated secret management systems.
import os

DATABASE_URL = os.getenv('DATABASE_URL')
API_KEY = os.getenv('API_KEY')
Secret management systems like HashiCorp Vault provide encrypted storage,
access control, and secret rotation. Secrets are fetched at runtime and never
committed to version control.

Dependency Security
Third-party dependencies can contain vulnerabilities. Tools like safety scan
dependencies against vulnerability databases.
pip install safety
safety check
Dependencies are regularly updated to patch vulnerabilities. Automated tools create pull requests for dependency updates. Critical security updates are applied immediately.

HTTPS Enforcement
All production traffic uses HTTPS, encrypting data in transit. HTTP requests are automatically redirected to HTTPS. HSTS headers prevent downgrade attacks.
from fastapi import FastAPI
from fastapi.middleware.httpsredirect import HTTPSRedirectMiddleware

app = FastAPI()
app.add_middleware(HTTPSRedirectMiddleware)
Certificates are obtained from trusted certificate authorities and renewed automatically. Certificate pinning can prevent man-in-the-middle attacks for mobile applications.

9. Package Management
Virtual Environments
Virtual environments isolate project dependencies. Each project has its own
environment with specific package versions.
python -m venv .venv
source .venv/bin/activate # Linux/Mac
.venv\Scripts\activate # Windows
Requirements are tracked in requirements.txt:
pip freeze > requirements.txt
pip install -r requirements.txt

Dependency Pinning
Pinned dependencies specify exact versions: requests==2.28.1. This ensures
reproducible builds but requires manual updates.
Version ranges allow compatible updates: requests>=2.28,<3.0. This gets
bug fixes automatically but risks breaking changes.
Poetry and Pipenv provide deterministic dependency resolution with lock files
that pin all transitive dependencies.

Package Distribution
Packages are distributed via PyPI (Python Package Index). Projects define metadata in pyproject.toml or setup.py:
[tool.poetry]
name = "mypackage"
version = "0.1.0"
description = "My awesome package"

[tool.poetry.dependencies]
python = "^3.9"
requests = "^2.28"
Semantic versioning communicates change impact: MAJOR.MINOR.PATCH. Major versions break compatibility, minor versions add features, patch versions fix bugs.
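Version precedence can be sketched with a tuple comparison; this handles only plain MAJOR.MINOR.PATCH strings, not pre-release or build tags:

```python
def parse_version(version: str) -> tuple:
    """Parse 'MAJOR.MINOR.PATCH' into a tuple of ints for correct comparison."""
    return tuple(int(part) for part in version.split("."))

print(parse_version("2.28.1") < parse_version("2.28.2"))  # True: patch bump
print(parse_version("2.9.0") < parse_version("2.28.0"))   # True: numeric, not lexicographic
```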

10. Documentation Standards


Module Docstrings
Every module starts with a docstring describing its purpose and contents:

"""User management services.

This module provides services for user operations including:

- User creation and validation
- Password hashing and verification
- User profile management

Example:
    service = UserService(db)
    user = service.create_user(name="Alice", email="alice@example.com")
"""

Function Documentation
Functions document parameters, return values, exceptions, and provide examples:
def calculate_tax(income: float, rate: float) -> float:
    """Calculate tax amount based on income and rate.

    Args:
        income: Gross income amount
        rate: Tax rate as decimal (e.g., 0.25 for 25%)

    Returns:
        Tax amount to be paid

    Raises:
        ValueError: If income is negative or rate is not between 0 and 1

    Example:
        >>> calculate_tax(50000, 0.25)
        12500.0
    """

API Documentation
Public APIs are documented comprehensively. FastAPI automatically generates
OpenAPI documentation from type hints and docstrings:
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="My API", version="1.0.0")

class User(BaseModel):
    """User model."""

    name: str
    email: str

@app.post("/users/", response_model=User)
def create_user(user: User):
    """Create a new user.

    Args:
        user: User data

    Returns:
        Created user object

    Raises:
        HTTPException: If email already exists
    """
    return user
Documentation is hosted and versioned alongside code. It’s reviewed during
code review to ensure accuracy. Outdated documentation is worse than no
documentation.

This comprehensive guide covers Python best practices across all areas of development. Following these practices produces maintainable, secure, and performant code. Regular code reviews and continuous learning ensure practices evolve with the language and ecosystem.
