Software Engineer Concepts

Object Oriented Programming

● Object
○ Entity having properties as attributes and methods/procedures. Instance of a
class.
● Class
○ Defines the data format, attributes and provides implementation of methods
● Composition
○ Objects containing other objects in their instance variables. Used to represent
has-a relationship
● Inheritance
○ Is-a-type of relationship. Subclass Inherits attributes and code from parent or
superclass.
○ Abstract classes cannot be instantiated into objects
○ Multiple inheritance, i.e., inheriting from more than one parent class, is
allowed in some languages. It suffers from the diamond problem. Python resolves
the diamond problem with a Method Resolution Order (MRO) based on C3 linearization.
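The diamond problem and Python's C3-based MRO can be seen in a minimal sketch (the class names are illustrative):

```python
# Diamond inheritance: D inherits from B and C, which both inherit from A.
class A:
    def who(self):
        return "A"

class B(A):
    def who(self):
        return "B"

class C(A):
    def who(self):
        return "C"

class D(B, C):
    pass

# C3 linearization orders the search D -> B -> C -> A -> object,
# so D resolves who() unambiguously from B.
print([cls.__name__ for cls in D.__mro__])  # ['D', 'B', 'C', 'A', 'object']
print(D().who())  # B
```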
● Polymorphism
○ Providing single interface/symbol to represent multiple different types
○ Types
■ Parametric Polymorphism (Templates in C++)
● Using parametric polymorphism, a function or a data type can be
written generically so that it can handle values identically without
depending on their type. Allows static type safety.
■ Subtyping
● Subtyping is a form of type polymorphism in which a subtype is a
data type that is related to another datatype (the supertype) by
some notion of substitutability, meaning that program elements,
typically subroutines or functions, written to operate on elements
of the supertype can also operate on elements of the subtype.
■ Row Polymorphism (duck typing in Python)

○ Static Polymorphism
○ Dynamic Polymorphism
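Duck-typed polymorphism, as used in Python, can be sketched in a few lines (the classes here are made-up examples):

```python
# speak() works for any object that provides a sound() method;
# no shared base class or declared interface is required.
class Dog:
    def sound(self):
        return "woof"

class Robot:
    def sound(self):
        return "beep"

def speak(thing):
    # Only the presence of sound() matters, not the concrete type.
    return thing.sound()

print([speak(x) for x in (Dog(), Robot())])  # ['woof', 'beep']
```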

● Encapsulation/ Data Abstraction (getters/setters)


○ Data Abstraction is a design pattern in which data are visible only to semantically
related functions, so as to prevent misuse. The success of data abstraction leads
to frequent incorporation of data hiding as a design principle in object oriented
and pure functional programming.
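A small sketch of encapsulation via getters/setters in Python, using @property (the Account class and its validation rule are invented for illustration):

```python
class Account:
    """Encapsulates a balance behind a getter/setter pair."""

    def __init__(self):
        self._balance = 0  # leading underscore signals "internal" by convention

    @property
    def balance(self):
        return self._balance

    @balance.setter
    def balance(self, value):
        # The setter is the only write path, so it can enforce invariants.
        if value < 0:
            raise ValueError("balance cannot be negative")
        self._balance = value

acct = Account()
acct.balance = 50     # goes through the setter
print(acct.balance)   # 50
```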

● Operator Overloading, Static methods, Friend functions, Magic methods


○ A static method (or static function) is a method defined as a member of a
class but accessible directly on the class itself, rather than on an object
instance created from it. It does not receive the instance as an implicit
argument.
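In Python this is the @staticmethod decorator; a minimal sketch (the converter class is a made-up example):

```python
class TemperatureConverter:
    # A static method lives in the class namespace and needs no
    # instance state; it is called on the class itself.
    @staticmethod
    def c_to_f(celsius):
        return celsius * 9 / 5 + 32

print(TemperatureConverter.c_to_f(100))  # 212.0
```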

● Abstract Class
● Class Interfaces

Data Structures
Listing all important data structures required to cover for interviews.

● Arrays
● Linked List
● Binary Search Tree
● Balanced Binary Search Tree (AVL or red black)
● Heap (Min / Max heap)
● Hash table
● Abstract data types: Tree, Graph, Stacks, Queues, Priority Queues, List, Set, Tuple, Map
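As one concrete example from the list above, a binary min-heap is available in Python's standard library as heapq; pushes and pops are O(log n) and the minimum is always at index 0:

```python
import heapq

heap = []
for value in [5, 1, 4, 2, 3]:
    heapq.heappush(heap, value)

print(heap[0])                                  # 1 (peek at the minimum)
print([heapq.heappop(heap) for _ in range(5)])  # [1, 2, 3, 4, 5]
```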

Advanced Data Structures

● Probabilistic data structures: Skip lists, Bloom filters
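A toy Bloom filter sketch, to illustrate the trade-off (sizes and the hashing scheme here are arbitrary choices, not a production design): membership tests can return false positives but never false negatives.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hash positions per item over a bit array."""

    def __init__(self, size=1024, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive k positions by salting a single hash function.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # False means definitely absent; True means "probably present".
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("alice")
print(bf.might_contain("alice"))  # True (added items are never missed)
```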

Algorithms

Mentioning some of the popular algorithmic techniques. The list of algorithms is only for
academic and reference purposes. It is not important to memorize the algorithms, rather focus
should be on understanding the technique and time complexity used in the algorithm.

Time and Space Complexity Analysis

Graph Algorithms
● Breadth First Search (BFS)
● Depth First Search (DFS)
● Prim's algorithm (Also Greedy Algorithm)
● Kruskal's algorithm (Also Greedy Algorithm)
● Dijkstra Algorithm (Also Greedy and Dynamic)
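A minimal BFS over an adjacency-list graph (the sample graph is made up); the queue discipline is what makes the traversal level-by-level:

```python
from collections import deque

def bfs(graph, start):
    """Return vertices in breadth-first order from start."""
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        node = queue.popleft()          # FIFO queue => level-by-level order
        order.append(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)   # mark on enqueue to avoid duplicates
                queue.append(neighbor)
    return order

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D']
```

Swapping the deque for a stack (pop from the same end as push) turns this into an iterative DFS.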

Divide and Conquer


● Binary Search
● Merge Sort
● Closest Pair of Points
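Binary search is the simplest divide-and-conquer example: each comparison halves the search space, giving O(log n) on a sorted list.

```python
def binary_search(items, target):
    """Return the index of target in a sorted list, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1   # discard the left half
        else:
            hi = mid - 1   # discard the right half
    return -1

data = [2, 3, 5, 7, 11, 13]
print(binary_search(data, 11))  # 4
print(binary_search(data, 4))   # -1
```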

Greedy Approach
● Interval Scheduling
● Fractional Knapsack
● Event Selection
● Dijkstra Algorithm

Dynamic Programming
● Cut rod
● Edit Distance
● Longest Increasing Subsequence
● Dijkstra Algorithm
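As one worked DP example from the list, Longest Increasing Subsequence in its classic O(n^2) form (an O(n log n) variant also exists):

```python
def longest_increasing_subsequence(seq):
    """O(n^2) DP: best[i] = length of the LIS ending at index i."""
    if not seq:
        return 0
    best = [1] * len(seq)
    for i in range(1, len(seq)):
        for j in range(i):
            if seq[j] < seq[i]:
                # Extending the LIS that ends at j with seq[i].
                best[i] = max(best[i], best[j] + 1)
    return max(best)

print(longest_increasing_subsequence([10, 9, 2, 5, 3, 7, 101, 18]))  # 4
```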

Recursive Backtracking
● N-queen Problem
● Maze-Solving Algorithm
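A compact backtracking solution that counts N-queens placements, pruning columns and both diagonals as it recurses:

```python
def count_n_queens(n):
    """Place one queen per row; backtrack on attacked squares."""
    def place(row, cols, diag1, diag2):
        if row == n:
            return 1  # all rows filled: one valid placement
        total = 0
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue  # square is attacked; prune this branch
            total += place(row + 1,
                           cols | {col},
                           diag1 | {row - col},
                           diag2 | {row + col})
        return total
    return place(0, frozenset(), frozenset(), frozenset())

print(count_n_queens(4))  # 2
print(count_n_queens(8))  # 92
```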

Randomized Algorithms (very advanced topics)


● Quick sort
● Max Cut (Monte Carlo Algorithm)
● Karger’s Min Cut Algorithm
Databases
Relational Databases
● Referential integrity
○ Referential integrity is a property of data stating that all its references are valid. In
the context of relational databases, it requires that if a value of one attribute
(column) of a relation (table) references a value of another attribute (either in the
same or a different relation), then the referenced value must exist.
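A quick sqlite3 sketch of referential integrity in action (the tables are made up; note SQLite enforces foreign keys only when the pragma is on):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # off by default in SQLite
conn.execute("CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE book (
    id INTEGER PRIMARY KEY,
    author_id INTEGER REFERENCES author(id))""")

conn.execute("INSERT INTO author (id, name) VALUES (1, 'Knuth')")
conn.execute("INSERT INTO book (id, author_id) VALUES (1, 1)")  # valid reference

try:
    # author 99 does not exist, so this violates referential integrity
    conn.execute("INSERT INTO book (id, author_id) VALUES (2, 99)")
except sqlite3.IntegrityError as err:
    print("rejected:", err)
```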
● Keys
○ In the relational model of databases, a primary key is a specific choice of a
minimal set of attributes (columns) that uniquely specify a tuple (row) in a relation
(table). Informally, a primary key is "which attributes identify a record," and in
simple cases constitute a single attribute: a unique ID. More formally, a primary
key is a choice of candidate key (a minimal superkey); any other candidate key is
an alternate key.

○ A primary key may consist of real-world observables, in which case it is called a


natural key, while an attribute created to function as a key and not used for
identification outside the database is called a surrogate key.
● ERD
● Normalization
○ Data integrity is the overall accuracy, completeness, and consistency of data
○ Database normalization is the process of structuring a database, usually a
relational database, in accordance with a series of so-called normal forms in
order to reduce data redundancy and improve data integrity. Normalization
provides solution for undesirable side effects during data modification such as
Update Anomaly, Insertion Anomaly, Deletion Anomaly
○ Denormalization is a strategy used on a previously normalized database to
improve read performance, at the expense of some write performance, by adding
redundant copies of data or by grouping data.

● Procedures and User Defined Functions (UDF)


○ A procedure is a named block which performs one or more specific tasks. This is
similar to a procedure in other programming languages. A procedure has a
header and a body.
○ A function is a named block similar to a procedure. The major difference is
that a function must always return a value, whereas a procedure may or may not
return a value. A function typically allows only SELECT statements and hence
cannot be used to modify data.
ACID (Atomicity, Consistency, Isolation, Durability)
● Atomicity
○ All changes to data are performed as if they are a single operation. That is, all the
changes are performed, or none of them are. For example, in an application that
transfers funds from one account to another, the atomicity property ensures that,
if a debit is made successfully from one account, the corresponding credit is
made to the other account. Achieved through transaction logs.
● Consistency
○ Consistency ensures that a transaction can only bring the database from one
valid state to another, maintaining database invariants: any data written to the
database must be valid according to all defined rules, including constraints,
cascades, triggers, and any combination thereof. Data is in a consistent state
when a transaction starts and when it ends.
● Isolation
○ Transactions are often executed concurrently (e.g., multiple transactions reading
and writing to a table at the same time). Isolation ensures that concurrent
execution of transactions leaves the database in the same state that would have
been obtained if the transactions were executed sequentially. Isolation is the
main goal of concurrency control; depending on the method used, the effects of
an incomplete transaction might not even be visible to other transactions.
● Durability
○ Durability guarantees that once a transaction has been committed, it will remain
committed even in the case of a system failure (e.g., power outage or crash).
This usually means that completed transactions (or their effects) are recorded in
non-volatile memory. Achieved through transaction logs.
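Atomicity in particular can be demonstrated with sqlite3's transaction support (the account table and amounts are made up for the funds-transfer example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO account VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
conn.commit()

def transfer(conn, amount):
    # 'with conn' wraps the statements in one transaction:
    # it commits on success and rolls back if an exception escapes.
    with conn:
        conn.execute(
            "UPDATE account SET balance = balance - ? WHERE name = 'alice'",
            (amount,))
        raise RuntimeError("simulated crash between debit and credit")
        # the matching credit to bob never runs

try:
    transfer(conn, 50)
except RuntimeError:
    pass

# Atomicity: the debit was rolled back, so neither half of the transfer is visible.
print(conn.execute("SELECT balance FROM account ORDER BY name").fetchall())
# [(100,), (0,)]
```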

Transactional logs
● Article on transaction logs by Percona (URL in the Links section below).
● Transaction logs are used to guarantee atomicity and durability using write-ahead
logging.
○ Write-ahead logging (WAL) is a family of techniques for providing atomicity and
durability (two of the ACID properties) in database systems. The changes are
first recorded in the log, which must be written (flushed) to stable storage, before
the changes are written to the database.

MVCC
Multiversion concurrency control (MCC or MVCC) is a non-locking concurrency control method
commonly used by database management systems to provide concurrent access to the
database and in programming languages to implement transactional memory. When an MVCC
database needs to update a piece of data, it will not overwrite the original data item with new
data, but instead creates a newer version of the data item. Thus there are multiple versions
stored.
Shard (Database) and Table Partitioning
● Shard
○ Horizontal partitions of the table residing on a separate database instance/node.
● Partition
○ Partition of a table residing in the same database instance. This can speed
up lookups. Partitions can easily be truncated.

Indexes
● Full index vs Partial index
● Hash indexes
● B Tree index
● Clustering

Database Replication, Consistency problems in replication


In memory key-value database

Data models
● Relational Model
○ ERD, RDBMS
● Object Model
○ ORM, ORDBMS
● Document Model
○ MongoDB
● Graph Model
○ Graph Databases like Neo4j, ArangoDB
● Multivalue Model
● Network Model

NoSQL
● Graph Databases
● Key-value
● document stores
● Column-oriented DBMS
● Time Series
● Vector Databases

Row Oriented and Column Oriented Stores


OLAP vs OLTP systems
Database Optimization Techniques
● Read Optimization
○ Query optimization
■ Using EXPLAIN / ANALYZE
■ Indexes
■ Reducing Joins if possible
○ Table Partitioning
○ Database Replication (Read queries can be distributed across multiple database
servers to reduce load)
○ Table sharding
● Write Optimization

SQL databases

Difference between MyISAM and InnoDB

Know the features and differences


● MySQL
● PostgreSQL
● Oracle
● MS SQL Server
Web and Networks

TCP/IP stack

HTTP Request/Response Cycle

Session/Cookie Management (File vs Database)

Content Delivery Network


● A content delivery network (CDN) is a group of geographically distributed servers that
speed up the delivery of web content by bringing it closer to where users are. Data
centers across the globe use caching, a process that temporarily stores copies of files,
so that you can access internet content from a web-enabled device or browser more
quickly through a server near you. CDNs cache content like web pages, images, and
video in proxy servers near to your physical location.

Load Balancing
In computing, load balancing refers to the process of distributing a set of tasks over a set of
resources (computing units), with the aim of making their overall processing more efficient. Load
balancing can optimize the response time and avoid unevenly overloading some compute
nodes while other compute nodes are left idle.
https://cloud.google.com/load-balancing/docs
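The simplest distribution policy is round robin; a toy sketch (backend addresses are invented, and a real balancer would also track backend health):

```python
import itertools

class RoundRobinBalancer:
    """Hand requests to backends in rotation, spreading load evenly."""

    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)  # endless rotation

    def next_backend(self):
        return next(self._cycle)

lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print([lb.next_backend() for _ in range(5)])
# ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1', '10.0.0.2']
```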

● HTTP Load Balancing


○ HTTP Load balancing refers to efficiently distributing incoming HTTP traffic
across a group of backend servers, also known as a server farm or server pool. If
a single server goes down, the load balancer redirects traffic to the remaining
online servers. When a new server is added to the server group, the load
balancer automatically starts to send requests to it. Nginx, HAProxy are
examples.

● DNS Load Balancing


○ DNS load balancing is the practice of configuring a domain in the Domain Name
System (DNS) such that client requests to the domain are distributed across a
group of server machines. A domain can correspond to a website, a mail system,
a print server, or another service that is made accessible via the Internet.

● Database Load Balancing


○ A database Load Balancer is a middleware service that stands between
applications and databases. It distributes the workload across multiple database
servers running behind it. The goals of having database load balancing are to
provide a single database endpoint to applications to connect to, increase
queries throughput, minimize latency and maximize resource utilization of the
database servers. Read-only queries can be distributed across multiple database
servers to increase performance. ProxySQL is a MySQL load balancer.

Cache
● HTTP Caching
○ Browser Cache
○ Shared Proxy Cache
● Query Caching
● Full-Page Cache
● CDN caching

Task queues

Message Broker
● A message broker is software that enables applications, systems, and services to
communicate with each other and exchange information. The message broker does this
by translating messages between formal messaging protocols. This allows
interdependent services to “talk” with one another directly, even if they were written in
different languages or implemented on different platforms.
● Message brokers offer two basic message distribution patterns or messaging styles

○ Point-to-point messaging
■ one-to-one relationship between the message’s sender and receiver.
Each message in the queue is sent to only one recipient and is consumed
only once

○ Publish/subscribe messaging
■ The producer of each message publishes it to a topic, and multiple
message consumers subscribe to topics from which they want to receive
messages. All messages published to a topic are distributed to all the
applications subscribed to it. Kafka has a Pub-Sub model
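The publish/subscribe pattern can be sketched as a toy in-memory broker (real brokers like Kafka add persistence, partitioning, and delivery guarantees on top of this idea):

```python
from collections import defaultdict

class Broker:
    """In-memory pub/sub: every message on a topic goes to all subscribers."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Fan out: each subscriber of the topic receives its own copy.
        for callback in self._subscribers[topic]:
            callback(message)

broker = Broker()
inbox_a, inbox_b = [], []
broker.subscribe("orders", inbox_a.append)
broker.subscribe("orders", inbox_b.append)
broker.publish("orders", "order#1")
print(inbox_a, inbox_b)  # ['order#1'] ['order#1']
```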

Asynchronous requests
● Asynchrony, in computer programming, refers to the occurrence of events independent
of the main program flow and ways to deal with such events. These may be "outside"
events such as the arrival of signals, or actions instigated by a program that take place
concurrently with program execution, without the program blocking to wait for results.

Web sockets
● The WebSocket API is an advanced technology that makes it possible to open a
two-way interactive communication session between the user's browser and a server.
With this API, you can send messages to a server and receive event-driven responses
without having to poll the server for a reply.
WebRTC
● With WebRTC, you can add real-time communication capabilities to your application that
works on top of an open standard. It supports video, voice, and generic data to be sent
between peers, allowing developers to build powerful voice- and video-communication
solutions. The technology is available on all modern browsers as well as on native
clients for all major platforms. The technologies behind WebRTC are implemented as an
open web standard and available as regular JavaScript APIs in all major browsers.
Distributed Systems
CAP theorem

The CAP theorem says that a distributed system can deliver only two of three desired
characteristics: consistency, availability and partition tolerance (the ‘C,’ ‘A’ and ‘P’ in CAP).

https://www.ibm.com/topics/cap-theorem

Consistency in Distributed Systems


● Consistency is a property of the distributed system which ensures that every node or
replica has the same view of data at a given time, irrespective of which client has
updated the data. Note that the consistency here is different from consistency in ACID
property in database systems.

Shared-nothing architecture

Clock synchronization

Architectures of Distributed Systems (client-server, 3-tier, n-tier, peer-to-peer)

Leader Election
● Paxos class of Algorithms

High Availability
● High availability is the ability of an IT system to be accessible and reliable nearly 100%
of the time, eliminating or minimizing downtime. It combines two concepts to determine if
an IT system is meeting its operational performance level: that a given service or server
is accessible–or available–almost 100% of the time without downtime, and that the
service or server performs to reasonable expectations for an established time period.

Fault tolerance
● Fault tolerance is a process that enables an operating system to respond to a failure in
hardware or software. This fault-tolerance definition refers to the system's ability to
continue operating despite failures or malfunctions.

Single point of failure


● A single point of failure (SPOF) is a part of a system that, if it fails, will stop the entire
system from working. SPOFs are undesirable in any system with a goal of high
availability or reliability, be it a business practice, software application, or other industrial
system.
Data Engineering

Data Warehouse
● A data warehouse is a type of data management system that is designed to enable and
support business intelligence (BI) activities, especially analytics. Data warehouses are
solely intended to perform queries and analysis and often contain large amounts of
historical data. The data within a data warehouse is usually derived from a wide range of
sources such as application log files and transaction applications.
● A data warehouse centralizes and consolidates large amounts of data from multiple
sources. Over time, it builds a historical record that can be invaluable to data scientists
and business analysts. Because of these capabilities, a data warehouse can be
considered an organization’s “single source of truth.”
● Metadata repository

ETL/ELT
● Extract, transform, and load (ETL) is a data pipeline used to collect data from various
sources. It then transforms the data according to business rules, and it loads the data
into a destination data store. The transformation work in ETL takes place in a specialized
engine, and it often involves using staging tables to temporarily hold data as it is being
transformed and ultimately loaded to its destination.

Data streaming
● Streaming data is data that is generated continuously by thousands of data sources.
Streaming data includes a wide variety of data such as log files generated by customers
using web applications, ecommerce purchases, in-game player activity, financial trading,
or geospatial services

● Streaming data processing requires two layers: a storage layer and a processing
layer.
● The storage layer needs to support record ordering and strong consistency to enable
fast, inexpensive, and replayable reads and writes of large streams of data.
● The processing layer is responsible for consuming data from the storage layer, running
computations on that data, and then notifying the storage layer to delete data that is no
longer needed. Data streaming systems need to incorporate solutions to challenges like
scalability, data durability, and fault tolerance in both the storage and processing layers.

Stream processing
Column stores
Apache Parquet
Apache Arrow
Workflow Orchestration
Lambda Architecture
Kappa Architecture
Data Engineering Tools and Technologies

● Apache Airflow
● Apache Nifi
● AWS Glue
● Google Dataflow (AWS Glue vs Google Dataflow)
● Talend (Airflow vs Talend)
● Stitch

Streaming Technologies
● Debezium
● Apache Kafka
● Apache Spark (real time data analytics platform)
○ Pyspark (python library for apache spark)

Data warehouse solutions


● Amazon Redshift
● Google Bigquery
● Snowflake
● Oracle Data Warehousing
● Databricks
● Delta Lake
● Apache Hive

Transformation tools

● DBT
Test Automation

Unit Testing
● Stubs
● Mock
● Monkey Patching
● Parametrization

Integration Testing

API Testing

Performance Testing
● Jmeter

Interface Testing

Test Driven Development


Operating Systems
Inter Process Communication (IPC)
● File
● Shared Memory
● Message Queue
● Sockets
○ Network Sockets
○ File Sockets (Unix Domain Socket)
● Pipes

Processes and scheduling


● Scheduling Algorithms
○ First Come First Serve (FCFS)
○ Shortest Job First (SJF)
○ Round Robin
○ Priority based scheduling
○ Multilevel feedback queues

Thread Synchronization
● Race Condition
A race condition occurs when multiple threads try to access a shared resource at
the same time. A race condition is a bug where the outcome of concurrent
threads is dependent on the precise sequence of the execution of one relative to
the other. Thread (or process) synchronization deals with developing techniques
to avoid race conditions.

● Critical Section
Different threads or processes may access the same variable or other resources
that need to be read or written, where the result depends on the order in which
the actions occur. For example, if a variable x is to be read by process A while
process B writes to the same variable x at the same time, process A might get
either the old or the new value of x. The code that accesses the shared
resource is called the critical section.

● Mutual Exclusion
Mutual exclusion is a property of concurrency control, which is instituted for the
purpose of preventing race conditions. It is the requirement that one thread of
execution never enters a critical section while a concurrent thread of execution is
already accessing the critical section.

○ Locks
○ Mutex
○ Semaphores
○ Monitors
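A minimal Python sketch of protecting a critical section with a lock; the read-modify-write on the counter is the critical section, and without the lock concurrent increments could be lost:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:  # critical section: the read-modify-write is now atomic
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With the lock the result is deterministic; without it,
# lost updates could make the total smaller.
print(counter)  # 400000
```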
Deadlocks
● Deadlock avoidance

Memory Management
● Virtual Memory, Paging, Translation Lookaside Buffer (TLB)

Secondary Storage
● Inodes
DevOps

Continuous Integration (CI)

Continuous integration (CI) is the practice of automating the integration of code changes from
multiple contributors into a single software project. It’s a primary DevOps best practice, allowing
developers to frequently merge code changes into a central repository where builds and tests
then run. Automated tools are used to assert the new code’s correctness before integration.

A source code version control system is the crux of the CI process. The version control system
is also supplemented with other checks like automated code quality tests, syntax style review
tools, and more.

Continuous Delivery (CD)


Continuous Deployment (CD)
Continuous Monitoring
Infrastructure Monitoring
Application Monitoring
Container Orchestration
Infrastructure as Code (IaC)
Monitoring and Logging

Application Architecture

Bastion Host

Software Development Life Cycle


● System Analysis
● System Design (System Architecture, Database designing, UML diagrams etc)
● Development / Implementation
● Integration and Testing
● Acceptance, Installation and Deployment
● Maintenance
● Disposal

Waterfall Methodology

Agile Development
● Sprint
● Scrum and Daily Standups
● Kanban

Spiral Model
Design Patterns and Principles

Fluent interface

Creational
● Singleton
● Factory
● Object Pool
● Lazy Initialization

Behavioral
● Iterator
● Observer (Publish/Subscribe)
● State
● Template Method

Structural
● Composite
● Decorator
● Module
● Proxy

Concurrency
● Active Object
● Message Design Pattern
● Monitor
● Thread Pool

Design Principles
● SOLID
○ Single Responsibility Principle
○ Open-closed Principle
○ Liskov Substitution Principle
○ Interface Segregation Principle
○ Dependency Inversion Principle
● GRASP
○ Information Expert
○ Creator
○ Indirection
○ Low Coupling
○ High Cohesion
○ Polymorphism
○ Protected Variations
○ Pure fabrication
● KISS
○ Keep It Simple Stupid
● DRY
○ Don’t Repeat Yourself

Architectural Patterns
● Layered
● Client-server
● Peer-to-Peer
● Master-Slave
● Microservice
● MVC

Fluent Interface
● In software engineering, a fluent interface is an object-oriented API whose design relies
extensively on method chaining. Its goal is to increase code legibility by creating a
domain-specific language (DSL). The term was coined in 2005 by Eric Evans and Martin
Fowler.
● A fluent interface is normally implemented by using method chaining to implement
method cascading (in languages that do not natively support cascading), concretely by
having each method return the object to which it is attached, often referred to as this or
self.
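A fluent interface can be sketched in a few lines of Python; the builder class and its methods are invented for illustration, and the key move is that each method returns self:

```python
class QueryBuilder:
    """Fluent interface: each method returns self, so calls chain
    into a small domain-specific language."""

    def __init__(self, table):
        self._table = table
        self._wheres = []
        self._limit = None

    def where(self, condition):
        self._wheres.append(condition)
        return self  # returning self enables chaining

    def limit(self, n):
        self._limit = n
        return self

    def build(self):
        sql = f"SELECT * FROM {self._table}"
        if self._wheres:
            sql += " WHERE " + " AND ".join(self._wheres)
        if self._limit is not None:
            sql += f" LIMIT {self._limit}"
        return sql

print(QueryBuilder("users").where("age > 21").where("active = 1").limit(10).build())
# SELECT * FROM users WHERE age > 21 AND active = 1 LIMIT 10
```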
Cloud Computing

Google
● Compute Engine (IaaS)
● Cloud SQL (managed database service)
● BigQuery (data warehouse)
● Cloud Storage (IaaS)
● Cloud functions
● App Engine
● Bigtable
● IAM
● Cloud Shell

AWS
● EC2 (Elastic Compute Cloud)
● RDS
● AWS Load balancer
● Cloudfront
● Cloudwatch
● Redshift (data warehouse)
● AWS Lambda
● AWS S3 (IaaS)
● DynamoDB
● Security Groups
● IAM

Microsoft

Vendor Lock-in
Sticky Sessions
Links

https://aosabook.org/en/

The Architecture of Open Source Applications (Volume 2)


Scalable Web Architecture and Distributed Systems
https://aosabook.org/en/v2/distsys.html

Continuous Integration
https://aosabook.org/en/v1/integration.html

Architecture of nginx
https://aosabook.org/en/v2/nginx.html

Partitioning vs. Federation vs. Sharding

4 Vs of Big Data
Random notes

You’re a great engineer if you know the definition of:

- idempotent
- monoid
- decoupled
- dependency injection
- unit
- functional programming
- asynchronous vs parallel programming
- thread locking
- eventual consistency
- exactly-once semantics
- lambda vs kappa architecture
- push vs pull architectures
- write-audit-publish pattern

Other Concepts
Serialization/Marshaling, Encoding, Encryption, Hashing,

Links

CAP theorem
PACELC theorem

Cryptographically secure random number

https://docs.python.org/3/library/secrets.html

https://shopify.engineering/read-consistency-database-replicas

https://towardsdatascience.com/how-to-make-your-pandas-operation-100x-faster-81ebcd09265c

https://cloud.google.com/blog/products/databases/why-you-should-pick-strong-consistency-whenever-possible
https://www.percona.com/blog/2007/12/19/mvcc-transaction-ids-log-sequence-numbers-and-snapshots/

Migrating Facebook to MySQL 8.0


https://engineering.fb.com/2021/07/22/data-infrastructure/mysql/

System Design Interviews


https://www.educative.io/courses/grokking-the-system-design-interview/B8nMkqBWONo

Block (storage) vs Object Storage


https://www.ibm.com/cloud/learn/object-storage

Excerpt from Stack Overflow


One of the most beautiful descriptions of polymorphism

First of all, what is polymorphism? In the context of type systems, polymorphism allows a single
term to have several types. The problem here is that the word type itself is heavily overloaded in
the computer science and programming language community. So to minimize the confusion,
let's just reintroduce it here, to be on the same page. A type of a term usually denotes some
approximation of the term's semantics, where semantics could be as simple as a set of values
equipped with a set of operations, or something more complex, like effects, annotations, and
arbitrary theories. In general, semantics denotes a set of all possible behaviors of a term. A type
system denotes a set of rules that allows some language constructs and disallows others
based on their types, i.e., it verifies that compositions of terms behave correctly. For example, if
there is a function application construct in a language, the type system will allow an application
only to those arguments whose types match the types of the parameters. And that's
where polymorphism comes into play. In monomorphic type systems, this match can only be
one to one, i.e., literal. Polymorphic type systems provide mechanisms to specify something like
a regular expression that will match a family of types. So, different kinds of polymorphism are
simply different kinds of regular expressions that you may use to denote the family of types.
ChatGPT answer

In Apache Kafka, the decision on which broker (node) a message should be sent to is primarily
made by the Kafka client. Here’s how it works:

1. Topic Partitioning: When a producer sends a message, the client first determines the
partition within the topic where the message should be placed. This decision is made
based on the partitioning strategy:
○ Default Partitioning: If a key is provided, Kafka uses a hash of the key to
determine the partition. If no key is provided, it can use a round-robin approach
across partitions.
○ Custom Partitioning: Custom partitioners can be implemented if you need more
control.
2. Broker Assignment: Once the partition is determined, the client looks up the partition’s
leader. The leader for each partition is determined by the Kafka cluster’s controller and
stored in ZooKeeper (or in newer versions, in the Kafka metadata itself). The leader
broker is responsible for receiving all the writes for that partition.
3. Client Communication: The Kafka client then directly sends the message to the leader
broker for the determined partition.

So, while the Kafka cluster manages metadata and broker leadership, the Kafka client is
responsible for choosing the partition (and hence the broker) when sending data.
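The keyed-partitioning step can be sketched in Python. Note the hash function here is an illustrative stand-in: Kafka's default partitioner actually uses murmur2, but the property being shown, that the same key always maps to the same partition, is the same.

```python
import zlib

def choose_partition(key, num_partitions):
    """Map a message key to a partition via a stable hash.

    crc32 stands in for Kafka's murmur2; the point is determinism:
    all messages with the same key land on the same partition.
    """
    if key is None:
        raise ValueError("keyless messages are spread round-robin instead")
    return zlib.crc32(key.encode()) % num_partitions

# The same key always maps to the same partition.
p1 = choose_partition("user-42", 6)
p2 = choose_partition("user-42", 6)
print(p1 == p2, 0 <= p1 < 6)  # True True
```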

My recommended materials for cracking your next technical interview

Coding
- Leetcode
- Cracking the coding interview book
- Neetcode

System Design Interview


- System Design Interview Book 1, 2 by Alex Xu, Sahn Lam
- Grokking the system design by Design Guru
- Design Data-intensive Application book

Behavioral interview
- Tech Interview Handbook (Github repo)
- A Life Engineered (YT)
- STAR method (general method)

OOD Interview
- Interviewready
- OOD by educative
- Head First Design Patterns Book

Mock interviews
- interviewing.io
- Pramp
- Meetapro

Apply for Jobs


- Linkedin
- Monster
- Indeed

Over to you: What is your favorite interview prep material?

How Uber calculates ETA


https://newsletter.systemdesign.one/p/uber-eta
