The document is a team notebook for PTIT.Nutriboost, dated November 15, 2024, containing a comprehensive table of contents that outlines various topics in algorithms, data structures, dynamic programming, graphs, geometry, linear algebra, maths, and number theory. Each section includes specific algorithms and methods, such as Mo's Algorithm, Dijkstra, and the Chinese Remainder Theorem, among others. The document serves as a reference guide for advanced computational techniques and mathematical concepts.

Team notebook

PTIT.Nutriboost

November 15, 2024

Contents

1 Algorithms
  1.1 Mo's Algorithm
  1.2 Mo's Algorithm on Trees
  1.3 Mo's With Update
  1.4 Parallel Binary Search

2 Data Structures
  2.1 Binary Index Tree
  2.2 DSU Roll Back
  2.3 Disjoint Set Union (DSU)
  2.4 Fake Update
  2.5 Fenwick Tree 2D
  2.6 Fenwick Tree
  2.7 HLD with Euler Tour
  2.8 Hash Table
  2.9 Li Chao Tree
  2.10 Line Container
  2.11 Link Cut Tree
  2.12 Mo Queries
  2.13 Persistent DSU
  2.14 Range Minimum Query
  2.15 SQRT Tree
  2.16 STL Treap
  2.17 Segment Tree
  2.18 Sparse Table
  2.19 Trie
  2.20 Wavelet Tree

3 Dynamic Programming Optimization
  3.1 Convex Hull Trick
  3.2 Divide and Conquer

4 Geometry
  4.1 Closest Pair Problem
  4.2 Convex Diameter
  4.3 Pick Theorem
  4.4 Polygon Area
  4.5 Square
  4.6 Triangle

5 Graphs
  5.1 Bridges
  5.2 Dijkstra
  5.3 Directed MST
  5.4 Edge Coloring
  5.5 Eulerian Path
  5.6 Floyd - Warshall
  5.7 Ford - Bellman
  5.8 Gomory Hu
  5.9 Karp Min Mean Cycle
  5.10 Konig's Theorem
  5.11 LCA Euler Tour
  5.12 LCA
  5.13 Manhattan MST
  5.14 Math
  5.15 Minimum Path Cover in DAG
  5.16 Planar Graph (Euler)
  5.17 Push Relabel
  5.18 SCC Kosaraju
  5.19 Tarjan SCC
  5.20 Topological Sort
  5.21 Virtual Tree

6 Linear Algebra
  6.1 Matrix Determinant
  6.2 Matrix Inverse
  6.3 PolyRoots
  6.4 Polynomial

7 Maths
  7.1 Factorial Approximate
  7.2 Factorial
  7.3 Fast Fourier Transform
  7.4 General purpose numbers
  7.5 Lucas Theorem
  7.6 Multinomial
  7.7 Number Theoretic Transform
  7.8 Others
  7.9 Permutation To Int
  7.10 Sigma Function

8 Misc
  8.1 Dates
  8.2 Debugging Tricks
  8.3 Interval Container
  8.4 Optimization Tricks
    8.4.1 Bit hacks
    8.4.2 Pragmas
  8.5 Ternary Search

9 Number Theory
  9.1 Chinese Remainder Theorem
  9.2 Convolution
  9.3 Diophantine Equations
  9.4 Discrete Logarithm
  9.5 Ext Euclidean
  9.6 Fast Eratosthenes

  9.7 Highest Exponent Factorial
  9.8 Miller - Rabin
  9.9 Mod Integer
  9.10 Mod Inv
  9.11 Mod Mul
  9.12 Mod Pow
  9.13 Number Theoretic Transform
  9.14 Pollard Rho Factorize
  9.15 Primes
  9.16 Totient Sieve
  9.17 Totient

10 Probability and Statistics
  10.1 Continuous Distributions
    10.1.1 Uniform distribution
    10.1.2 Exponential distribution
    10.1.3 Normal distribution
  10.2 Discrete Distributions
    10.2.1 Binomial distribution
    10.2.2 First success distribution
    10.2.3 Poisson distribution
  10.3 Probability Theory

11 Strings
  11.1 Hashing
  11.2 Incremental Aho Corasick
  11.3 KMP
  11.4 Minimal String Rotation
  11.5 Suffix Array
  11.6 Suffix Automaton
  11.7 Suffix Tree
  11.8 Z Algorithm

1 Algorithms

1.1 Mo's Algorithm

/*
https://www.spoj.com/problems/FREQ2/
*/
vector <int> MoQueries(int n, vector <query> Q){
    block_size = sqrt(n);
    sort(Q.begin(), Q.end(), [](const query &A, const query &B){
        return (A.l/block_size != B.l/block_size)?
               (A.l/block_size < B.l/block_size) : (A.r < B.r);
    });
    vector <int> res;
    res.resize((int)Q.size());

    int L = 1, R = 0;
    for(query q: Q){
        while (L > q.l) add(--L);
        while (R < q.r) add(++R);

        while (L < q.l) del(L++);
        while (R > q.r) del(R--);

        res[q.pos] = calc(1, R-L+1);
    }
    return res;
}

1.2 Mo's Algorithm on Trees

/*
Given a tree with N nodes and Q queries. Each node has an integer weight.
Each query provides two numbers u and v, and asks how many different node
weights there are on the path from u to v.

----------
Modify DFS:
----------
For each node u, maintain the start and the end DFS time. Let's call them
ST(u) and EN(u).
=> For each query, a node is considered if its occurrence count is one.

--------------
Query solving:
--------------
Let the query be (u, v). Assume that ST(u) <= ST(v).
Denote P = LCA(u, v).

Case 1: P = u
Our query would be in range [ST(u), ST(v)].

Case 2: P != u
Our query would be in range [EN(u), ST(v)] + [ST(P), ST(P)]
*/

void update(int &L, int &R, int qL, int qR){
    while (L > qL) add(--L);
    while (R < qR) add(++R);
    while (L < qL) del(L++);
    while (R > qR) del(R--);
}

vector <int> MoQueries(int n, vector <query> Q){
    block_size = sqrt((int)nodes.size());
    sort(Q.begin(), Q.end(), [](const query &A, const query &B){
        return (ST[A.l]/block_size != ST[B.l]/block_size)?
               (ST[A.l]/block_size < ST[B.l]/block_size) : (ST[A.r] < ST[B.r]);
    });
    vector <int> res;
    res.resize((int)Q.size());

    LCA lca;
    lca.initialize(n);

    int L = 1, R = 0;
    for(query q: Q){
        int u = q.l, v = q.r;
        if(ST[u] > ST[v]) swap(u, v); // assume that ST[u] <= ST[v]
        int parent = lca.get(u, v);

        if(parent == u){
            int qL = ST[u], qR = ST[v];
            update(L, R, qL, qR);
        }else{
            int qL = EN[u], qR = ST[v];
            update(L, R, qL, qR);
            if(cnt_val[a[parent]] == 0)
                res[q.pos] += 1;
        }

        res[q.pos] += cur_ans;
    }
    return res;
}
1.3 Mo's With Update

// Tested:
// - https://www.spoj.com/problems/ADAUNIQ/
//
// Notes:
// - Updates must be set: A(u) = val
// - When implementing Update(id, new_value, cur_l, cur_r) -> void:
//   [cur_l, cur_r] = current segment
//   we need to handle the case where we update an index that is inside
//   [cur_l, cur_r]
//
// Mo algorithm with updates {{{
enum QueryType { GET = 0, UPDATE = 1 };

struct Query {
    int l, r;               // For get
    int u, val, old_val;    // For update
    int id;
    QueryType typ;
};

template<typename Add, typename Rem, typename Update, typename Get>
void mo_with_updates(
        int n, const vector<Query>& queries,
        Add add, Rem rem, Update update, Get get) {
    // Separate update and get queries
    vector<Query> updates, gets;
    for (const auto& query : queries) {
        if (query.typ == QueryType::UPDATE) updates.push_back(query);
        else gets.push_back(query);
    }

    // Sort queries
    int S = std::max<int>(1, cbrtl(n + 0.5));
    S = S * S;

    sort(gets.begin(), gets.end(), [&] (const Query& q1, const Query& q2) {
        int l1 = q1.l / S;
        int l2 = q2.l / S;
        if (l1 != l2) return l1 < l2;

        int r1 = q1.r / S;
        int r2 = q2.r / S;
        if (r1 != r2) return (l1 % 2 == 0) ? r1 < r2 : r1 > r2;

        return (r1 % 2 == 0) ? q1.id < q2.id : q1.id > q2.id;
    });

    // Process queries
    int cur_l = -1, cur_r = -1, cur_update = -1;
    for (const auto& query : gets) {
        // move to [l, r]
        if (cur_l < 0) {
            for (int i = query.l; i <= query.r; ++i) add(i);
            cur_l = query.l;
            cur_r = query.r;
        } else {
            while (cur_l > query.l) add(--cur_l);
            while (cur_r < query.r) add(++cur_r);
            while (cur_r > query.r) rem(cur_r--);
            while (cur_l < query.l) rem(cur_l++);
        }

        // process updates
        // should we update more?
        while (cur_update + 1 < (int) updates.size()
                && updates[cur_update + 1].id < query.id) {
            ++cur_update;
            update(updates[cur_update].u, updates[cur_update].val, cur_l, cur_r);
        }
        // should we update less?
        while (cur_update >= 0 && updates[cur_update].id > query.id) {
            update(updates[cur_update].u, updates[cur_update].old_val, cur_l, cur_r);
            --cur_update;
        }

        get(query);
    }
}
// }}}

1.4 Parallel Binary Search

int lo[N], mid[N], hi[N];
vector<int> vec[N];

void clear() //Reset
{
    memset(bit, 0, sizeof(bit));
}

void apply(int idx) //Apply ith update/query
{
    if(ql[idx] <= qr[idx])
        update(ql[idx], qa[idx]),
        update(qr[idx]+1, -qa[idx]);
    else
    {
        update(1, qa[idx]);
        update(qr[idx]+1, -qa[idx]);
        update(ql[idx], qa[idx]);
    }
}

bool check(int idx) //Check if the condition is satisfied
{
    int req=reqd[idx];
    for(auto &it:owns[idx])
    {
        req-=pref(it);
        if(req<0)
            break;
    }
    if(req<=0)
        return 1;
    return 0;
}

void work()
{
    for(int i=1;i<=q;i++)
        vec[i].clear();
    for(int i=1;i<=n;i++)
        if(mid[i]>0)
            vec[mid[i]].push_back(i);
    clear();
    for(int i=1;i<=q;i++)
    {
        apply(i);
        for(auto &it:vec[i]) //Add appropriate check conditions
        {
            if(check(it))
                hi[it]=i;
            else
                lo[it]=i+1;
        }
    }
}
void parallel_binary()
{
    for(int i=1;i<=n;i++)
        lo[i]=1, hi[i]=q+1;
    bool changed = 1;
    while(changed)
    {
        changed=0;
        for(int i=1;i<=n;i++)
        {
            if(lo[i]<hi[i])
            {
                changed=1;
                mid[i]=(lo[i] + hi[i])/2;
            }
            else
                mid[i]=-1;
        }
        work();
    }
}

2 Data Structures

2.1 Binary Index Tree

struct BIT {
    int n;
    int t[2 * N];

    void add(int where, long long what) {
        for (where++; where <= n; where += where & -where) {
            t[where] += what;
        }
    }

    void add(int from, int to, long long what) {
        add(from, what);
        add(to + 1, -what);
    }

    long long query(int where) {
        long long sum = t[0];
        for (where++; where > 0; where -= where & -where) {
            sum += t[where];
        }
        return sum;
    }
};

2.2 DSU Roll Back

// Tested:
// - (dynamic connectivity) https://codeforces.com/gym/100551/problem/A
// - (used for directed MST) https://judge.yosupo.jp/problem/directedmst
//
// 0-based
// DSU with rollback {{{
struct Data {
    int time, u, par; // before `time`, `par` = par[u]
};

struct DSU {
    vector<int> par;
    vector<Data> change;

    DSU(int n) : par(n + 5, -1) {}

    // find root of x.
    // if par[x] < 0 then x is a root, and its tree has -par[x] nodes
    int getRoot(int x) {
        while (par[x] >= 0)
            x = par[x];
        return x;
    }

    bool same_component(int u, int v) {
        return getRoot(u) == getRoot(v);
    }

    // join components containing x and y.
    // t should be current time. We use it to update `change`.
    bool join(int x, int y, int t) {
        x = getRoot(x);
        y = getRoot(y);
        if (x == y) return false;

        //union by rank
        if (par[x] < par[y]) swap(x, y);
        //now x's tree has less nodes than y's tree
        change.push_back({t, y, par[y]});
        par[y] += par[x];
        change.push_back({t, x, par[x]});
        par[x] = y;
        return true;
    }

    // rollback all changes at time > t.
    void rollback(int t) {
        while (!change.empty() && change.back().time > t) {
            par[change.back().u] = change.back().par;
            change.pop_back();
        }
    }
};
// }}}

2.3 Disjoint Set Union (DSU)

class DSU{
public:
    vector <int> parent;
    void initialize(int n){
        parent.resize(n+1, -1);
    }

    int findSet(int u){
        while(parent[u] > 0)
            u = parent[u];
        return u;
    }

    void Union(int u, int v){
        int x = parent[u] + parent[v];
        if(parent[u] > parent[v]){
            parent[v] = x;
            parent[u] = v;
        }else{
            parent[u] = x;
            parent[v] = u;
        }
    }
};
2.4 Fake Update

vector <int> fake_bit[MAXN];

void fake_update(int x, int y, int limit_x){
    for(int i = x; i < limit_x; i += i&(-i))
        fake_bit[i].pb(y);
}

void fake_get(int x, int y){
    for(int i = x; i >= 1; i -= i&(-i))
        fake_bit[i].pb(y);
}

vector <int> bit[MAXN];

void update(int x, int y, int limit_x, int val){
    for(int i = x; i < limit_x; i += i&(-i)){
        for(int j = lower_bound(fake_bit[i].begin(),
                fake_bit[i].end(), y) - fake_bit[i].begin();
                j < fake_bit[i].size(); j += j&(-j))
            bit[i][j] = max(bit[i][j], val);
    }
}

int get(int x, int y){
    int ans = 0;
    for(int i = x; i >= 1; i -= i&(-i)){
        for(int j = lower_bound(fake_bit[i].begin(),
                fake_bit[i].end(), y) - fake_bit[i].begin();
                j >= 1; j -= j&(-j))
            ans = max(ans, bit[i][j]);
    }
    return ans;
}

int main(){
    _io
    int n; cin >> n;
    vector <int> Sx, Sy;
    for(int i = 1; i <= n; i++){
        cin >> a[i].fi >> a[i].se;
        Sx.pb(a[i].fi);
        Sy.pb(a[i].se);
    }
    unique_arr(Sx);
    unique_arr(Sy);
    // unique all value
    for(int i = 1; i <= n; i++){
        a[i].fi = lower_bound(Sx.begin(), Sx.end(), a[i].fi) - Sx.begin();
        a[i].se = lower_bound(Sy.begin(), Sy.end(), a[i].se) - Sy.begin();
    }

    // do fake BIT update and get operator
    for(int i = 1; i <= n; i++){
        fake_get(a[i].fi-1, a[i].se-1);
        fake_update(a[i].fi, a[i].se, (int)Sx.size());
    }

    for(int i = 0; i < Sx.size(); i++){
        fake_bit[i].pb(INT_MIN); // avoid zero
        sort(fake_bit[i].begin(), fake_bit[i].end());
        fake_bit[i].resize(unique(fake_bit[i].begin(),
            fake_bit[i].end()) - fake_bit[i].begin());
        bit[i].resize((int)fake_bit[i].size(), 0);
    }

    // real update, get operator
    int res = 0;
    for(int i = 1; i <= n; i++){
        int maxCurLen = get(a[i].fi-1, a[i].se-1) + 1;
        res = max(res, maxCurLen);
        update(a[i].fi, a[i].se, (int)Sx.size(), maxCurLen);
    }
}

2.5 Fenwick Tree 2D

#include "FenwickTree.cpp"

struct FT2 {
    vector<vi> ys; vector<FT> ft;
    FT2(int limx) : ys(limx) {}
    void fakeUpdate(int x, int y) {
        for (; x < sz(ys); x |= x + 1)
            ys[x].push_back(y);
    }
    void init() {
        for (vi& v : ys) sort(all(v)), ft.emplace_back(sz(v));
    }
    int ind(int x, int y) {
        return (int)(lower_bound(all(ys[x]), y) - ys[x].begin()); }
    void update(int x, int y, ll dif) {
        for (; x < sz(ys); x |= x + 1)
            ft[x].update(ind(x, y), dif);
    }
    ll query(int x, int y) {
        ll sum = 0;
        for (; x; x &= x - 1)
            sum += ft[x-1].query(ind(x-1, y));
        return sum;
    }
};

2.6 Fenwick Tree

template <typename T>
class FenwickTree{
    vector <T> fenw;
    int n;
public:
    void initialize(int _n){
        this->n = _n;
        fenw.resize(n+1);
    }

    void update(int id, T val) {
        while (id <= n) {
            fenw[id] += val;
            id += id&(-id);
        }
    }

    T get(int id){
        T ans{};
        while(id >= 1){
            ans += fenw[id];
            id -= id&(-id);
        }
        return ans;
    }
};
2.7 HLD with Euler Tour

/*
HLD + Euler Tour combine:
1. Update or Query subtree of u: [st(u), en(u)]
2. Update or Query path of (u, v)
*/
const int N = 1e5 + 9, LG = 18, inf = 1e9 + 9;

struct ST {
#define lc (n << 1)
#define rc ((n << 1) | 1)
    int t[4 * N], lazy[4 * N];
    ST() {
        fill(t, t + 4 * N, -inf);
        fill(lazy, lazy + 4 * N, 0);
    }
    inline void push(int n, int b, int e) {
        if(lazy[n] == 0) return;
        t[n] = t[n] + lazy[n];
        if(b != e) {
            lazy[lc] = lazy[lc] + lazy[n];
            lazy[rc] = lazy[rc] + lazy[n];
        }
        lazy[n] = 0;
    }
    inline int combine(int a, int b) {
        return max(a, b); //merge left and right queries
    }
    inline void pull(int n) {
        t[n] = max(t[lc], t[rc]); //merge lower nodes of the tree to get the parent node
    }
    void build(int n, int b, int e) {
        if(b == e) {
            t[n] = 0;
            return;
        }
        int mid = (b + e) >> 1;
        build(lc, b, mid);
        build(rc, mid + 1, e);
        pull(n);
    }
    void upd(int n, int b, int e, int i, int j, int v) {
        push(n, b, e);
        if(j < b || e < i) return;
        if(i <= b && e <= j) {
            lazy[n] += v;
            push(n, b, e);
            return;
        }
        int mid = (b + e) >> 1;
        upd(lc, b, mid, i, j, v);
        upd(rc, mid + 1, e, i, j, v);
        pull(n);
    }
    int query(int n, int b, int e, int i, int j) {
        push(n, b, e);
        if(i > e || b > j) return -inf;
        if(i <= b && e <= j) return t[n];
        int mid = (b + e) >> 1;
        return combine(query(lc, b, mid, i, j), query(rc, mid + 1, e, i, j));
    }
} t;

vector<int> g[N];
int par[N][LG + 1], dep[N], sz[N];
void dfs(int u, int p = 0) {
    par[u][0] = p;
    dep[u] = dep[p] + 1;
    sz[u] = 1;
    for (int i = 1; i <= LG; i++) par[u][i] = par[par[u][i - 1]][i - 1];
    if (p) g[u].erase(find(g[u].begin(), g[u].end(), p));
    for (auto &v : g[u]) if (v != p) {
        dfs(v, u);
        sz[u] += sz[v];
        if(sz[v] > sz[g[u][0]]) swap(v, g[u][0]);
    }
}
int lca(int u, int v) {
    if (dep[u] < dep[v]) swap(u, v);
    for (int k = LG; k >= 0; k--) if (dep[par[u][k]] >= dep[v]) u = par[u][k];
    if (u == v) return u;
    for (int k = LG; k >= 0; k--) if (par[u][k] != par[v][k]) u = par[u][k], v = par[v][k];
    return par[u][0];
}
int kth(int u, int k) {
    assert(k >= 0);
    for (int i = 0; i <= LG; i++) if (k & (1 << i)) u = par[u][i];
    return u;
}
int T, head[N], st[N], en[N];
void dfs_hld(int u) {
    st[u] = ++T;
    for (auto v : g[u]) {
        head[v] = (v == g[u][0] ? head[u] : v);
        dfs_hld(v);
    }
    en[u] = T;
}

int n;

int query_path(int u, int v) {
    int ans = -inf;
    while(head[u] != head[v]) {
        if (dep[head[u]] < dep[head[v]]) swap(u, v);
        ans = max(ans, t.query(1, 1, n, st[head[u]], st[u]));
        u = par[head[u]][0];
    }
    if (dep[u] > dep[v]) swap(u, v);
    ans = max(ans, t.query(1, 1, n, st[u], st[v]));
    return ans;
}

void update_path(int u, int v, int val) {
    while(head[u] != head[v]) {
        if (dep[head[u]] < dep[head[v]]) swap(u, v);
        t.upd(1, 1, n, st[head[u]], st[u], val);
        u = par[head[u]][0];
    }
    if (dep[u] > dep[v]) swap(u, v);
    t.upd(1, 1, n, st[u], st[v], val);
}
//https://www.hackerrank.com/challenges/subtrees-and-paths/problem

2.8 Hash Table

/*
 * Micro hash table, can be used as a set.
 * Very efficient vs std::set
 */

const int MN = 1001;
struct ht {
    int _s[(MN + 10) >> 5];
    int len;
    void set(int id) {
        len++;
        _s[id >> 5] |= (1LL << (id & 31));
    }
    bool is_set(int id) {
        return _s[id >> 5] & (1LL << (id & 31));
    }
};
2.9 Li Chao Tree

// LiChao SegTree
// Copied from https://judge.yosupo.jp/submission/60250
//
// Tested:
// - https://judge.yosupo.jp/problem/segment_add_get_min
// - https://judge.yosupo.jp/problem/line_add_get_min
// - (convex hull trick) https://oj.vnoi.info/problem/vmpizza
// - https://oj.vnoi.info/problem/vomario
using ll = long long;
const ll inf = 2e18;

struct Line {
    ll m, c;
    ll eval(ll x) {
        return m * x + c;
    }
};
struct node {
    Line line;
    node* left = nullptr;
    node* right = nullptr;
    node(Line line) : line(line) {}
    void add_segment(Line nw, int l, int r, int L, int R) {
        if (l > r || r < L || l > R) return;
        int m = (l + 1 == r ? l : (l + r) / 2);
        if (l >= L and r <= R) {
            bool lef = nw.eval(l) < line.eval(l);
            bool mid = nw.eval(m) < line.eval(m);
            if (mid) swap(line, nw);
            if (l == r) return;
            if (lef != mid) {
                if (left == nullptr) left = new node(nw);
                else left -> add_segment(nw, l, m, L, R);
            }
            else {
                if (right == nullptr) right = new node(nw);
                else right -> add_segment(nw, m + 1, r, L, R);
            }
            return;
        }
        if (max(l, L) <= min(m, R)) {
            if (left == nullptr) left = new node({0, inf});
            left -> add_segment(nw, l, m, L, R);
        }
        if (max(m + 1, L) <= min(r, R)) {
            if (right == nullptr) right = new node ({0, inf});
            right -> add_segment(nw, m + 1, r, L, R);
        }
    }
    ll query_segment(ll x, int l, int r, int L, int R) {
        if (l > r || r < L || l > R) return inf;
        int m = (l + 1 == r ? l : (l + r) / 2);
        if (l >= L and r <= R) {
            ll ans = line.eval(x);
            if (l < r) {
                if (x <= m && left != nullptr)
                    ans = min(ans, left -> query_segment(x, l, m, L, R));
                if (x > m && right != nullptr)
                    ans = min(ans, right -> query_segment(x, m + 1, r, L, R));
            }
            return ans;
        }
        ll ans = inf;
        if (max(l, L) <= min(m, R)) {
            if (left == nullptr) left = new node({0, inf});
            ans = min(ans, left -> query_segment(x, l, m, L, R));
        }
        if (max(m + 1, L) <= min(r, R)) {
            if (right == nullptr) right = new node ({0, inf});
            ans = min(ans, right -> query_segment(x, m + 1, r, L, R));
        }
        return ans;
    }
};

struct LiChaoTree {
    int L, R;
    node* root;
    LiChaoTree() : L(numeric_limits<int>::min() / 2),
                   R(numeric_limits<int>::max() / 2), root(nullptr) {}
    LiChaoTree(int L, int R) : L(L), R(R) {
        root = new node({0, inf});
    }
    void add_line(Line line) {
        root -> add_segment(line, L, R, L, R);
    }
    // y = mx + b: x in [l, r]
    void add_segment(Line line, int l, int r) {
        root -> add_segment(line, L, R, l, r);
    }
    ll query(ll x) {
        return root -> query_segment(x, L, R, L, R);
    }
    ll query_segment(ll x, int l, int r) {
        return root -> query_segment(x, l, r, L, R);
    }
};
// https://judge.yosupo.jp/problem/segment_add_get_min

2.10 Line Container

struct Line {
    mutable ll a, b, p;
    bool operator<(const Line& o) const { return a < o.a; }
    bool operator<(ll x) const { return p < x; }
};

struct DynamicHull : multiset<Line, less<>> {
    // Maintain to get maximum
    // (for doubles, use inf = 1/.0, div(a,b) = a/b)
    static const ll inf = LLONG_MAX;
    ll div(ll a, ll b) { // floored division
        return a / b - ((a ^ b) < 0 && a % b); }
    bool isect(iterator x, iterator y) {
        if (y == end()) return x->p = inf, 0;
        if (x->a == y->a) x->p = x->b > y->b ? inf : -inf;
        else x->p = div(y->b - x->b, x->a - y->a);
        return x->p >= y->p;
    }
    void add(ll a, ll b) {
        auto z = insert({a, b, 0}), y = z++, x = y;
        while (isect(y, z)) z = erase(z);
        if (x != begin() && isect(--x, y))
            isect(x, y = erase(y));
        while ((y = x) != begin() && (--x)->p >= y->p)
            isect(x, erase(y));
    }
    ll qry(ll x) {
        assert(!empty());
        auto l = *lower_bound(x);
        return l.a * x + l.b;
    }
};
2.11 Link Cut Tree

/**
 * Author: Simon Lindholm
 * Date: 2016-07-25
 * Source: https://github.com/ngthanhtrung23/ACM_Notebook_new/blob/ma
 * Description: Represents a forest of unrooted trees. You can add and remove
 * edges (as long as the result is still a forest), and check whether
 * two nodes are in the same tree.
 * Time: All operations take amortized O(\log N).
 * Status: Stress-tested a bit for N <= 20
 */
#pragma once

struct Node { // Splay tree. Root's pp contains tree's parent.
    Node *p = 0, *pp = 0, *c[2];
    bool flip = 0;
    Node() { c[0] = c[1] = 0; fix(); }
    void fix() {
        if (c[0]) c[0]->p = this;
        if (c[1]) c[1]->p = this;
        // (+ update sum of subtree elements etc. if wanted)
    }
    void pushFlip() {
        if (!flip) return;
        flip = 0; swap(c[0], c[1]);
        if (c[0]) c[0]->flip ^= 1;
        if (c[1]) c[1]->flip ^= 1;
    }
    int up() { return p ? p->c[1] == this : -1; }
    void rot(int i, int b) {
        int h = i ^ b;
        Node *x = c[i], *y = b == 2 ? x : x->c[h], *z = b ? y : x;
        if ((y->p = p)) p->c[up()] = y;
        c[i] = z->c[i ^ 1];
        if (b < 2) {
            x->c[h] = y->c[h ^ 1];
            y->c[h ^ 1] = x;
        }
        z->c[i ^ 1] = this;
        fix(); x->fix(); y->fix();
        if (p) p->fix();
        swap(pp, y->pp);
    }
    void splay() { /// Splay this up to the root. Always finishes without flip set.
        for (pushFlip(); p; ) {
            if (p->p) p->p->pushFlip();
            p->pushFlip(); pushFlip();
            int c1 = up(), c2 = p->up();
            if (c2 == -1) p->rot(c1, 2);
            else p->p->rot(c2, c1 != c2);
        }
    }
    Node* first() { /// Return the min element of the subtree rooted at this, splayed to the top.
        pushFlip();
        return c[0] ? c[0]->first() : (splay(), this);
    }
};

struct LinkCut {
    vector<Node> node;
    LinkCut(int N) : node(N) {}

    void link(int u, int v) { // add an edge (u, v)
        assert(!connected(u, v));
        makeRoot(&node[u]);
        node[u].pp = &node[v];
    }
    void cut(int u, int v) { // remove an edge (u, v)
        Node *x = &node[u], *top = &node[v];
        makeRoot(top); x->splay();
        assert(top == (x->pp ?: x->c[0]));
        if (x->pp) x->pp = 0;
        else {
            x->c[0] = top->p = 0;
            x->fix();
        }
    }
    bool connected(int u, int v) { // are u, v in the same tree?
        Node* nu = access(&node[u])->first();
        return nu == access(&node[v])->first();
    }
    void makeRoot(Node* u) { /// Move u to root of represented tree.
        access(u);
        u->splay();
        if(u->c[0]) {
            u->c[0]->p = 0;
            u->c[0]->flip ^= 1;
            u->c[0]->pp = u;
            u->c[0] = 0;
            u->fix();
        }
    }
    Node* access(Node* u) { /// Move u to root aux tree. Return the root of the root aux tree.
        u->splay();
        while (Node* pp = u->pp) {
            pp->splay(); u->pp = 0;
            if (pp->c[1]) {
                pp->c[1]->p = 0; pp->c[1]->pp = pp; }
            pp->c[1] = u; pp->fix(); u = pp;
        }
        return u;
    }
};

2.12 Mo Queries

void add(int ind, int end) { ... } // add a[ind] (end = 0 or 1)
void del(int ind, int end) { ... } // remove a[ind]
int calc() { ... } // compute current answer

vi mo(vector<pii> Q) {
    int L = 0, R = 0, blk = 350; // ~N/sqrt(Q)
    vi s(sz(Q)), res = s;
#define K(x) pii(x.first/blk, x.second ^ -(x.first/blk & 1))
    iota(all(s), 0);
    sort(all(s), [&](int s, int t){ return K(Q[s]) < K(Q[t]); });
    for (int qi : s) {
        pii q = Q[qi];
        while (L > q.first) add(--L, 0);
        while (R < q.second) add(R++, 1);
        while (L < q.first) del(L++, 0);
        while (R > q.second) del(--R, 1);
        res[qi] = calc();
    }
    return res;
}

vi moTree(vector<array<int, 2>> Q, vector<vi>& ed, int root=0){
    int N = sz(ed), pos[2] = {}, blk = 350; // ~N/sqrt(Q)
    vi s(sz(Q)), res = s, I(N), L(N), R(N), in(N), par(N);
    add(0, 0), in[0] = 1;
    auto dfs = [&](int x, int p, int dep, auto& f) -> void {
        par[x] = p;
        L[x] = N;
        if (dep) I[x] = N++;
        for (int y : ed[x]) if (y != p) f(y, x, !dep, f);
        if (!dep) I[x] = N++;
        R[x] = N;
    };
    dfs(root, -1, 0, dfs);
#define K(x) pii(I[x[0]] / blk, I[x[1]] ^ -(I[x[0]] / blk & 1))
    iota(all(s), 0);
    sort(all(s), [&](int s, int t){ return K(Q[s]) < K(Q[t]); });
    for (int qi : s) rep(end,0,2) {
        int &a = pos[end], b = Q[qi][end], i = 0;
#define step(c) { if (in[c]) { del(a, end); in[a] = 0; } \
    else { add(c, end); in[c] = 1; } a = c; }
        while (!(L[b] <= L[a] && R[a] <= R[b]))
            I[i++] = b, b = par[b];
        while (a != b) step(par[a]);
        while (i--) step(I[i]);
        if (end) res[qi] = calc();
    }
    return res;
}
PTIT.Nutriboost 9

// solve dynamic connectivity problem. vector<Arr::Node*> roots;


// - it has high mem and time usage, so be careful }; SqrtTreeItem op(const SqrtTreeItem &a, const
(both TLE and MLE on SqrtTreeItem &b) {
// https://oj.vnoi.info/problem/hello22_schoolplan) return a + b; //just change this operation for
// different problems,no change is required inside
// Tested: 2.14 Range Minimum Query the code
// - }
https://judge.yosupo.jp/problem/persistent_unionfind
#include "../PersistentArray.h" /* inline int log2Up(int n) {
struct PersistentDSU { return min(v[a], v[a + 1], ..., v[b - 1]) in int res = 0;
int n; constant time while ((1 << res) < n) {
using Arr = PersistentArray<int>; */ res++;
}
PersistentDSU(int _n) : n(_n) { template<class T> return res;
roots.emplace_back(A.build(std::vector<int> (n, struct RMQ { }
-1))); vector<vector<T>> jmp; //0-indexed
} RMQ(const vector<T>& V) : jmp(1, V) { struct SqrtTree {
for (int pw = 1, k = 1; pw * 2 <= sz(V); int n, llg, indexSz;
int find(int version, int u) { pw *= 2, ++k) { vector<SqrtTreeItem> v;
// Note that we can’t do path compression here jmp.emplace_back(sz(V) - pw * 2 + vector<int> clz, layers, onLayer;
int p = A.get(roots[version], u); 1); vector< vector<SqrtTreeItem> > pref, suf, between;
return p < 0 ? u : find(version, p); rep(j,0,sz(jmp[k]))
} jmp[k][j] = min(jmp[k - inline void buildBlock(int layer, int l, int r) {
1][j], jmp[k - 1][j + pref[layer][l] = v[l];
// Note that this will always create a new version, pw]); for (int i = l + 1; i < r; i++) {
// regardless of whether u and v was previously in } pref[layer][i] = op(pref[layer][i - 1], v[i]);
same component. } }
bool merge(int version, int u, int v) { T query(int a, int b) { suf[layer][r - 1] = v[r - 1];
u = find(version, u); assert(a < b); // or return inf if a == b for (int i = r - 2; i >= l; i--) {
v = find(version, v); int dep = 31 - __builtin_clz(b - a); suf[layer][i] = op(v[i], suf[layer][i + 1]);
auto ptr = roots[version]; return min(jmp[dep][a], jmp[dep][b - (1 }
if (u != v) { << dep)]); }
int sz_u = -A.get(ptr, u), sz_v = }
-A.get(ptr, v); }; inline void buildBetween(int layer, int lBound, int
if (sz_u < sz_v) swap(u, v); rBound, int betweenOffs) {
// sz[u] >= sz[v] int bSzLog = (layers[layer] + 1) >> 1;
ptr = A.set(ptr, u, -sz_u - sz_v); int bCntLog = layers[layer] >> 1;
ptr = A.set(ptr, v, u); 2.15 SQRT Tree int bSz = 1 << bSzLog;
} int bCnt = (rBound - lBound + bSz - 1) >> bSzLog;
for (int i = 0; i < bCnt; i++) {
roots.emplace_back(ptr); #include<bits/stdc++.h> SqrtTreeItem ans;
return u != v; using namespace std; for (int j = i; j < bCnt; j++) {
} SqrtTreeItem add = suf[layer][lBound + (j <<
/*Given an array a that contains n elements and the bSzLog)];
int component_size(int version, int u) { operation op that satisfies associative property: ans = (i == j) ? add : op(ans, add);
return -A.get(roots[version], find(version, u)); (x op y) op z=x op (y op z) is true for any x, y, z. between[layer - 1][betweenOffs + lBound + (i <<
} bCntLog) + j] = ans;
The following implementation of Sqrt Tree can perform }
bool same_component(int version, int u, int v) { the following operations: }
return find(version, u) == find(version, v); build in O(nloglogn), }
} answer queries in O(1) and update an element in
O(sqrt(n)).*/ inline void buildBetweenZero() {
Arr A; int bSzLog = (llg + 1) >> 1;
#define SqrtTreeItem int//change for the type you want
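The comment above credits the Sqrt Tree with O(sqrt(n)) point updates because a change only forces the block containing that element to be recomputed. As a minimal illustration of that block-decomposition idea, here is a plain sqrt decomposition for range sums (a sketch only, not the Sqrt Tree itself; names are illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Plain sqrt decomposition for range sums: build O(n), point update
// O(1) (only one block aggregate is adjusted), range query O(sqrt(n)).
struct SqrtDecomp {
    int n, bsz;
    vector<long long> a, blk; // blk[i] = sum of block i
    SqrtDecomp(const vector<long long>& v) : n((int)v.size()), a(v) {
        bsz = max(1, (int)sqrt((double)n));
        blk.assign((n + bsz - 1) / bsz, 0);
        for (int i = 0; i < n; i++) blk[i / bsz] += a[i];
    }
    void update(int i, long long x) { blk[i / bsz] += x - a[i]; a[i] = x; }
    long long query(int l, int r) { // inclusive [l, r]
        long long s = 0;
        while (l <= r && l % bsz) s += a[l++];           // left partial block
        while (l + bsz - 1 <= r) s += blk[l / bsz], l += bsz; // whole blocks
        while (l <= r) s += a[l++];                       // right partial block
        return s;
    }
};
```

The Sqrt Tree layers this same decomposition recursively and precomputes between-block aggregates to get O(1) queries.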
PTIT.Nutriboost 10
for (int i = 0; i < indexSz; i++) { } int bSzLog = (llg + 1) >> 1;
v[n + i] = suf[0][i << bSzLog]; if (l + 1 == r) { int bSz = 1 << bSzLog;
} return op(v[l], v[r]); indexSz = (n + bSz - 1) >> bSzLog;
build(1, n, n + indexSz, (1 << llg) - n); } v.resize(n + indexSz);
} int layer = onLayer[clz[(l - base) ^ (r - base)]]; pref.assign(layers.size(), vector<SqrtTreeItem>(n +
int bSzLog = (layers[layer] + 1) >> 1; indexSz));
inline void updateBetweenZero(int bid) { int bCntLog = layers[layer] >> 1; suf.assign(layers.size(), vector<SqrtTreeItem>(n +
int bSzLog = (llg + 1) >> 1; int lBound = (((l - base) >> layers[layer]) << indexSz));
v[n + bid] = suf[0][bid << bSzLog]; layers[layer]) + base; between.assign(betweenLayers,
update(1, n, n + indexSz, (1 << llg) - n, n + bid); int lBlock = ((l - lBound) >> bSzLog) + 1; vector<SqrtTreeItem>((1 << llg) + bSz));
} int rBlock = ((r - lBound) >> bSzLog) - 1; build(0, 0, n, 0);
SqrtTreeItem ans = suf[layer][l]; }
void build(int layer, int lBound, int rBound, int if (lBlock <= rBlock) { };
betweenOffs) { SqrtTreeItem add = (layer == 0) ? ( int main() {
if (layer >= (int)layers.size()) { query(n + lBlock, n + rBlock, (1 << int i, j, k, n, m, q, l, r;
return; llg) - n, n) cin >> n;
} ) : ( vector<int> v;
int bSz = 1 << ((layers[layer] + 1) >> 1); between[layer - 1][betweenOffs + for(i = 0; i < n; i++) cin >> k, v.push_back(k);
for (int l = lBound; l < rBound; l += bSz) { lBound + (lBlock << bCntLog) + SqrtTree t = SqrtTree(v);
int r = min(l + bSz, rBound); rBlock] cin >> q;
buildBlock(layer, l, r); ); while(q--) {
build(layer + 1, l, r, betweenOffs); ans = op(ans, add); cin >> l >> r;
} } --l, --r;
if (layer == 0) { ans = op(ans, pref[layer][r]); cout << t.query(l, r) << endl;
buildBetweenZero(); return ans; }
} else { } }
buildBetween(layer, lBound, rBound, betweenOffs); //
} inline SqrtTreeItem query(int l, int r) { https://cp-algorithms.com/data_structures/sqrt-tree.html
} return query(l, r, 0, 0);
}
void update(int layer, int lBound, int rBound, int
betweenOffs, int x) { inline void update(int x, const SqrtTreeItem &item) { 2.16 STL Treap
if (layer >= (int)layers.size()) { v[x] = item;
return; update(0, 0, n, 0, x);
} } struct Node {
int bSzLog = (layers[layer] + 1) >> 1; Node *l = 0, *r = 0;
int bSz = 1 << bSzLog; SqrtTree(const vector<SqrtTreeItem>& a) int val, y, c = 1;
int blockIdx = (x - lBound) >> bSzLog; : n((int)a.size()), llg(log2Up(n)), v(a), clz(1 << Node(int val) : val(val), y(rand()) {}
int l = lBound + (blockIdx << bSzLog); llg), onLayer(llg + 1) { void recalc();
int r = min(l + bSz, rBound); clz[0] = 0; };
buildBlock(layer, l, r); for (int i = 1; i < (int)clz.size(); i++) {
if (layer == 0) { clz[i] = clz[i >> 1] + 1; int cnt(Node* n) { return n ? n->c : 0; }
updateBetweenZero(blockIdx); } void Node::recalc() { c = cnt(l) + cnt(r) + 1; }
} else { int tllg = llg;
buildBetween(layer, lBound, rBound, betweenOffs); while (tllg > 1) { template<class F> void each(Node* n, F f) {
} onLayer[tllg] = (int)layers.size(); if (n) { each(n->l, f); f(n->val); each(n->r,
update(layer + 1, l, r, betweenOffs, x); layers.push_back(tllg); f); }
} tllg = (tllg + 1) >> 1; }
}
inline SqrtTreeItem query(int l, int r, int for (int i = llg - 1; i >= 0; i--) { pair<Node*, Node*> split(Node* n, int k) {
betweenOffs, int base) { onLayer[i] = max(onLayer[i], onLayer[i + 1]); if (!n) return {};
if (l == r) { } if (cnt(n->l) >= k) { // "n->val >= k" for
return v[l]; int betweenLayers = max(0, (int)layers.size() - 1); lower_bound(k)
auto pa = split(n->l, k);
n->l = pa.second; node[seg] += val; assert(0 <= l && l <= r && r < n);
n->recalc(); return; int k = trunc(log2(r - l + 1));
return {pa.first, n}; } return calc(ans[l][k], ans[r - (1 << k) +
} else { int mid = (l + r)/2; 1][k]);
auto pa = split(n->r, k - cnt(n->l) - if(p <= mid){ }
1); // and just "k" modify(2*seg + 1, l, mid, p, val); };
n->r = pa.first; }else{
n->recalc(); modify(2*seg + 2, mid + 1, r, p, val);
return {n, pa.second}; }
} node[seg] = node[2*seg + 1] + node[2*seg + 2];
} } 2.19 Trie
Node* merge(Node* l, Node* r) { int sum(int seg, int l, int r, int a, int b){
if (!l) return r; if(l > b || r < a) return 0; const int MN = 26; // size of alphabet
if (!r) return l; if(l >= a && r <= b) return node[seg]; const int MS = 100010; // Number of states.
if (l->y > r->y) { int mid = (l + r)/2;
l->r = merge(l->r, r); return sum(2*seg + 1, l, mid, a, b) + sum(2*seg + struct trie{
l->recalc(); 2, mid + 1, r, a, b); struct node{
return l; } int c;
} else { int a[MN];
r->l = merge(l, r->l); };
r->recalc();
return r; 2.18 Sparse Table node tree[MS];
} int nodes;
}
void clear(){
Node* ins(Node* t, Node* n, int pos) { template <typename T, typename func = function<T(const tree[nodes].c = 0;
auto pa = split(t, pos); T, const T)>> memset(tree[nodes].a, -1, sizeof tree[nodes].a);
return merge(merge(pa.first, n), pa.second); struct SparseTable { nodes++;
} func calc; }
int n;
// Example application: move the range [l, r) to index k vector<vector<T>> ans; void init(){
void move(Node*& t, int l, int r, int k) { nodes = 0;
Node *a, *b, *c; SparseTable() {} clear();
tie(a,b) = split(t, l); tie(b,c) = split(b, r - }
l); SparseTable(const vector<T>& a, const func& f) :
if (k <= l) t = merge(ins(a, b, k), c); n(a.size()), calc(f) { int add(const string &s, bool query = 0){
else t = merge(a, ins(c, b, k - r)); int last = trunc(log2(n)) + 1; int cur_node = 0;
} ans.resize(n); for(int i = 0; i < s.size(); ++i){
for (int i = 0; i < n; i++){ int id = gid(s[i]);
ans[i].resize(last); if(tree[cur_node].a[id] == -1){
} if(query) return 0;
2.17 Segment Tree for (int i = 0; i < n; i++){ tree[cur_node].a[id] = nodes;
ans[i][0] = a[i]; clear();
} }
#include <bits/stdc++.h> for (int j = 1; j < last; j++){ cur_node = tree[cur_node].a[id];
using namespace std; for (int i = 0; i <= n - (1 << j); i++){ }
ans[i][j] = calc(ans[i][j - 1], ans[i + if(!query) tree[cur_node].c++;
const int N = 1e5 + 10; (1 << (j - 1))][j - 1]); return tree[cur_node].c;
} }
int node[4*N]; }
} };
void modify(int seg, int l, int r, int p, int val){
if(l == r){ T query(int l, int r){
2.20 Wavelet Tree if(lo == hi) return lo; line() {};
int inLeft = b[r] - b[l - 1], lb = b[l - 1], rb = line(long a, long b) : a(a), b(b) {};
b[r]; bool operator < (const line &A) const {
if(k <= inLeft) return this->l->kth(lb + 1, rb, k); return pll(a,b) < pll(A.a,A.b);
const int MAXN = (int)3e5 + 9; return this->r->kth(l - lb, r - rb, k - inLeft); }
const int MAXV = (int)1e9 + 9; //maximum value of any } };
element in array //count of numbers in [l, r] Less than or equal to k
//array values can be negative too, use appropriate int LTE(int l, int r, int k) { bool bad(line A, line B, line C){
minimum and maximum value if(l > r || k < lo) return 0; return (C.b - B.b) * (A.a - B.a) <= (B.b - A.b) *
struct wavelet_tree { if(hi <= k) return r - l + 1; (B.a - C.a);
int lo, hi; int lb = b[l - 1], rb = b[r]; }
wavelet_tree *l, *r; return this->l->LTE(lb + 1, rb, k) + this->r->LTE(l
int *b, *c, bsz, csz; // c holds the prefix sum of - lb, r - rb, k); void addLine(vector<line> &memo, line cur){
elements } int k = memo.size();
//count of numbers in [l, r] equal to k while (k >= 2 && bad(memo[k - 2], memo[k - 1],
wavelet_tree() { int count(int l, int r, int k) { cur)){
lo = 1; if(l > r || k < lo || k > hi) return 0; memo.pop_back();
hi = 0; if(lo == hi) return r - l + 1; k--;
bsz = 0; int lb = b[l - 1], rb = b[r]; }
csz = 0, l = NULL; int mid = (lo + hi) >> 1; memo.push_back(cur);
r = NULL; if(k <= mid) return this->l->count(lb + 1, rb, k); }
} return this->r->count(l - lb, r - rb, k);
} long Fn(line A, long x){
void init(int *from, int *to, int x, int y) { //sum of numbers in [l ,r] less than or equal to k return A.a * x + A.b;
lo = x, hi = y; int sum(int l, int r, int k) { }
if(from >= to) return; if(l > r or k < lo) return 0;
int mid = (lo + hi) >> 1; if(hi <= k) return c[r] - c[l - 1]; long query(vector<line> &memo, long x){
auto f = [mid](int x) { int lb = b[l - 1], rb = b[r]; int lo = 0, hi = memo.size() - 1;
return x <= mid; return this->l->sum(lb + 1, rb, k) + this->r->sum(l while (lo != hi){
}; - lb, r - rb, k); int mi = (lo + hi) / 2;
b = (int*)malloc((to - from + 2) * sizeof(int)); } if (Fn(memo[mi], x) > Fn(memo[mi + 1], x)){
bsz = 0; ~wavelet_tree() { lo = mi + 1;
b[bsz++] = 0; delete l; }
c = (int*)malloc((to - from + 2) * sizeof(int)); delete r; else hi = mi;
csz = 0; } }
c[csz++] = 0; }; return Fn(memo[lo], x);
for(auto it = from; it != to; it++) { wavelet_tree t; }
b[bsz] = (b[bsz - 1] + f(*it));
c[csz] = (c[csz - 1] + (*it)); const int N = 1e6 + 1;
bsz++; long dp[N];
csz++;
} 3 Dynamic Programming Optimization int main()
if(hi == lo) return; {
auto pivot = stable_partition(from, to, f); 3.1 Convex Hull Trick fastio;
l = new wavelet_tree(); int n, c; cin >> n >> c;
l->init(from, pivot, lo, mid); vector<line> memo;
r = new wavelet_tree(); #define long long long for (int i = 1; i <= n; i++){
r->init(pivot, to, mid + 1, hi); #define pll pair <long, long> long val; cin >> val;
} #define all(c) c.begin(), c.end() addLine(memo, {-2 * val, val * val + dp[i -
//kth smallest element in [l, r] #define fastio ios_base::sync_with_stdio(false); 1]});
//for array [1,2,1,3,5] 2nd smallest is 1 and 3rd cin.tie(0) dp[i] = query(memo, val) + val * val + c;
smallest is 2 }
int kth(int l, int r, int k) { struct line{ cout << dp[n] << ’\n’;
if(l > r) return 0; long a, b;
return 0; }; for (int i = 0; i < yp.size(); ++i) {
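The addLine/query pair above maintains the lower envelope, assuming lines are inserted in decreasing slope order (as the bad() test requires). A compact standalone sketch of the same pattern with illustrative values (minimization):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Minimal convex hull trick (minimization). Lines y = a*x + b must be
// added in decreasing order of slope a; query(x) binary-searches the
// lower envelope, mirroring the notebook's addLine/query pair.
struct CHT {
    vector<pair<ll, ll>> h; // (a, b)
    bool bad(pair<ll, ll> A, pair<ll, ll> B, pair<ll, ll> C) {
        // B is dominated if C overtakes A no later than B does
        return (C.second - B.second) * (A.first - B.first) <=
               (B.second - A.second) * (B.first - C.first);
    }
    void add(ll a, ll b) {
        pair<ll, ll> cur{a, b};
        while (h.size() >= 2 && bad(h[h.size() - 2], h.back(), cur))
            h.pop_back();
        h.push_back(cur);
    }
    ll f(int i, ll x) { return h[i].first * x + h[i].second; }
    ll query(ll x) {
        int lo = 0, hi = (int)h.size() - 1;
        while (lo < hi) {
            int mi = (lo + hi) / 2;
            if (f(mi, x) > f(mi + 1, x)) lo = mi + 1; else hi = mi;
        }
        return f(lo, x);
    }
};
```

In the dp main above, the slopes are -2*val, so the decreasing-slope requirement holds exactly when the input values are increasing.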
} for (int j = i + 1; j < yp.size() && j < i + 7;
double dist(const point &o, const point &p) { ++j) {
double a = p.x - o.x, b = p.y - o.y; d = min(d, dist(yp[i], yp[j]));
return sqrt(a * a + b * b); }
3.2 Divide and Conquer } }
return d;
double cp(vector<point> &p, vector<point> &x, }
/** vector<point> &y) {
* recurrence: if (p.size() < 4) { double closest_pair(vector<point> &p) {
* dp[k][i] = min dp[k-1][j] + c[i][j - 1], for all double best = 1e100; vector<point> x(p.begin(), p.end());
j > i; for (int i = 0; i < p.size(); ++i) sort(x.begin(), x.end(), [](const point &a, const
* for (int j = i + 1; j < p.size(); ++j) point &b) {
* "comp" computes dp[k][i] for all i in O(n log n) (k best = min(best, dist(p[i], p[j])); return a.x < b.x;
is fixed) return best; });
* } vector<point> y(p.begin(), p.end());
* Problems: sort(y.begin(), y.end(), [](const point &a, const
* https://icpc.kattis.com/problems/branch int ls = (p.size() + 1) >> 1; point &b) {
* http://codeforces.com/contest/321/problem/E double l = (p[ls - 1].x + p[ls].x) * 0.5; return a.y < b.y;
* */ vector<point> xl(ls), xr(p.size() - ls); });
unordered_set<int> left; return cp(p, x, y);
void comp(int l, int r, int le, int re) { for (int i = 0; i < ls; ++i) { }
if (l > r) return; xl[i] = x[i];
left.insert(x[i].id);
int mid = (l + r) >> 1; }
for (int i = ls; i < p.size(); ++i) { 4.2 Convex Diameter
int best = max(mid + 1, le); xr[i - ls] = x[i];
dp[cur][mid] = dp[cur ^ 1][best] + cost(mid, best - }
1); struct point{
for (int i = best; i <= re; i++) { vector<point> yl, yr; int x, y;
if (dp[cur][mid] > dp[cur ^ 1][i] + cost(mid, i - vector<point> pl, pr; };
1)) { yl.reserve(ls); yr.reserve(p.size() - ls);
best = i; pl.reserve(ls); pr.reserve(p.size() - ls); struct vec{
dp[cur][mid] = dp[cur ^ 1][i] + cost(mid, i - 1); for (int i = 0; i < p.size(); ++i) { int x, y;
} if (left.count(y[i].id)) };
} yl.push_back(y[i]);
else vec operator - (const point &A, const point &B){
comp(l, mid - 1, le, best); yr.push_back(y[i]); return vec{A.x - B.x, A.y - B.y};
comp(mid + 1, r, best, re); }
} if (left.count(p[i].id))
pl.push_back(p[i]); int cross(vec A, vec B){
else return A.x*B.y - A.y*B.x;
pr.push_back(p[i]); }
}
4 Geometry int cross(point A, point B, point C){
double dl = cp(pl, xl, yl); int val = A.x*(B.y - C.y) + B.x*(C.y - A.y) +
4.1 Closest Pair Problem double dr = cp(pr, xr, yr); C.x*(A.y - B.y);
double d = min(dl, dr); if(val == 0)
vector<point> yp; yp.reserve(p.size()); return 0; // collinear
struct point { for (int i = 0; i < p.size(); ++i) { if(val < 0)
double x, y; if (fabs(y[i].x - l) < d) return 1; // clockwise
int id; yp.push_back(y[i]); return -1; //counter clockwise
point() {} } }
point (double a, double b) : x(a), y(b) {}
vector <point> findConvexHull(vector <point> points){ int i, maxi, j, maxj; }
vector <point> convex; i = maxi = is;
sort(points.begin(), points.end(), [](const point j = maxj = js;
&A, const point &B){ do{
return (A.x == B.x)? (A.y < B.y): (A.x < B.x); int ni = (i+1)%n, nj = (j+1)%n; 4.4 Polygon Area
}); if(cross(convexHull[ni] - convexHull[i],
vector <point> Up, Down; convexHull[nj] - convexHull[j]) <= 0){
point A = points[0], B = points.back(); j = nj; #include <bits/stdc++.h>
Up.push_back(A); }else{ using namespace std;
Down.push_back(A); i = ni; struct Point {
} int x, y;
for(int i = 0; i < points.size(); i++){ int d = dist(convexHull[i], convexHull[j]); Point(int a = 0, int b = 0) : x(a), y(b) {}
if(i == points.size()-1 || cross(A, points[i], if(d > maxd){ friend istream &operator>>(istream &in, Point
B) > 0){ maxd = d; &p) {
while(Up.size() > 2 && maxi = i; int x, y;
cross(Up[Up.size()-2], Up[Up.size()-1], maxj = j; in >> p.x >> p.y;
points[i]) <= 0) } return in;
Up.pop_back(); }while(i != is || j != js); }
Up.push_back(points[i]); return sqrt(maxd); };
} } int main() {
if(i == points.size()-1 || cross(A, points[i], int n;
B) < 0){ cin >> n;
while(Down.size() > 2 && vector<Point> points(n);
cross(Down[Down.size()-2], 4.3 Pick Theorem for (auto &p : points) { cin >> p; }
Down[Down.size()-1], points[i]) >= 0) points.push_back(points[0]);
Down.pop_back();
Down.push_back(points[i]); struct point{ // Already rotated in clockwise
} ll x, y; long long area = 0;
} }; for (int i = 0; i < points.size(); i++) {
for(int i = 0; i < Up.size(); i++) area +=
convex.push_back(Up[i]); //Pick: S = I + B/2 - 1 (1LL * points[i].x * points[i + 1].y
for(int i = Down.size()-2; i > 0; i--) - 1LL * points[i].y * points[i +
convex.push_back(Down[i]); ld polygonArea(vector <point> &points){ 1].x);
return convex; int n = (int)points.size(); }
} ld area = 0.0; cout << labs(area) << ’\n’;
int j = n-1; }
int dist(point A, point B){ for(int i = 0; i < n; i++){
return (A.x - B.x)*(A.x - B.x) + (A.y - B.y)*(A.y - area += (points[j].x + points[i].x) *
B.y); (points[j].y - points[i].y);
} j = i; 4.5 Square
}
double findConvexDiameter(vector <point> convexHull){
int n = convexHull.size(); return abs(area/2.0);
} typedef long double ld;
int is = 0, js = 0;
for(int i = 1; i < n; i++){ ll boundary(vector <point> points){ const ld eps = 1e-12;
if(convexHull[i].y > convexHull[is].y) int n = (int)points.size(); int cmp(ld x, ld y = 0, ld tol = eps) {
is = i; ll num_bound = 0; return ( x <= y + tol) ? (x + tol < y) ? -1 : 0 : 1;
if(convexHull[js].y > convexHull[i].y) for(int i = 0; i < n; i++){ }
js = i; ll dx = (points[i].x - points[(i+1)%n].x);
} ll dy = (points[i].y - points[(i+1)%n].y); struct point{
num_bound += abs(__gcd(dx, dy)) - 1; ld x, y;
int maxd = dist(convexHull[is], convexHull[js]); } point(ld a, ld b) : x(a), y(b) {}
return num_bound; point() {}
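Pick's theorem S = I + B/2 - 1 rearranges to I = S - B/2 + 1, so interior lattice points follow directly from the polygon area and the per-edge gcd boundary count used above. A self-contained integer sketch of that combination (illustrative name, integer shoelace instead of the ld version):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Interior lattice points of a simple lattice polygon via Pick's
// theorem: I = S - B/2 + 1, where twiceArea is the shoelace value
// (2S) and B counts lattice points on the boundary (gcd per edge).
long long interiorPoints(const vector<pair<long long, long long>>& p) {
    int n = (int)p.size();
    long long twiceArea = 0, B = 0;
    for (int i = 0; i < n; i++) {
        int j = (i + 1) % n;
        twiceArea += p[i].first * p[j].second - p[j].first * p[i].second;
        B += __gcd(llabs(p[i].first - p[j].first),
                   llabs(p[i].second - p[j].second));
    }
    twiceArea = llabs(twiceArea);
    return (twiceArea - B + 2) / 2; // 2I = 2S - B + 2
}
```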
}; (cmp(s1.x2, s2.x1) != -1 && cmp(s1.x2, s2.x2) !=
1))
struct square{ return true;
ld x1, x2, y1, y2, return false; cR = abc / sqrt((a + b + c)(a + b - c)(a + c - b)(b + c - a))
a, b, c; }
point edges[4];
square(ld _a, ld _b, ld _c) { ld min_dist(square &s1, square &s2) {
a = _a, b = _b, c = _c; if (inside(s1, s2) || inside(s2, s1))
5 Graphs
x1 = a - c * 0.5; return 0;
x2 = a + c * 0.5; 5.1 Bridges
y1 = b - c * 0.5; ld ans = 1e100;
y2 = b + c * 0.5; for (int i = 0; i < 4; ++i)
edges[0] = point(x1, y1); for (int j = 0; j < 4; ++j) struct Graph {
edges[1] = point(x2, y1); ans = min(ans, min_dist(s1.edges[i], vector<vector<Edge>> g;
edges[2] = point(x2, y2); s2.edges[j])); vector<int> vi, low, d, pi, is_b; // vi = visited
edges[3] = point(x1, y2); int bridges_computed;
} int ticks, edges;
}; if (inside_hori(s1, s2) || inside_hori(s2, s1)) {
if (cmp(s1.y1, s2.y2) != -1) Graph(int n, int m) {
ld min_dist(point &a, point &b) { ans = min(ans, s1.y1 - s2.y2); g.assign(n, vector<Edge>());
ld x = a.x - b.x, else is_b.assign(m, 0);
y = a.y - b.y; if (cmp(s2.y1, s1.y2) != -1) vi.resize(n);
return sqrt(x * x + y * y); ans = min(ans, s2.y1 - s1.y2); low.resize(n);
} } d.resize(n);
pi.resize(n);
bool point_in_box(square s1, point p) { if (inside_vert(s1, s2) || inside_vert(s2, s1)) { edges = 0;
if (cmp(s1.x1, p.x) != 1 && cmp(s1.x2, p.x) != -1 && if (cmp(s1.x1, s2.x2) != -1) bridges_computed = 0;
cmp(s1.y1, p.y) != 1 && cmp(s1.y2, p.y) != -1) ans = min(ans, s1.x1 - s2.x2); }
return true; else
return false; if (cmp(s2.x1, s1.x2) != -1) void addEdge(int u, int v) {
} ans = min(ans, s2.x1 - s1.x2); g[u].push_back(Edge(v, edges));
} g[v].push_back(Edge(u, edges));
bool inside(square &s1, square &s2) { edges++;
for (int i = 0; i < 4; ++i) return ans; }
if (point_in_box(s2, s1.edges[i])) }
return true; void dfs(int u) {
vi[u] = true;
return false; d[u] = low[u] = ticks++;
} for (int i = 0; i < g[u].size(); i++) {
4.6 Triangle int v = g[u][i].to;
bool inside_vert(square &s1, square &s2) { if (v == pi[u]) continue;
if ((cmp(s1.y1, s2.y1) != -1 && cmp(s1.y1, s2.y2) != Let a, b, c be length of the three sides of a triangle. if (!vi[v]) {
1) || pi[v] = u;
(cmp(s1.y2, s2.y1) != -1 && cmp(s1.y2, s2.y2) != dfs(v);
1)) p = (a + b + c) ∗ 0.5 if(d[u] < low[v]) is_b[g[u][i].id] =
return true; true;
return false; The inradius is defined by: low[u] = min(low[u], low[v]);
} } else {
s low[u] = min(low[u], low[v]);
bool inside_hori(square &s1, square &s2) { (p − a)(p − b)(p − c) }
if ((cmp(s1.x1, s2.x1) != -1 && cmp(s1.x1, s2.x2) != iR = }
p }
1) ||
The radius of its circumcircle is given by the formula: // multiple edges from a to b are not allowed.
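Both radii above follow directly from the side lengths; a small standalone sketch (function names are illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Inradius and circumradius of a triangle from its side lengths.
// r = sqrt((p-a)(p-b)(p-c)/p) with p the semiperimeter,
// R = abc / sqrt((a+b+c)(a+b-c)(a+c-b)(b+c-a)).
double inradius(double a, double b, double c) {
    double p = (a + b + c) * 0.5;
    return sqrt((p - a) * (p - b) * (p - c) / p);
}
double circumradius(double a, double b, double c) {
    return a * b * c /
           sqrt((a + b + c) * (a + b - c) * (a + c - b) * (b + c - a));
}
```

For the 3-4-5 right triangle this gives r = 1 and R = 2.5 (half the hypotenuse), as expected.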
// (they could be detected as a bridge). priority_queue<edge> q; seen[r] = r;
// if we need to handle this, just count how many q.push(edge(start, 0)); vector<Edge> Q(n), in(n, {-1,-1}), comp;
edges there are from a to b. while (!q.empty()) { deque<tuple<int, int, vector<Edge>>> cycs;
void compBridges() { int node = q.top().to; rep(s,0,n) {
fill(pi.begin(), pi.end(), -1); long long dist = q.top().w; int u = s, qi = 0, w;
fill(vi.begin(), vi.end(), false); q.pop(); while (seen[u] < 0) {
fill(d.begin(), d.end(), 0); if (dist > d[node]) continue; if (!heap[u]) return {-1,{}};
fill(low.begin(), low.end(), 0); for (int i = 0; i < g[node].size(); i++) { Edge e = heap[u]->top();
ticks = 0; int to = g[node][i].to; heap[u]->delta -= e.w,
for (int i = 0; i < g.size(); i++) long long w_extra = g[node][i].w; pop(heap[u]);
if (!vi[i]) dfs(i); if (dist + w_extra < d[to]) { Q[qi] = e, path[qi++] = u,
bridges_computed = 1; p[to] = node; seen[u] = s;
} d[to] = dist + w_extra; res += e.w, u = uf.find(e.a);
q.push(edge(to, d[to])); if (seen[u] == s) { /// found
map<int, vector<Edge>> bridgesTree() { } cycle, contract
if (!bridges_computed) compBridges(); } Node* cyc = 0;
int n = g.size(); } int end = qi, time =
Dsu dsu(n); return {p, d}; uf.time();
for (int i = 0; i < n; i++) } do cyc = merge(cyc, heap[w
for (auto e : g[i]) = path[--qi]]);
if (!is_b[e.id]) dsu.Join(i, e.to); while (uf.join(u, w));
map<int, vector<Edge>> tree; u = uf.find(u), heap[u] =
for (int i = 0; i < n; i++) 5.3 Directed MST cyc, seen[u] = -1;
for (auto e : g[i]) cycs.push_front({u, time,
if (is_b[e.id]) {&Q[qi], &Q[end]}});
tree[dsu.Find(i)].emplace_back(dsu.Find(e.to), struct Edge { int a, b; ll w; }; }
e.id); struct Node { /// lazy skew heap node }
return tree; Edge key; rep(i,0,qi) in[uf.find(Q[i].b)] = Q[i];
} Node *l, *r; }
}; ll delta;
void prop() { for (auto& [u,t,comp] : cycs) { // restore sol
key.w += delta; (optional)
if (l) l->delta += delta; uf.rollback(t);
5.2 Dijkstra if (r) r->delta += delta; Edge inEdge = in[u];
delta = 0; for (auto& e : comp) in[uf.find(e.b)] =
} e;
struct edge { Edge top() { prop(); return key; } in[uf.find(inEdge.b)] = inEdge;
int to; }; }
long long w; Node *merge(Node *a, Node *b) { rep(i,0,n) par[i] = in[i].a;
edge() {} if (!a || !b) return a ?: b; return {res, par};
edge(int a, long long b) : to(a), w(b) {} a->prop(), b->prop(); }
bool operator<(const edge &e) const { if (a->key.w > b->key.w) swap(a, b);
return w > e.w; swap(a->l, (a->r = merge(b, a->r)));
} return a;
}; } 5.4 Edge Coloring
void pop(Node*& a) { a->prop(); a = merge(a->l, a->r); }
typedef vector<vector<edge>> graph; pair<ll, vi> dmst(int n, int r, vector<Edge>& g) {
const long long inf = 1000000LL * 10000000LL; pair<ll, vi> dmst(int n, int r, vector<Edge>& g) { vi edgeColoring(int N, vector<pii> eds) {
pair<vector<int>, vector<long long>> dijkstra(graph& g, RollbackUF uf(n); vi cc(N + 1), ret(sz(eds)), fan(N), free(N),
int start) { vector<Node*> heap(n); loc;
int n = g.size(); for (Edge e : g) heap[e.b] = merge(heap[e.b], for (pii e : eds) ++cc[e.first], ++cc[e.second];
vector<long long> d(n, inf); new Node{e}); int u, v, ncols = *max_element(all(cc)) + 1;
vector<int> p(n, -1); ll res = 0; vector<vi> adj(N, vi(ncols, -1));
d[start] = 0; vi seen(n, -1), path(n), par(n); for (pii e : eds) {
tie(u, v) = e; } auto newDist = max(m[i][k] +
fan[0] = v; m[k][j], -inf);
loc.assign(ncols, 0); void dfs(int u) m[i][j] = min(m[i][j], newDist);
int at = u, end = u, d, c = free[u], ind { }
= 0, i = 0; while(g[u].size()) rep(k,0,n) if (m[k][k] < 0) rep(i,0,n)
while (d = free[v], !loc[d] && (v = { rep(j,0,n)
adj[u][d]) != -1) int v = g[u].back(); if (m[i][k] != inf && m[k][j] != inf)
loc[d] = ++ind, cc[ind] = d, g[u].pop_back(); m[i][j] = -inf;
fan[ind] = v; dfs(v); }
cc[loc[d]] = c; }
for (int cd = d; at != -1; cd ^= c ^ d, path.push_back(u);
at = adj[at][cd]) }
swap(adj[at][cd], adj[end = 5.7 Ford - Bellman
at][cd ^ c ^ d]); bool getPath(){
while (adj[fan[i]][d] != -1) { int ctEdges = 0;
int left = fan[i], right = vector<int> outDeg, inDeg; const ll inf = LLONG_MAX;
fan[++i], e = cc[i]; outDeg = inDeg = vector<int> (n + 1, 0); struct Ed { int a, b, w, s() { return a < b ? a : -a;
adj[u][e] = left; for(int i = 1; i <= n; i++) }};
adj[left][e] = u; { struct Node { ll dist = inf; int prev = -1; };
adj[right][e] = -1; ctEdges += g[i].size();
free[right] = e; outDeg[i] += g[i].size(); void bellmanFord(vector<Node>& nodes, vector<Ed>& eds,
} for(auto &u:g[i]) int s) {
adj[u][d] = fan[i]; inDeg[u]++; nodes[s].dist = 0;
adj[fan[i]][d] = u; } sort(all(eds), [](Ed a, Ed b) { return a.s() <
for (int y : {fan[0], u, end}) int ctMiddle = 0, src = 1; b.s(); });
for (int& z = free[y] = 0; for(int i = 1; i <= n; i++)
adj[y][z] != -1; z++); { int lim = sz(nodes) / 2 + 2; // /3+100 with
} if(abs(inDeg[i] - outDeg[i]) > 1) shuffled vertices
rep(i,0,sz(eds)) return 0; rep(i,0,lim) for (Ed ed : eds) {
for (tie(u, v) = eds[i]; adj[u][ret[i]] if(inDeg[i] == outDeg[i]) Node cur = nodes[ed.a], &dest =
!= v;) ++ret[i]; ctMiddle++; nodes[ed.b];
return ret; if(outDeg[i] > inDeg[i]) if (abs(cur.dist) == inf) continue;
} src = i; ll d = cur.dist + ed.w;
} if (d < dest.dist) {
if(ctMiddle != n && ctMiddle + 2 != n) dest.prev = ed.a;
return 0; dest.dist = (i < lim-1 ? d :
5.5 Eulerian Path dfs(src); -inf);
reverse(path.begin(), path.end()); }
return (path.size() == ctEdges + 1); }
struct DirectedEulerPath } rep(i,0,lim) for (Ed e : eds) {
{ }; if (nodes[e.a].dist == -inf)
int n; nodes[e.b].dist = -inf;
vector<vector<int> > g; }
vector<int> path; }
void init(int _n){ 5.6 Floyd - Warshall
n = _n;
g = vector<vector<int> > (n + 1, 5.8 Gomory Hu
vector<int> ()); const ll inf = 1LL << 62;
path.clear(); void floydWarshall(vector<vector<ll>>& m) {
} int n = sz(m); #include "PushRelabel.cpp"
rep(i,0,n) m[i][i] = min(m[i][i], 0LL);
void add_edge(int u, int v){ rep(k,0,n) rep(i,0,n) rep(j,0,n) typedef array<ll, 3> Edge;
g[u].push_back(v); if (m[i][k] != inf && m[k][j] != inf) { vector<Edge> gomoryHu(int N, vector<Edge> ed) {
vector<Edge> tree; int pa[MAX];
vi par(N); for (int k = 1; k <= n; ++k) for (int u = 0; u < n; int timer = 0;
rep(i,1,N) { ++u) { void dfs(int u, int p) {
PushRelabel D(N); // Dinic also works if (d[u][k - 1] == INT_MAX) continue; tree[++timer] = u;
for (Edge t : ed) D.addEdge(t[0], t[1], for (int i = g[u].size() - 1; i >= 0; --i) st[u] = timer;
t[2], t[2]); d[g[u][i].v][k] = min(d[g[u][i].v][k], d[u][k - dep[u] = dep[p] + 1;
tree.push_back({i, par[i], D.calc(i, 1] + g[u][i].w); pa[u] = p;
par[i])}); } for (int v : adj[u]) {
rep(j,i+1,N) if (v ^ p) {
if (par[j] == par[i] && bool flag = true; dfs(v, u);
D.leftOfMinCut(j)) par[j] = tree[++timer] = u;
i; for (int i = 0; i < n && flag; ++i) }
} if (d[i][n] != INT_MAX) }
return tree; flag = false; }
} pii up[LOG][MAX << 1];
if (flag) { int lg[MAX << 1];
return true; // return true if there is no a cycle. void buildRMQ() {
} lg[1] = 0;
5.9 Karp Min Mean Cycle for (int i = 2; i <= timer; ++i)
double ans = 1e15; lg[i] = lg[i / 2] + 1;
/** for (int u = 0; u + 1 < n; ++u) { for (int i = 1; i <= timer; ++i)
* Finds the min mean cycle, if you need the max mean if (d[u][n] == INT_MAX) continue; up[0][i] = make_pair(dep[tree[i]], tree[i]);
cycle double W = -1e15;
* just add all the edges with negative cost and print for (int k = 1; k < LOG; ++k) {
* ans * -1 for (int k = 0; k < n; ++k) int step = 1 << (k - 1);
* if (d[u][k] != INT_MAX) for (int i = 1; i + step <= timer; ++i)
* test: uva, 11090 - Going in Cycle!! W = max(W, (double)(d[u][n] - d[u][k]) / (n - up[k][i] = min(up[k - 1][i], up[k - 1][i +
* */ k)); step]);
}
const int MN = 1000; ans = min(ans, W); }
struct edge{ } int getLCA(int u, int v) {
int v; int l = st[u], r = st[v];
long long w; // printf("%.2lf\n", ans); if (l > r) swap(l, r);
edge(){} edge(int v, int w) : v(v), w(w) {} cout << fixed << setprecision(2) << ans << endl; int k = lg[r - l + 1];
}; pii ans = min(up[k][l], up[k][r - (1 << k) + 1]);
return false; return ans.se;
long long d[MN][MN]; } }
// This is a copy of g because increments the size // LCA - Euler Tour: O(NlogN) build and O(1) Query
// pass as reference if this does not matter.
int karp(vector<vector<edge> > g) {
int n = g.size(); 5.10 Konig’s Theorem
5.12 LCA
g.resize(n + 1); // this is important In any bipartite graph, the number of edges in a maximum
matching equals the number of vertices in a minimum vertex
for (int i = 0; i < n; ++i) #include "../Data Structures/RMQ.h"
if (!g[i].empty())
cover
g[n].push_back(edge(i,0)); struct LCA {
++n; 5.11 LCA Euler Tour int T = 0;
vi time, path, ret;
for(int i = 0;i<n;++i) RMQ<int> rmq;
fill(d[i],d[i]+(n+1),INT_MAX); int n, q;
vector <int> adj[MAX]; LCA(vector<vi>& C) : time(sz(C)),
d[n - 1][0] = 0; int dep[MAX], st[MAX], tree[MAX << 1]; rmq((dfs(C,0,-1), ret)) {}
void dfs(vector<vi>& C, int v, int par) { active[ps[i].x] = i; cardinality bipartite matching in G’.
time[v] = T++; }
for (int y : C[v]) if (y != par) { for (auto &p : ps) { // rotate
Therefore, the problem can be solved by finding the
path.push_back(v), if (rot & 1) p.x *= -1;
ret.push_back(time[v]); else swap(p.x, p.y); maximum cardinality matching in G’ instead.
dfs(C, y, v); } NOTE: If the paths are not necessarily disjoint, find
} } the transitive closure and solve the problem for disjoint
} return edges; paths.
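The reduction above can be sketched directly: build the out/in bipartite graph, run any maximum matching (Kuhn's algorithm here), and return n minus the matching size. A minimal self-contained sketch (names are illustrative, not the notebook's code):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimum vertex-disjoint path cover of a DAG on n nodes:
// match out-copies to in-copies over the DAG's edges with Kuhn's
// algorithm; by Konig's theorem the answer is n - max_matching.
struct PathCover {
    int n;
    vector<vector<int>> adj; // DAG edge u -> v
    vector<int> matchL, matchR;
    vector<bool> used;
    PathCover(int n) : n(n), adj(n) {}
    void addEdge(int u, int v) { adj[u].push_back(v); }
    bool tryKuhn(int u) {
        for (int v : adj[u]) {
            if (used[v]) continue;
            used[v] = true;
            if (matchR[v] == -1 || tryKuhn(matchR[v])) {
                matchL[u] = v; matchR[v] = u;
                return true;
            }
        }
        return false;
    }
    int minPaths() {
        matchL.assign(n, -1); matchR.assign(n, -1);
        int matching = 0;
        for (int u = 0; u < n; u++) {
            used.assign(n, false);
            if (tryKuhn(u)) matching++;
        }
        return n - matching;
    }
};
```

A single chain 0→1→2→3 is covered by one path; a star 0→1, 0→2 needs two.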
}
int lca(int a, int b) {
if (a == b) return a; 5.16 Planar Graph (Euler)
tie(a, b) = minmax(time[a], time[b]);
return path[rmq.query(a, b)]; 5.14 Math Euler’s formula states that if a finite, connected, planar
} graph is drawn in the plane without any edge intersections,
//dist(a,b){return depth[a] + depth[b] - Number of Spanning Trees
and v is the number of vertices, e is the number of edges
2*depth[lca(a,b)];} Create an N × N matrix mat, and for each edge a →
}; and f is the number of faces (regions bounded by edges,
b ∈ G, do mat[a][b]--, mat[b][b]++ (and mat[b][a]--,
including the outer, infinitely large region), then:
mat[a][a]++ if G is undirected). Remove the ith row and
column and take the determinant; this yields the number
5.13 Manhattan MST f +v =e+2
of directed spanning trees rooted at i (if G is undirected,
remove any row/column). It can be extended to non connected planar graphs with
struct point { Erdős–Gallai theorem c connected components:
long long x, y; A simple graph with node degrees d1 ≥ · · · ≥ dn exists iff
}; d1 + · · · + dn is even and for every k = 1 . . . n, f +v =e+c+1
// Returns a list of edges in the format (weight, u, v).
// Passing this list to Kruskal algorithm will give the Manhattan MST.
sum_{i=1}^{k} d_i <= k(k - 1) + sum_{i=k+1}^{n} min(d_i, k). 5.17 Push Relabel
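The Erdős–Gallai condition can be checked directly from a degree sequence; a small sketch (O(n^2) for clarity, function name is illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Erdos-Gallai: a sequence d1 >= ... >= dn is graphical iff the sum
// is even and for every k:
//   sum_{i<=k} d_i <= k(k-1) + sum_{i>k} min(d_i, k).
bool graphical(vector<long long> d) {
    sort(d.rbegin(), d.rend());
    int n = (int)d.size();
    long long total = accumulate(d.begin(), d.end(), 0LL);
    if (total % 2) return false;
    long long lhs = 0;
    for (int k = 1; k <= n; k++) {
        lhs += d[k - 1];
        long long rhs = 1LL * k * (k - 1);
        for (int i = k; i < n; i++) rhs += min<long long>(d[i], k);
        if (lhs > rhs) return false;
    }
    return true;
}
```

For example, (3,3,3,3) is K4 and passes, while (3,3,1,1) fails at k = 2.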
vector<tuple<long long, int, int>>
manhattan_mst_edges(vector<point> ps) { struct PushRelabel {
vector<int> ids(ps.size()); 5.15 Minimum Path Cover in DAG struct Edge {
iota(ids.begin(), ids.end(), 0); int dest, back;
vector<tuple<long long, int, int>> edges; Given a directed acyclic graph G = (V, E), we are to find ll f, c;
for (int rot = 0; rot < 4; rot++) { // for every the minimum number of vertex-disjoint paths to cover each };
rotation vertex in V. vector<vector<Edge>> g;
sort(ids.begin(), ids.end(), [&](int i, int j){ vector<ll> ec;
return (ps[i].x + ps[i].y) < (ps[j].x + We can construct a bipartite graph G′ = (V out ∪ vector<Edge*> cur;
ps[j].y); V in, E ′ ) from G, where : vector<vi> hs; vi H;
}); PushRelabel(int n) : g(n), ec(n), cur(n),
map<int, int, greater<int>> active; // (xs, id) hs(2*n), H(n) {}
for (auto i : ids) {
for (auto it = active.lower_bound(ps[i].x); V out = {v ∈ V : v has positive out − degree} void addEdge(int s, int t, ll cap, ll rcap=0) {
it != active.end(); if (s == t) return;
active.erase(it++)) { g[s].push_back({t, sz(g[t]), 0, cap});
int j = it->second; V in = {v ∈ V : v has positive in − degree} g[t].push_back({s, sz(g[s])-1, 0, rcap});
if (ps[i].x - ps[i].y > ps[j].x - }
ps[j].y) break;
E ′ = {(u, v) ∈ V out × V in : (u, v) ∈ E}
assert(ps[i].x >= ps[j].x && ps[i].y >= Then it can be shown, via König’s theorem, that G’ void addFlow(Edge& e, ll f) {
ps[j].y); Edge &back = g[e.dest][e.back];
edges.push_back({(ps[i].x - ps[j].x) + has a matching of size m if and only if there exists n − m if (!ec[e.dest] && f)
(ps[i].y - ps[j].y), i, j}); vertex-disjoint paths that cover each vertex in G, where hs[H[e.dest]].push_back(e.dest);
} n is the number of vertices in G and m is the maximum e.f += f; e.c -= f; ec[e.dest] += f;
back.f -= f; back.c += f; ec[back.dest] 5.18 SCC Kosaraju vector<vector<int>> CondensedGraph() {
-= f; vector<vector<int>> ans(total_components);
} for (int i = 0; i < int(g.size()); i++) {
ll calc(int s, int t) { for (int to : g[i]) {
int v = sz(g); H[s] = v; ec[t] = 1; // SCC = Strongly Connected Components int u = component[i], v = component[to];
vi co(2*v); co[0] = v-1; if (u != v)
rep(i,0,v) cur[i] = g[i].data(); struct SCC { ans[u].push_back(v);
for (Edge& e : g[s]) addFlow(e, e.c); vector<vector<int>> g, gr; }
vector<bool> used; }
for (int hi = 0;;) { vector<int> order, component; return ans;
while (hs[hi].empty()) if (!hi--) int total_components; }
return -ec[s]; };
int u = hs[hi].back(); SCC(vector<vector<int>>& adj) {
hs[hi].pop_back(); g = adj;
while (ec[u] > 0) // discharge u int n = g.size();
if (cur[u] == g[u].data() gr.resize(n);
+ sz(g[u])) { for (int i = 0; i < n; i++) 5.19 Tarjan SCC
H[u] = 1e9; for (auto to : g[i])
for (Edge& e : gr[to].push_back(i);
g[u]) if (e.c const int N = 20002;
&& H[u] > used.assign(n, false); struct tarjan_scc {
H[e.dest]+1) for (int i = 0; i < n; i++) int scc[MN], low[MN], d[MN], stacked[MN];
H[u] = if (!used[i]) int ticks, current_scc;
H[e.dest]+1, GenTime(i); deque<int> s; // used as stack
cur[u] tarjan_scc() {}
= &e; used.assign(n, false); void init() {
if (++co[H[u]], component.assign(n, -1); memset(scc, -1, sizeof(scc));
!--co[hi] && total_components = 0; memset(d, -1, sizeof(d));
hi < v) for (int i = n - 1; i >= 0; i--) { memset(stacked, 0, sizeof(stacked));
rep(i,0,v) int v = order[i]; s.clear();
if (hi if (!used[v]) { ticks = current_scc = 0;
< H[i] vector<int> cur_component; }
&& H[i] Dfs(cur_component, v); void compute(vector<vector<int>> &g, int u) {
< v) for (auto node : cur_component) d[u] = low[u] = ticks++;
--co[H[i]], component[node] = total_components; s.push_back(u);
H[i] } stacked[u] = true;
= } for (int i = 0; i < g[u].size(); i++) {
v } int v = g[u][i];
+ if (d[v] == -1) compute(g, v);
1; void GenTime(int node) { if (stacked[v]) low[u] = min(low[u], low[v]);
hi = H[u]; used[node] = true; }
} else if (cur[u]->c && for (auto to : g[node]) if (d[u] == low[u]) {
H[u] == if (!used[to]) int v;
H[cur[u]->dest]+1) GenTime(to); do {
addFlow(*cur[u], order.push_back(node); v = s.back(); s.pop_back();
min(ec[u], } stacked[v] = false;
cur[u]->c)); scc[v] = current_scc;
else ++cur[u]; void Dfs(vector<int>& cur, int node) { } while (u != v);
} used[node] = true; current_scc++;
} cur.push_back(node); }
bool leftOfMinCut(int a) { return H[a] >= if (!used[to]) }
sz(g); } Dfs(cur, to); };
}; }
5.20 Topological Sort if (dep[u] < dep[v]) swap(u, v); int tot; // total special vertices
int d = dep[u] - dep[v]; ll ans;
for (int i = K - 1; i >= 0; --i) void solve(int u, int p) {
vi topoSort(const vector<vi>& gr) { if (d & (1 << i)) for (int v : adj_vt[u]) {
vi indeg(sz(gr)), ret; u = up[i][u]; if (v == p) continue;
for (auto& li : gr) for (int x : li) indeg[x]++; } solve(v, u);
queue<int> q; // use priority_queue for lexic. if (u == v) return u; sz[u] = (sz[u] + sz[v]) % MOD;
largest ans. for (int i = K - 1; i >= 0; --i) { }
rep(i,0,sz(gr)) if (indeg[i] == 0) q.push(i); if (up[i][u] != up[i][v]) {
while (!q.empty()) { u = up[i][u]; for (int v : adj_vt[u]) {
int i = q.front(); // top() for priority v = up[i][v]; if (v == p) continue;
queue } int w = dep[v] - dep[u];
ret.push_back(i); } int mul = 1LL * sz[v] * (tot - sz[v] + MOD) %
q.pop(); return up[0][u]; MOD;
for (int x : gr[i]) } ans += 1LL * w * mul % MOD;
if (--indeg[x] == 0) q.push(x); ans %= MOD;
} bool inside(int u, int v) { }
return ret; return st[u] <= st[v] && en[v] <= en[u]; }
} }
/// signed main() {
vector <int> adj_vt[N]; cin.tie(0) -> sync_with_stdio(0);
int vt_root(vector <int> &ver) {
5.21 Virtual Tree sort(ver.begin(), ver.end(), [&] (const int& x, #ifdef JASPER
const int& y) { freopen("in1", "r", stdin);
return st[x] < st[y]; #endif
/* });
Used to solve problem with set of vertices int m = ver.size(); int n, q;
https://www.hackerrank.com/contests/hourrank-15/challenges/kittys-calculations-on-a-tree
for (int i = 0; i + 1 < m; ++i) { cin >> n >> q;
*/ int new_ver = lca(ver[i], ver[i + 1]);
ver.push_back(new_ver); for (int i = 1; i < n; ++i) {
const int MOD = 1e9 + 7; } int u, v;
const int N = 2e5 + 5; sort(ver.begin(), ver.end(), [&] (const int& x, cin >> u >> v;
const int K = 18; const int& y) { adj[u].push_back(v);
return st[x] < st[y]; adj[v].push_back(u);
vector <int> adj[N]; }); }
int st[N], en[N], dep[N]; ver.resize(unique(ver.begin(), ver.end()) -
int up[K][N]; ver.begin()); dfs(1, 0);
int timer = 0;
stack <int> stk; for (int _q = 1; _q <= q; ++_q) {
// LCA stk.push(ver[0]); int k;
void dfs(int u, int p) { m = ver.size(); cin >> k;
st[u] = ++timer; for (int i = 1; i < m; ++i) {
for (int v : adj[u]) { int u = ver[i]; vector <int> ver;
if (v == p) continue; while (!stk.empty() && !inside(stk.top(), u)) tot = 0;
dep[v] = dep[u] + 1; // check if v is in u’s subtree while (k--) {
up[0][v] = u; stk.pop(); int x; cin >> x;
for (int i = 1; i < K; ++i) adj_vt[stk.top()].push_back(u); sz[x] = x;
up[i][v] = up[i - 1][up[i - 1][v]]; stk.push(u); tot = (tot + x) % MOD;
dfs(v, u); } ver.push_back(x);
} return ver[0]; }
en[u] = timer; }
} int rt = vt_root(ver);
int lca(int u, int v) { int sz[N]; solve(rt, 0);
if (dep[u] != dep[v]) {
cout << ans << "\n"; A[i].swap(A[r]); tmp[i].swap(tmp[r]); double m = (l + h) / 2, f


rep(j,0,n) = p(m);
for (int x : ver) { swap(A[j][i], A[j][c]), if ((f <= 0) ^ sign) l = m;
sz[x] = 0; swap(tmp[j][i], tmp[j][c]); else h = m;
adj_vt[x].clear(); swap(col[i], col[c]); }
} double v = A[i][i]; ret.push_back((l + h) / 2);
ans = 0; rep(j,i+1,n) { }
} double f = A[j][i] / v; }
return 0; A[j][i] = 0; return ret;
} rep(k,i+1,n) A[j][k] -= f*A[i][k]; }
rep(k,0,n) tmp[j][k] -=
f*tmp[i][k];
}
rep(j,i+1,n) A[i][j] /= v;
6 Linear Algebra rep(j,0,n) tmp[i][j] /= v;
A[i][i] = 1; 6.4 Polynomial
6.1 Matrix Determinant }

/// forget A at this point, just eliminate tmp struct Poly {


double det(vector<vector<double>>& a) { backward
int n = sz(a); double res = 1; vector<double> a;
for (int i = n-1; i > 0; --i) rep(j,0,i) { double operator()(double x) const {
rep(i,0,n) { double v = A[j][i];
int b = i; double val = 0;
rep(k,0,n) tmp[j][k] -= v*tmp[i][k]; for (int i = sz(a); i--;) (val *= x) +=
rep(j,i+1,n) if (fabs(a[j][i]) > }
fabs(a[b][i])) b = j; a[i];
if (i != b) swap(a[i], a[b]), res *= -1; return val;
rep(i,0,n) rep(j,0,n) A[col[i]][col[j]] = }
res *= a[i][i]; tmp[i][j];
if (res == 0) return 0; void diff() {
return n; rep(i,1,sz(a)) a[i-1] = i*a[i];
rep(j,i+1,n) { }
double v = a[j][i] / a[i][i]; a.pop_back();
if (v != 0) rep(k,i+1,n) a[j][k] }
-= v * a[i][k]; void divroot(double x0) {
} double b = a.back(), c; a.back() = 0;
for(int i=sz(a)-1; i--;) c = a[i], a[i]
} 6.3 PolyRoots = a[i+1]*x0+b, b=c;
return res;
} a.pop_back();
#include "Polynomial.cpp" }
};
vector<double> polyRoots(Poly p, double xmin, double
6.2 Matrix Inverse xmax) {
if (sz(p.a) == 2) { return {-p.a[0]/p.a[1]}; }
vector<double> ret;
int matInv(vector<vector<double>>& A) { Poly der = p;
int n = sz(A); vi col(n); der.diff(); 7 Maths
vector<vector<double>> tmp(n, auto dr = polyRoots(der, xmin, xmax);
vector<double>(n)); dr.push_back(xmin-1);
rep(i,0,n) tmp[i][i] = 1, col[i] = i; dr.push_back(xmax+1); 7.1 Factorial Approximate
sort(all(dr));
rep(i,0,n) { rep(i,0,sz(dr)-1) {
int r = i, c = i; double l = dr[i], h = dr[i+1]; Approximate Factorial:
rep(j,i,n) rep(k,i,n) bool sign = p(l) > 0;
if (fabs(A[j][k]) > fabs(A[r][c])) if (sign ^ (p(h) > 0)) {
r = j, c = k; rep(it,0,60) { // while (h - l > √ n
if (fabs(A[r][c]) < 1e-12) return i; 1e-8) n! = 2.π.n.( )n (1)
e
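The factorial approximation above is Stirling's formula, n! ≈ √(2πn)·(n/e)^n. A minimal sketch checking its accuracy (the helper name stirling is ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Stirling's approximation: n! ~ sqrt(2*pi*n) * (n/e)^n.
double stirling(int n) {
    return sqrt(2.0 * M_PI * n) * pow(n / exp(1.0), n);
}

// For n = 10 the exact value is 3628800 and the relative error
// of the approximation is already below 1%.
double relErr(int n) {
    double exact = 1;
    for (int i = 1; i <= n; ++i) exact *= i;
    return fabs(stirling(n) - exact) / exact;
}
```

Useful in contests to decide whether n! overflows a given type before computing it.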
7.2 Factorial j ^= k; while (!res.empty() && res.back() == 0)


} res.pop_back();
j ^= k;
n 123 4 5 6 7 8 9 10 if (i < j) swap(a[i], a[j]); return res;
n! 1 2 6 24 120 720 5040 40320 362880 3628800 } }
n 11 12 13 14 15 16 17 };
for (int len = 2; len <= n; len <<= 1) {
n! 4.0e7 4.8e8 6.2e9 8.7e10 1.3e12 2.1e13 3.6e14 double ang = (2.0 * PI / len) * (inv? -1 : 1);
n 20 25 30 40 50 100 150 171 cd wlen(cos(ang), sin(ang));
n! 2e18 2e25 3e32 8e47 3e64 9e157 6e262 >DBL MAX 7.4 General purpose numbers
for (int i = 0; i < n; i += len) {
cd w(1); Bernoulli numbers
7.3 Fast Fourier Transform for (int j = 0; j < len / 2; ++j) { EGF of Bernoulli numbers is B(t) = t
(FFT-able).
et −1
cd u = a[i + j];
cd v = a[i + j + len / 2] * w; B[0, . . .] = [1, − 12 , 61 , 0, − 30
1 1
, 0, 42 , . . .]
// Note: a[i + j] = u + v; Sums of powers:
// - When convert double -> int, use my_round(x) which a[i + j + len / 2] = u - v; n m 
handles negative numbers
X 1 X m + 1
w = w * wlen; nm = Bk · (n + 1)m+1−k
// correctly. } i=1
m + 1 k=0 k
// }
// Tested: } Euler-Maclaurin formula for infinite sums:
// - https://open.kattis.com/problems/polymul2 ∞ Z ∞ ∞
// - https://www.spoj.com/problems/TSUM/
X X Bk (k−1)
if (inv) { f (i) = f (x)dx − f (m)
// - (bigint mul) https://www.spoj.com/problems/VFMUL/ for (cd &x : a) { i=m m k=1
k!
// - (bigint mul) https://www.spoj.com/problems/MUL/ x.a /= n; Z ∞ f (m) f ′ (m) f ′′′ (m)
// - (string matching) x.b /= n; ≈ f (x)dx + − + + O(f (5) (m))
https://www.spoj.com/problems/MAXMATCH } m 2 12 720
// } Stirling numbers of the first kind
// FFT {{{ }
// Source: Number of permutations on n items with k cycles.
https://github.com/kth-competitive-programming/kactl/blob/main/content/numerical/FastFourierTransform.h
vector <ll> fft(vector <ll>& a, vector <ll>& b) {
class FFT { vector <cd> fa(a.begin(), a.end()); c(n, k) = c(n − 1, k − 1) + (n − 1)c(n − 1, k), c(0, 0) = 1
public: vector <cd> fb(b.begin(), b.end());
Pn k
struct cd { k=0 c(n, k)x = x(x + 1) . . . (x + n − 1)
double a, b; int n = 1;
cd(double _a = 0, double _b = 0) : a(_a), b(_b) {} c(8, k) = 8, 0, 5040, 13068, 13132, 6769, 1960, 322, 28, 1
while (n < (int) (fa.size() + fb.size())) n <<= 1;
Stirling numbers of the second kind
const cd operator + (const cd &c) const { return fa.resize(n); Partitions of n distinct elements into exactly k groups.
cd(a + c.a, b + c.b); } fb.resize(n);
const cd operator - (const cd &c) const { return vector <cd> fc(n); S(n, k) = S(n − 1, k − 1) + kS(n − 1, k)
cd(a - c.a, b - c.b); }
const cd operator * (const cd &c) const { return dft(fa, false);
cd(a * c.a - b * c.b, a * c.b + b * c.a); }
S(n, 1) = S(n, n) = 1
dft(fb, false);
}; k
!
1 X k−j k
const double PI = acos(-1); for (int i = 0; i < n; ++i) S(n, k) = (−1) jn
fc[i] = fa[i] * fb[i]; k! j=0 j
void dft(vector <cd>& a, bool inv) {
int n = (int) a.size(); dft(fc, true); Eulerian numbers
if (n == 1) Number of permutations π ∈ Sn in which exactly k el-
return; vector <ll> res(n); ements are greater than the previous element. k j:s s.t.
for (int i = 0; i < n; ++i)
for (int i = 1, j = 0; i < n; ++i) { res[i] = 1LL * (round(fc[i].a) > 0.5); π(j) > π(j + 1), k + 1 j:s s.t. π(j) ≥ j, k j:s s.t. π(j) > j.
int k = n >> 1;
for (; j & k; k >>= 1) { E(n, k) = (n − k)E(n − 1, k − 1) + (k + 1)E(n − 1, k)
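The Catalan convolution recurrence C_{n+1} = Σ C_i·C_{n−i} with C_0 = 1 translates directly into an O(n²) table; a minimal sketch (the helper name catalan is ours, not part of the notebook):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Catalan numbers via C_0 = 1, C_k = sum_{i=0..k-1} C_i * C_{k-1-i}.
// Fits in long long up to about C_35; take values mod a prime beyond that.
vector<long long> catalan(int n) {
    vector<long long> C(n + 1, 0);
    C[0] = 1;
    for (int k = 1; k <= n; ++k)
        for (int i = 0; i < k; ++i)
            C[k] += C[i] * C[k - 1 - i];
    return C;
}
```

catalan(10) reproduces the listed sequence 1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862, 16796.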
E(n, 0) = E(n, n − 1) = 1 7.6 Multinomial while(lim < p) {


k
! double angle = 2 * PI / (1 << (lim + 1));
j n+1 for(int i = 1 << (lim - 1); i < (1 << lim); i++) {
X
E(n, k) = (−1) (k + 1 − j)n /** roots[i << 1] = roots[i];
j
j=0 * Description: Computes $\displaystyle \binom{k_1 + double angle_i = angle * (2 * i + 1 - (1 << lim));
Bell numbers \dots + k_n}{k_1, k_2, \dots, k_n} = \frac{(\sum roots[(i << 1) + 1] = base(cos(angle_i),
k_i)!}{k_1!k_2!...k_n!}$. sin(angle_i));
Total number of partitions of n distinct elements. B(n) = * Status: Tested on kattis:lexicography }
1, 1, 2, 5, 15, 52, 203, 877, 4140, 21147, . . . . For p prime, */ lim++;
#pragma once }
B(pm + n) ≡ mB(n) + B(n + 1) (mod p) }
long long multinomial(vector<int>& v) { void fft(vector<base> &a, int n = -1) {
Labeled unrooted trees long long c = 1, m = v.empty() ? 1 : v[0]; if(n == -1) n = a.size();
# on n vertices: nn−2 for (long long i = 1; i < v.size(); i++) { assert((n & (n - 1)) == 0);
# on k existing trees of size ni : n1 n2 · · · nk nk−2 for (long long j = 0; j < v[i]; j++) { int zeros = __builtin_ctz(n);
# with degrees di : (n − 2)!/((d1 − 1)! · · · (dn − 1)!) c = c * ++m / (j + 1); ensure_base(zeros);
} int shift = lim - zeros;
Catalan numbers }
! ! ! for(int i = 0; i < n; i++) if(i < (rev[i] >> shift))
return c; swap(a[i], a[rev[i] >> shift]);
1 2n 2n 2n (2n)!
Cn = = − = } for(int k = 1; k < n; k <<= 1) {
n+1 n n n+1 (n + 1)!n! for(int i = 0; i < n; i += 2 * k) {
for(int j = 0; j < k; j++) {
2(2n + 1) X base z = a[i + j + k] * roots[j + k];
C0 = 1, Cn+1 = Cn , Cn+1 = Ci Cn−i 7.7 Number Theoretic Transform
n+2 a[i + j + k] = a[i + j] - z;
a[i + j] = a[i + j] + z;
Cn = 1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862, 16796, 58786, . . .
}
[noitemsep]sub-diagonal monotone paths in an n × n #include<bits/stdc++.h> }
using namespace std; }
grid. strings with n pairs of parenthesis, correctly
}
nested. binary trees with with n+1 leaves (0 or 2 chil- const int N = 3e5 + 9, mod = 998244353; //eq = 0: 4 FFTs in total
dren). ordered trees with n+1 vertices. ways a convex //eq = 1: 3 FFTs in total
polygon with n + 2 sides can be cut into triangles by struct base { vector<int> multiply(vector<int> &a, vector<int> &b,
connecting vertices with straight lines. permutations double x, y; int eq = 0) {
base() { x = y = 0; } int need = a.size() + b.size() - 1;
of [n] with no 3-term increasing subseq. base(double x, double y): x(x), y(y) { } int p = 0;
}; while((1 << p) < need) p++;
7.5 Lucas Theorem inline base operator + (base a, base b) { return ensure_base(p);
base(a.x + b.x, a.y + b.y); } int sz = 1 << p;
For non-negative integers m and n and a prime p, the fol- inline base operator - (base a, base b) { return vector<base> A, B;
lowing congruence relation holds: : base(a.x - b.x, a.y - b.y); } if(sz > (int)A.size()) A.resize(sz);
inline base operator * (base a, base b) { return for(int i = 0; i < (int)a.size(); i++) {
base(a.x * b.x - a.y * b.y, a.x * b.y + a.y *
! k
!
m Y mi int x = (a[i] % mod + mod) % mod;
≡ (mod p), b.x); } A[i] = base(x & ((1 << 15) - 1), x >> 15);
n i=0
ni inline base conj(base a) { return base(a.x, -a.y); } }
int lim = 1; fill(A.begin() + a.size(), A.begin() + sz, base{0,
where : vector<base> roots = {{0, 0}, {1, 0}}; 0});
vector<int> rev = {0, 1}; fft(A, sz);
m = mk pk + mk−1 pk−1 + · · · + m1 p + m0 , const double PI = acosl(- 1.0); if(sz > (int)B.size()) B.resize(sz);
void ensure_base(int p) { if(eq) copy(A.begin(), A.begin() + sz, B.begin());
and : if(p <= lim) return; else {
n = nk pk + nk−1 pk−1 + · · · + n1 p + n0 rev.resize(1 << p); for(int i = 0; i < (int)b.size(); i++) {
for(int i = 0; i < (1 << p); i++) rev[i] = (rev[i >> int x = (b[i] % mod + mod) % mod;
are the base p expansions
 of m and n respectively. This uses 1] >> 1) + ((i & 1) << (p - 1)); B[i] = base(x & ((1 << 15) - 1), x >> 15);
the convention that m n
= 0 if m ≤ n. roots.resize(1 << p);
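Lucas' theorem above reduces C(m, n) mod p to a product of binomials of the base-p digits. A sketch for a small prime p (helper names binom_small and lucas are ours; the digit binomial uses a Fermat inverse since p is prime):

```cpp
#include <bits/stdc++.h>
using namespace std;

// C(m, n) mod p for 0 <= m, n < p, p prime (O(n) per digit).
long long binom_small(long long m, long long n, long long p) {
    if (n < 0 || n > m) return 0;
    long long num = 1, den = 1;
    for (long long i = 0; i < n; ++i) {
        num = num * ((m - i) % p) % p;
        den = den * ((i + 1) % p) % p;
    }
    long long inv = 1, e = p - 2, b = den;   // Fermat inverse of den
    while (e) { if (e & 1) inv = inv * b % p; b = b * b % p; e >>= 1; }
    return num * inv % p;
}

// Lucas: multiply the digit binomials of m and n written in base p.
long long lucas(long long m, long long n, long long p) {
    long long res = 1;
    while (m || n) {
        res = res * binom_small(m % p, n % p, p) % p;
        m /= p; n /= p;
    }
    return res;
}
```

Note the result is 0 as soon as any digit of n exceeds the corresponding digit of m, exactly the convention stated above.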
} } int permToInt(vector<int>& v) {
fill(B.begin() + b.size(), B.begin() + sz, base{0, vector<int> ans = pow(a, n / 2); int use = 0, i = 0, r = 0;
0}); int res = 0; for(int x : v) r = r * ++i +
fft(B, sz); for(auto x: ans) res = (res + 1LL * x * x % mod) % __builtin_popcount(use & -(1<<x)),
} mod; use |= 1 << x; // (note:
double ratio = 0.25 / sz; cout << res << ’\n’; minus, not ~!)
base r2(0, - 1), r3(ratio, 0), r4(0, - ratio), r5(0, return 0; return r;
1); } }
for(int i = 0; i <= (sz >> 1); i++) { //https://codeforces.com/contest/1096/problem/G
int j = (sz - i) & (sz - 1);
base a1 = (A[i] + conj(A[j])), a2 = (A[i] -
conj(A[j])) * r2; 7.10 Sigma Function
base b1 = (B[i] + conj(B[j])) * r3, b2 = (B[i] - 7.8 Others
conj(B[j])) * r4; The Sigma Function is defined as:
if(i != j) { Cycles Let gS (n) be the number of n-permutations whose
base c1 = (A[j] + conj(A[i])), c2 = (A[j] - cycle lengths all belong to the set S. Then σx (n) =
X
dx
conj(A[i])) * r2;

! d|n
base d1 = (B[j] + conj(B[i])) * r3, d2 = (B[j] - X xn X xn
conj(B[i])) * r4; gS (n) = exp when x = 0 is called the divisor function, that counts
A[i] = c1 * d1 + c2 * d2 * r5; n=0
n! n∈S
n
B[i] = c1 * d2 + c2 * d1; the number of positive divisors of n.
} Derangements Permutations of a set such that none Now, we are interested in find
A[j] = a1 * b1 + a2 * b2 * r5; of the elements appear in their original position.
B[j] = a1 * b2 + a2 * b1; X
}   σ0 (d)
n!
fft(A, sz); fft(B, sz); D(n) = (n−1)(D(n−1)+D(n−2)) = nD(n−1)+(−1)n = d|n
vector<int> res(need); e
for(int i = 0; i < need; i++) { If n is written as prime factorization:
long long aa = A[i].x + 0.5; Burnside’s lemma Given a group G of symmetries and
k
long long bb = B[i].x + 0.5; a set X, the number of elements of X up to symmetry equals Y e
long long cc = A[i].y + 0.5; n= Pi k
res[i] = (aa + ((bb % mod) << 15) + ((cc % mod) << 1 X g i=1
|X |,
30))%mod; |G| g∈G We can demonstrate that:
}
return res;
} where X g are the elements fixed by g (g.x = x). X k
Y
If f (n) counts “configurations” (of some sort) of length σ0 (d) = g(ek + 1)
vector<int> pow(vector<int>& a, int p) { n, we can ignore rotational symmetry using G = Zn to get d|n i=1
vector<int> res;
res.emplace_back(1); n−1 where g(x) is the sum of the first x positive numbers:
1X 1X
while(p) { g(n) = f (gcd(n, k)) = f (k)ϕ(n/k).
if(p & 1) res = multiply(res, a); n n
k=0 k|n g(x) = (x ∗ (x + 1))/2
a = multiply(a, a, 1);
p >>= 1;
} 7.9 Permutation To Int
return res; 8 Misc
}
int main() { /** 8.1 Dates
int n, k; cin >> n >> k; * Description: Permutation -> integer conversion. (Not
vector<int> a(10, 0); order preserving.)
while(k--) { * Integer -> permutation can use a lookup table. //
int m; cin >> m; * Time: O(n) // Time - Leap years
a[m] = 1; **/ //
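The sigma-function identities above all work digit-by-digit on the prime factorization. A sketch of the simplest case, σ₀(n) = Π(eᵢ + 1), via trial division (the helper name sigma0 is ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// sigma_0(n): number of divisors of n.
// From n = p1^e1 * ... * pk^ek it equals prod (ei + 1).
long long sigma0(long long n) {
    long long cnt = 1;
    for (long long p = 2; p * p <= n; ++p) {
        int e = 0;
        while (n % p == 0) { n /= p; ++e; }
        cnt *= e + 1;
    }
    if (n > 1) cnt *= 2;  // one leftover prime factor with exponent 1
    return cnt;
}
```

Replacing the factor (eᵢ + 1) with g(eᵢ + 1) = (eᵢ + 1)(eᵢ + 2)/2 gives the Σ_{d|n} σ₀(d) formula stated above.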
// A[i] has the accumulated number of days from months 8.2 Debugging Tricks • for (int x = m; x; ) { --x &= m; ... } loops
previous to i over all subset masks of m (except m itself).
const int A[13] = { 0, 0, 31, 59, 90, 120, 151, 181, • signal(SIGSEGV, [](int) { _Exit(0); }); con-
212, 243, 273, 304, 334 }; verts segfaults into Wrong Answers. Similarly one • c = x&-x, r = x+c; (((r^x) >> 2)/c) | r is the
// same as A, but for a leap year
can catch SIGABRT (assertion failures) and SIGFPE next number after x with the same number of bits
const int B[13] = { 0, 0, 31, 60, 91, 121, 152, 182,
213, 244, 274, 305, 335 }; (zero divisions). _GLIBCXX_DEBUG failures generate set.
// returns number of leap years up to, and including, y SIGABRT (or SIGSEGV on gcc 5.4.0 apparently).
int leap_years(int y) { return y / 4 - y / 100 + y / • rep(b,0,K) rep(i,0,(1 << K))
400; }
• feenableexcept(29); kills the program on NaNs (1), if (i & 1 << b) D[i] += D[i^(1 << b)]; com-
bool is_leap(int y) { return y % 400 == 0 || (y % 4 == putes all sums of subsets.
0 && y % 100 != 0); } 0-divs (4), infinities (8) and denormals (16).
// number of days in blocks of years
const int p400 = 400*365 + leap_years(400); 8.4.2 Pragmas
const int p100 = 100*365 + leap_years(100); 8.3 Interval Container
const int p4 = 4*365 + 1; • #pragma GCC optimize ("Ofast") will make GCC auto-
const int p1 = 365; vectorize loops and optimizes floating points better.
int date_to_days(int d, int m, int y) set<pii>::iterator addInterval(set<pii>& is, int L, int
R) {
{ • #pragma GCC target ("avx2") can double performance
return (y - 1) * 365 + leap_years(y - 1) + if (L == R) return is.end();
auto it = is.lower_bound({L, R}), before = it; of vectorized code, but causes crashes on old machines.
(is_leap(y) ? B[m] : A[m]) + d;
} while (it != is.end() && it->first <= R) {
void days_to_date(int days, int &d, int &m, int &y) R = max(R, it->second); • #pragma GCC optimize ("trapv") kills the program on
{ before = it = is.erase(it); integer overflows (but is really slow).
bool top100; // are we in the top 100 years of a 400 }
block? if (it != is.begin() && (--it)->second >= L) {
bool top4; // are we in the top 4 years of a 100 L = min(L, it->first); 8.5 Ternary Search
block? R = max(R, it->second);
bool top1; // are we in the top year of a 4 block? is.erase(it);
} template<class F>
return is.insert(before, {L,R}); int ternSearch(int a, int b, F f) {
y = 1; assert(a <= b);
top100 = top4 = top1 = false; }
while (b - a >= 5) {
void removeInterval(set<pii>& is, int L, int R) { int mid = (a + b) / 2;
y += ((days-1) / p400) * 400; if (f(mid) < f(mid+1)) a = mid; // (A)
d = (days-1) % p400 + 1; if (L == R) return;
auto it = addInterval(is, L, R); else b = mid+1;
auto r2 = it->second; }
if (d > p100*3) top100 = true, d -= 3*p100, y += 300; rep(i,a+1,b+1) if (f(a) < f(i)) a = i; // (B)
else y += ((d-1) / p100) * 100, d = (d-1) % p100 + 1; if (it->first == L) is.erase(it);
else (int&)it->second = L; return a;
if (R != r2) is.emplace(R, r2); }
if (d > p4*24) top4 = true, d -= 24*p4, y += 24*4;
else y += ((d-1) / p4) * 4, d = (d-1) % p4 + 1; }

if (d > p1*3) top1 = true, d -= p1*3, y += 3;


else y += (d-1) / p1, d = (d-1) % p1 + 1; 9 Number Theory
8.4 Optimization Tricks
const int *ac = top1 && (!top4 || top100) ? B : A; 9.1 Chinese Remainder Theorem
for (m = 1; m < 12; ++m) if (d <= ac[m + 1]) break; __builtin_ia32_ldmxcsr(40896); disables denormals
d -= ac[m];
} (which make floats 20x slower near their minimum value). /**
* Chinese remainder theorem.
* Find z such that z % x[i] = a[i] for all i.
8.4.1 Bit hacks * */
long long crt(vector<long long> &a, vector<long long>
• x & -x is the least bit in x. &x) {
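The submask loop from the bit-hacks list can be exercised directly; this sketch collects every proper submask of m in descending order (the helper name submasks is ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// All submasks of m except m itself, descending; ends with 0.
// Uses the bit hack: for (int x = m; x; ) { --x &= m; ... }
vector<int> submasks(int m) {
    vector<int> subs;
    for (int x = m; x; ) {
        --x &= m;           // next smaller submask of m
        subs.push_back(x);
    }
    return subs;
}
```

For m = 0b1011 this yields 10, 9, 8, 3, 2, 1, 0; iterating it over all m costs O(3^K) total for K-bit masks, which is why it pairs well with the subset-sum DP in the last bullet.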
long long z = 0; fill(fB.begin(), fB.end(), 0); }


long long n = 1; for (int i = 0; i < a.size(); i++) fA[i] = a[i];
for (int i = 0; i < x.size(); ++i) for (int i = 0; i < b.size(); i++) fB[i] = b[i]; void shift_solution(long long &x, long long &y, long
n *= x[i]; LL prime = nth_roots_unity[times].first; long a, long long b,
LL inv_modulo = mod_inv(modulo % prime, prime); long long cnt) {
for (int i = 0; i < a.size(); ++i) { LL normalize = mod_inv(N, prime); x += cnt * b;
long long tmp = (a[i] * (n / x[i])) % n; ntfft(fA, 1, nth_roots_unity[times]); y -= cnt * a;
tmp = (tmp * mod_inv(n / x[i], x[i])) % n; ntfft(fB, 1, nth_roots_unity[times]); }
z = (z + tmp) % n; for (int i = 0; i < N; i++) fC[i] = (fA[i] * fB[i])
} % prime; long long find_all_solutions(long long a, long long b,
ntfft(fC, -1, nth_roots_unity[times]); long long c,
return (z + n) % n; for (int i = 0; i < N; i++) { long long minx, long long maxx, long long miny,
} LL curr = (fC[i] * normalize) % prime; long long maxy) {
LL k = (curr - (ans[i] % prime) + prime) % prime; long long x, y, g;
k = (k * inv_modulo) % prime; if (!find_any_solution(a, b, c, x, y, g)) return 0;
ans[i] += modulo * k; a /= g;
9.2 Convolution } b /= g;
modulo *= prime;
} long long sign_a = a > 0 ? +1 : -1;
typedef long long int LL; return ans; long long sign_b = b > 0 ? +1 : -1;
typedef pair<LL, LL> PLL; }
shift_solution(x, y, a, b, (minx - x) / b);
inline bool is_pow2(LL x) { if (x < minx) shift_solution(x, y, a, b, sign_b);
return (x & (x-1)) == 0; if (x > maxx) return 0;
} 9.3 Diophantine Equations long long lx1 = x;
inline int ceil_log2(LL x) { shift_solution(x, y, a, b, (maxx - x) / b);
int ans = 0; long long gcd(long long a, long long b, long long &x, if (x > maxx) shift_solution(x, y, a, b, -sign_b);
--x; long long &y) { long long rx1 = x;
while (x != 0) { if (a == 0) {
x >>= 1; x = 0; shift_solution(x, y, a, b, -(miny - y) / a);
ans++; y = 1; if (y < miny) shift_solution(x, y, a, b, -sign_a);
} return b; if (y > maxy) return 0;
return ans; } long long lx2 = x;
} long long x1, y1;
long long d = gcd(b % a, a, x1, y1); shift_solution(x, y, a, b, -(maxy - y) / a);
/* Returns the convolution of the two given vectors in x = y1 - (b / a) * x1; if (y > maxy) shift_solution(x, y, a, b, sign_a);
time proportional to n*log(n). y = x1; long long rx2 = x;
* The number of roots of unity to use nroots_unity return d;
must be set so that the product of the first } if (lx2 > rx2) swap(lx2, rx2);
* nroots_unity primes of the vector nth_roots_unity is long long lx = max(lx1, lx2);
greater than the maximum value of the bool find_any_solution(long long a, long long b, long long long rx = min(rx1, rx2);
* convolution. Never use sizes of vectors bigger than long c, long long &x0,
2^24, if you need to change the values of long long &y0, long long &g) { if (lx > rx) return 0;
* the nth roots of unity to appropriate primes for g = gcd(abs(a), abs(b), x0, y0); return (rx - lx) / abs(b) + 1;
those sizes. if (c % g) { }
*/ return false;
vector<LL> convolve(const vector<LL> &a, const }
vector<LL> &b, int nroots_unity = 2) {
int N = 1 << ceil_log2(a.size() + b.size()); x0 *= c / g; 9.4 Discrete Logarithm
vector<LL> ans(N,0), fA(N), fB(N), fC(N); y0 *= c / g;
LL modulo = 1; if (a < 0) x0 = -x0;
for (int times = 0; times < nroots_unity; times++) { if (b < 0) y0 = -y0;
fill(fA.begin(), fA.end(), 0); return true; // Computes x which a ^ x = b mod n.
for (int i = 3; i <= S; i += 2) if (!sieve[i]) { last = next;


long long d_log(long long a, long long b, long long n) { cp.push_back({i, i * i / 2}); next = mod_mul(last, last, n);
long long m = ceil(sqrt(n)); for (int j = i * i; j <= S; j += 2 * i) if (next == 1) {
long long aj = 1; sieve[j] = 1; return last != n - 1;
map<long long, long long> M; } }
for (int i = 0; i < m; ++i) { for (int L = 1; L <= R; L += S) { }
if (!M.count(aj)) array<bool, S> block{}; return next != 1;
M[aj] = i; for (auto &[p, idx] : cp) }
aj = (aj * a) % n; for (int i=idx; i < S+L; idx =
} (i+=p)) block[i-L] = 1;
rep(i,0,min(S, R - L)) // Checks if a number is prime with prob 1 - 1 / (2 ^
long long coef = mod_pow(a, n - 2, n); if (!block[i]) pr.push_back((L + it)
coef = mod_pow(coef, m, n); i) * 2 + 1); // D(miller_rabin(99999999999999997LL) == 1);
// coef = a ^ (-m) } // D(miller_rabin(9999999999971LL) == 1);
long long gamma = b; for (int i : pr) isPrime[i] = 1; // D(miller_rabin(7907) == 1);
for (int i = 0; i < m; ++i) { return pr; bool miller_rabin(long long n, int it = rounds) {
if (M.count(gamma)) { } if (n <= 1) return false;
return i * m + M[gamma]; if (n == 2) return true;
} else { if (n % 2 == 0) return false;
gamma = (gamma * coef) % n; for (int i = 0; i < it; ++i) {
} 9.7 Highest Exponent Factorial long long a = rand() % (n - 1) + 1;
} if (witness(a, n)) {
return -1; return false;
} int highest_exponent(int p, const int &n){ }
int ans = 0; }
int t = p; return true;
while(t <= n){ }
9.5 Ext Euclidean ans += n/t;
t*=p;
}
void ext_euclid(long long a, long long b, long long &x, return ans;
long long &y, long long &g) { } 9.9 Mod Integer
x = 0, y = 1, g = b;
long long m, n, q, r;
for (long long u = 1, v = 0; a != 0; g = a, a = r) { template<class T, T mod>
q = g / a, r = g % a; 9.8 Miller - Rabin struct mint_t {
m = x - u * q, n = y - v * q; T val;
x = u, y = v, u = m, v = n; mint_t() : val(0) {}
} const int rounds = 20; mint_t(T v) : val(v % mod) {}
}
// checks whether a is a witness that n is not prime, 1 mint_t operator + (const mint_t& o) const {
< a < n return (val + o.val) % mod;
bool witness(long long a, long long n) { }
9.6 Fast Eratosthenes // check as in Miller Rabin Primality Test described mint_t operator - (const mint_t& o) const {
long long u = n - 1; return (val - o.val) % mod;
int t = 0; }
const int LIM = 1e6; while (u % 2 == 0) { mint_t operator * (const mint_t& o) const {
bitset<LIM> isPrime; t++; return (val * o.val) % mod;
vi eratosthenes() { u >>= 1; }
const int S = (int)round(sqrt(LIM)), R = LIM / } };
2; long long next = mod_pow(a, u, n);
vi pr = {2}, sieve(S+1); if (next == 1) return false; typedef mint_t<long long, 998244353> mint;
pr.reserve(int(LIM/log(LIM)*1.1)); long long last;
vector<pii> cp; for (int i = 0; i < t; ++i) {
9.10 Mod Inv if (j < i) swap(a[i], a[j]);


/* The following vector of pairs contains pairs (prime, }
generator) }
long long mod_inv(long long n, long long m) { * where the prime has an Nth root of unity for N being
long long x, y, gcd; a power of two.
ext_euclid(n, m, x, y, gcd); * The generator is a number g s.t g^(p-1)=1 (mod p)
if (gcd != 1) * but is different from 1 for all smaller powers */ 9.14 Pollard Rho Factorize
return 0; vector<PLL> nth_roots_unity {
return (x + m) % m; {1224736769,330732430},{1711276033,927759239},{167772161,167489322},
} long long pollard_rho(long long n) {
{469762049,343261969},{754974721,643797295},{1107296257,883865065}};
long long x, y, i = 1, k = 2, d;
PLL ext_euclid(LL a, LL b) { x = y = rand() % n;
if (b == 0) while (1) {
9.11 Mod Mul return make_pair(1,0); ++i;
pair<LL,LL> rc = ext_euclid(b, a % b); x = mod_mul(x, x, n);
return make_pair(rc.second, rc.first - (a / b) * x += 2;
// Computes (a * b) % mod rc.second); if (x >= n) x -= n;
long long mod_mul(long long a, long long b, long long } if (x == y) return 1;
mod) { d = __gcd(abs(x - y), n);
long long x = 0, y = a % mod; //returns -1 if there is no unique modular inverse if (d != 1) return d;
while (b > 0) { LL mod_inv(LL x, LL modulo) { if (i == k) {
if (b & 1) PLL p = ext_euclid(x, modulo); y = x;
x = (x + y) % mod; if ( (p.first * x + p.second * modulo) != 1 ) k *= 2;
y = (y * 2) % mod; return -1; }
b /= 2; return (p.first+modulo) % modulo; }
} } return 1;
return x % mod; }
}
//Number theory fft. The size of a must be a power of 2
void ntfft(vector<LL> &a, int dir, const PLL // Returns a list with the prime divisors of n
&root_unity) { vector<long long> factorize(long long n) {
9.12 Mod Pow int n = a.size(); vector<long long> ans;
LL prime = root_unity.first; if (n == 1)
LL basew = mod_pow(root_unity.second, (prime-1) / n, return ans;
// Computes ( a ^ exp ) % mod. prime); if (miller_rabin(n)) {
long long mod_pow(long long a, long long exp, long long if (dir < 0) basew = mod_inv(basew, prime); ans.push_back(n);
mod) { for (int m = n; m >= 2; m >>= 1) { } else {
long long ans = 1; int mh = m >> 1; long long d = 1;
while (exp > 0) { LL w = 1; while (d == 1)
if (exp & 1) for (int i = 0; i < mh; i++) { d = pollard_rho(n);
ans = mod_mul(ans, a, mod); for (int j = i; j < n; j += m) { vector<long long> dd = factorize(d);
a = mod_mul(a, a, mod); int k = j + mh; ans = factorize(n / d);
exp >>= 1; LL x = (a[j] - a[k] + prime) % prime; for (int i = 0; i < dd.size(); ++i)
} a[j] = (a[j] + a[k]) % prime; ans.push_back(dd[i]);
return ans; a[k] = (w * x) % prime; }
} } return ans;
w = (w * basew) % prime; }
}
basew = (basew * basew) % prime;
9.13 Number Theoretic Transform }
int i = 0; 9.15 Primes
for (int j = 1; j < n - 1; j++) {
typedef long long int LL; for (int k = n >> 1; k > (i ^= k); k >>= 1);
typedef pair<LL, LL> PLL; namespace primes {
const int MP = 100001; ans.emplace_back(primes[i], expo); 10 Probability and Statistics


bool sieve[MP]; }
long long primes[MP]; } 10.1 Continuous Distributions
int num_p;
void fill_sieve() { if (n > 1) { 10.1.1 Uniform distribution
num_p = 0; ans.emplace_back(n, 1);
sieve[0] = sieve[1] = true; } If the probability density function is constant between a and
for (long long i = 2; i < MP; ++i) { return ans; b and 0 elsewhere it is U(a, b), a < b.
    if (!sieve[i]) {
      primes[num_p++] = i;
      for (long long j = i * i; j < MP; j += i)
        sieve[j] = true;
    }
  }
}

// Finds prime numbers between a and b, using basic
// primes up to sqrt(b).
// a must be greater than 1.
vector<long long> seg_sieve(long long a, long long b) {
  long long ant = a;
  a = max(a, 3LL);
  vector<bool> pmap(b - a + 1);
  long long sqrt_b = sqrt(b);
  for (int i = 0; i < num_p; ++i) {
    long long p = primes[i];
    if (p > sqrt_b) break;
    long long j = (a + p - 1) / p;
    for (long long v = (j == 1) ? p + p : j * p; v <= b; v += p) {
      pmap[v - a] = true;
    }
  }
  vector<long long> ans;
  if (ant == 2) ans.push_back(2);
  int start = a % 2 ? 0 : 1;
  for (int i = start, I = b - a + 1; i < I; i += 2)
    if (pmap[i] == false)
      ans.push_back(a + i);
  return ans;
}

vector<pair<int, int>> factor(int n) {
  vector<pair<int, int>> ans;
  if (n == 0) return ans;
  for (int i = 0; primes[i] * primes[i] <= n; ++i) {
    if ((n % primes[i]) == 0) {
      int expo = 0;
      while ((n % primes[i]) == 0) {
        expo++;
        n /= primes[i];
      }
      ans.push_back({primes[i], expo});
    }
  }
  if (n > 1) ans.push_back({n, 1});
  return ans;
}

9.16 Totient Sieve

for (int i = 1; i < MN; i++)
  phi[i] = i;
for (int i = 2; i < MN; i++)
  if (!sieve[i]) // is prime
    for (int j = i; j < MN; j += i)
      phi[j] -= phi[j] / i;

9.17 Totient

long long totient(long long n) {
  if (n == 1) return 0;
  long long ans = n;
  for (int i = 0; primes[i] * primes[i] <= n; ++i) {
    if ((n % primes[i]) == 0) {
      while ((n % primes[i]) == 0) n /= primes[i];
      ans -= ans / primes[i];
    }
  }
  if (n > 1) {
    ans -= ans / n;
  }
  return ans;
}

10.1.1 Uniform distribution

If the random variable X is uniformly distributed between a and b, then

f(x) = 1/(b − a) for a < x < b, and f(x) = 0 otherwise

µ = (a + b)/2, σ² = (b − a)²/12

10.1.2 Exponential distribution

The time between events in a Poisson process is Exp(λ), λ > 0.

f(x) = λe^(−λx) for x ≥ 0, and f(x) = 0 for x < 0

µ = 1/λ, σ² = 1/λ²

10.1.3 Normal distribution

Most real random values with mean µ and variance σ² are well described by N(µ, σ²), σ > 0.

f(x) = (1 / √(2πσ²)) e^(−(x−µ)² / (2σ²))

If X1 ∼ N(µ1, σ1²) and X2 ∼ N(µ2, σ2²) then

aX1 + bX2 + c ∼ N(aµ1 + bµ2 + c, a²σ1² + b²σ2²)

10.2 Discrete Distributions

10.2.1 Binomial distribution

The number of successes in n independent yes/no experiments, each of which yields success with probability p, is Bin(n, p), n = 1, 2, . . . , 0 ≤ p ≤ 1.

p(k) = (n choose k) p^k (1 − p)^(n−k)

µ = np, σ² = np(1 − p)

Bin(n, p) is approximately Po(np) for small p.
PTIT.Nutriboost 31
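The binomial moments µ = np and σ² = np(1 − p) can be checked numerically by summing the pmf directly. The following sketch is not from the notebook; the function name `binomial_moments` and the parameters are illustrative.

```cpp
#include <cmath>
#include <utility>
#include <vector>
using namespace std;

// Mean and variance of Bin(n, p) computed by summing the pmf,
// to check against the closed forms mu = np, var = np(1-p).
pair<double, double> binomial_moments(int n, double p) {
    // Pascal's triangle row: C[k] = C(n, k)
    vector<double> C(n + 1, 0.0);
    C[0] = 1;
    for (int i = 1; i <= n; i++)
        for (int k = i; k >= 1; k--) C[k] += C[k - 1];
    double mean = 0, second = 0;
    for (int k = 0; k <= n; k++) {
        double pk = C[k] * pow(p, k) * pow(1 - p, n - k);
        mean += k * pk;
        second += (double)k * k * pk;
    }
    return {mean, second - mean * mean};
}
```

For n = 10, p = 0.3 this gives mean 3 and variance 2.1, matching np and np(1 − p).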

10.2.2 First success distribution

The number of trials needed to get the first success in independent yes/no experiments, each of which yields success with probability p, is Fs(p), 0 ≤ p ≤ 1.

p(k) = p(1 − p)^(k−1), k = 1, 2, . . .

µ = 1/p, σ² = (1 − p)/p²

10.2.3 Poisson distribution

The number of events occurring in a fixed period of time t if these events occur with a known average rate κ and independently of the time since the last event is Po(λ), λ = tκ.

p(k) = e^(−λ) λ^k / k!, k = 0, 1, 2, . . .

µ = λ, σ² = λ

10.3 Probability Theory

Let X be a discrete random variable with probability pX(x) of assuming the value x. It will then have an expected value (mean) µ = E(X) = Σx x·pX(x) and variance σ² = V(X) = E(X²) − (E(X))² = Σx (x − E(X))² pX(x), where σ is the standard deviation. If X is instead continuous it will have a probability density function fX(x) and the sums above will instead be integrals with pX(x) replaced by fX(x).

Expectation is linear:

E(aX + bY) = aE(X) + bE(Y)

For independent X and Y,

V(aX + bY) = a²V(X) + b²V(Y)

11 Strings

11.1 Hashing

struct H {
  typedef uint64_t ull;
  ull x; H(ull x=0) : x(x) {}
#define OP(O,A,B) H operator O(H o) { ull r = x; asm \
  (A "addq %%rdx, %0\n adcq $0,%0" : "+a"(r) : B); return r; }
  OP(+,,"d"(o.x)) OP(*,"mul %1\n", "r"(o.x) : "rdx")
  H operator-(H o) { return *this + ~o.x; }
  ull get() const { return x + !~x; }
  bool operator==(H o) const { return get() == o.get(); }
  bool operator<(H o) const { return get() < o.get(); }
};
static const H C = (ll)1e11+3; // (order ~ 3e9; random also ok)

struct HashInterval {
  vector<H> ha, pw;
  HashInterval(string& str) : ha(sz(str)+1), pw(ha) {
    pw[0] = 1;
    rep(i,0,sz(str))
      ha[i+1] = ha[i] * C + str[i],
      pw[i+1] = pw[i] * C;
  }
  H hashInterval(int a, int b) { // hash [a, b)
    return ha[b] - ha[a] * pw[b - a];
  }
};

vector<H> getHashes(string& str, int length) {
  if (sz(str) < length) return {};
  H h = 0, pw = 1;
  rep(i,0,length)
    h = h * C + str[i], pw = pw * C;
  vector<H> ret = {h};
  rep(i,length,sz(str)) {
    ret.push_back(h = h * C + str[i] - pw * str[i-length]);
  }
  return ret;
}

H hashString(string& s){H h{}; for(char c:s) h=h*C+c; return h;}
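The `hashInterval` identity (hash of [a, b) equals ha[b] − ha[a]·C^(b−a)) works the same way with ordinary modular arithmetic as with the 2^64−1 trick used above. A hypothetical sketch; the names `RollingHash`, `B`, and `MOD` are illustrative choices, not from the notebook:

```cpp
#include <string>
#include <vector>
using namespace std;

const long long MOD = 1000000007LL; // illustrative modulus
const long long BASE = 131;         // illustrative base

// Prefix hashes: ha[i] = hash of s[0, i), pw[i] = BASE^i (mod MOD).
struct RollingHash {
    vector<long long> ha, pw;
    RollingHash(const string& s) : ha(s.size() + 1, 0), pw(s.size() + 1, 1) {
        for (size_t i = 0; i < s.size(); i++) {
            ha[i + 1] = (ha[i] * BASE + s[i]) % MOD;
            pw[i + 1] = pw[i] * BASE % MOD;
        }
    }
    // hash of the half-open interval [a, b)
    long long get(int a, int b) const {
        return ((ha[b] - ha[a] * pw[b - a]) % MOD + MOD) % MOD;
    }
};
```

Two equal substrings always get equal hashes; distinct substrings collide only with probability about 1/MOD.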

11.2 Incremental Aho Corasick

class IncrementalAhoCorasic {
  static const int Alphabets = 26;
  static const int AlphabetBase = 'a';
  struct Node {
    Node *fail;
    Node *next[Alphabets];
    int sum;
    Node() : fail(NULL), next{}, sum(0) { }
  };
  struct String {
    string str;
    int sign;
  };
public:
  //totalLen = sum of (len + 1)
  void init(int totalLen) {
    nodes.resize(totalLen);
    nNodes = 0;
    strings.clear();
    roots.clear();
    sizes.clear();
    que.resize(totalLen);
  }

  void insert(const string &str, int sign) {
    strings.push_back(String{ str, sign });
    roots.push_back(nodes.data() + nNodes);
    sizes.push_back(1);
    nNodes += (int)str.size() + 1;
    auto check = [&]() { return sizes.size() > 1 &&
        sizes.end()[-1] == sizes.end()[-2]; };
    if(!check())
      makePMA(strings.end() - 1, strings.end(), roots.back(), que);
    while(check()) {
      int m = sizes.back();
      roots.pop_back();
      sizes.pop_back();
      sizes.back() += m;
      if(!check())
        makePMA(strings.end() - m * 2, strings.end(), roots.back(), que);
    }
  }

  int match(const string &str) const {
    int res = 0;
    for(const Node *t : roots)
      res += matchPMA(t, str);
    return res;
  }

private:
  static void makePMA(vector<String>::const_iterator begin,
      vector<String>::const_iterator end, Node *nodes, vector<Node*> &que) {
    int nNodes = 0;
    Node *root = new(&nodes[nNodes ++]) Node();
    for(auto it = begin; it != end; ++ it) {
      Node *t = root;
      for(char c : it->str) {
        Node *&n = t->next[c - AlphabetBase];
        if(n == nullptr)
          n = new(&nodes[nNodes ++]) Node();
        t = n;
      }
      t->sum += it->sign;
    }
    int qt = 0;
    for(Node *&n : root->next) {
      if(n != nullptr) {
        n->fail = root;
        que[qt ++] = n;
      } else {
        n = root;
      }
    }
    for(int qh = 0; qh != qt; ++ qh) {
      Node *t = que[qh];
      int a = 0;
      for(Node *n : t->next) {
        if(n != nullptr) {
          que[qt ++] = n;
          Node *r = t->fail;
          while(r->next[a] == nullptr)
            r = r->fail;
          n->fail = r->next[a];
          n->sum += r->next[a]->sum;
        }
        ++ a;
      }
    }
  }

  static int matchPMA(const Node *t, const string &str) {
    int res = 0;
    for(char c : str) {
      int a = c - AlphabetBase;
      while(t->next[a] == nullptr)
        t = t->fail;
      t = t->next[a];
      res += t->sum;
    }
    return res;
  }

  vector<Node> nodes;
  int nNodes;
  vector<String> strings;
  vector<Node*> roots;
  vector<int> sizes;
  vector<Node*> que;
};

int main() {
  int m;
  while(~scanf("%d", &m)) {
    IncrementalAhoCorasic iac;
    iac.init(600000);
    rep(i, m) {
      int ty;
      char s[300001];
      scanf("%d%s", &ty, s);
      if(ty == 1) {
        iac.insert(s, +1);
      } else if(ty == 2) {
        iac.insert(s, -1);
      } else if(ty == 3) {
        int ans = iac.match(s);
        printf("%d\n", ans);
        fflush(stdout);
      } else {
        abort();
      }
    }
  }
  return 0;
}

11.3 KMP

vi pi(const string& s) {
  vi p(sz(s));
  rep(i,1,sz(s)) {
    int g = p[i-1];
    while (g && s[i] != s[g]) g = p[g-1];
    p[i] = g + (s[i] == s[g]);
  }
  return p;
}

vi match(const string& s, const string& pat) {
  vi p = pi(pat + '\0' + s), res;
  rep(i,sz(p)-sz(s),sz(p))
    if (p[i] == sz(pat)) res.push_back(i - 2 * sz(pat));
  return res;
}

11.4 Minimal String Rotation

// Lexicographically minimal string rotation
int lmsr() {
  string s;
  cin >> s;
  int n = s.size();
  s += s;
  vector<int> f(s.size(), -1);
  int k = 0;
  for (int j = 1; j < 2 * n; ++j) {
    int i = f[j - k - 1];
    while (i != -1 && s[j] != s[k + i + 1]) {
      if (s[j] < s[k + i + 1])
        k = j - i - 1;
      i = f[i];
    }
    if (i == -1 && s[j] != s[k + i + 1]) {
      if (s[j] < s[k + i + 1]) {
        k = j;
      }
      f[j - k] = -1;
    } else {
      f[j - k] = i + 1;
    }
  }
  return k;
}
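A macro-free restatement of the KMP prefix function is handy for checking the routine above by hand. This sketch is illustrative; the names `prefix_function` and `find_all` are not from the notebook.

```cpp
#include <string>
#include <vector>
using namespace std;

// Prefix function: p[i] = length of the longest proper prefix of
// s[0..i] that is also a suffix of s[0..i].
vector<int> prefix_function(const string& s) {
    int n = s.size();
    vector<int> p(n, 0);
    for (int i = 1; i < n; i++) {
        int g = p[i - 1];
        while (g > 0 && s[i] != s[g]) g = p[g - 1];
        p[i] = g + (s[i] == s[g] ? 1 : 0);
    }
    return p;
}

// All start positions of pat in s, via the pat + '\0' + s trick.
vector<int> find_all(const string& s, const string& pat) {
    string cat = pat;
    cat += '\0';
    cat += s;
    vector<int> p = prefix_function(cat), res;
    int m = pat.size();
    for (int i = m + 1; i < (int)cat.size(); i++)
        if (p[i] == m) res.push_back(i - 2 * m); // start index in s
    return res;
}
```

For example, prefix_function("ababaa") is {0, 0, 1, 2, 3, 1}, and find_all("ababa", "aba") reports occurrences at positions 0 and 2.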

11.5 Suffix Array

const int MAXN = 200005;
const int MAX_DIGIT = 256;

void countingSort(vector<int>& SA, vector<int>& RA, int k = 0) {
  int n = SA.size();
  vector<int> cnt(max(MAX_DIGIT, n), 0);
  for (int i = 0; i < n; i++)
    if (i + k < n)
      cnt[RA[i + k]]++;
    else
      cnt[0]++;
  for (int i = 1; i < cnt.size(); i++)
    cnt[i] += cnt[i - 1];
  vector<int> tempSA(n);
  for (int i = n - 1; i >= 0; i--)
    if (SA[i] + k < n)
      tempSA[--cnt[RA[SA[i] + k]]] = SA[i];
    else
      tempSA[--cnt[0]] = SA[i];
  SA = tempSA;
}

vector<int> constructSA(string s) {
  int n = s.length();
  vector<int> SA(n);
  vector<int> RA(n);
  vector<int> tempRA(n);
  for (int i = 0; i < n; i++) {
    RA[i] = s[i];
    SA[i] = i;
  }
  for (int step = 1; step < n; step <<= 1) {
    countingSort(SA, RA, step);
    countingSort(SA, RA, 0);
    int c = 0;
    tempRA[SA[0]] = c;
    for (int i = 1; i < n; i++) {
      if (RA[SA[i]] == RA[SA[i - 1]] && RA[SA[i] + step] == RA[SA[i - 1] + step])
        tempRA[SA[i]] = tempRA[SA[i - 1]];
      else
        tempRA[SA[i]] = tempRA[SA[i - 1]] + 1;
    }
    RA = tempRA;
    if (RA[SA[n - 1]] == n - 1) break;
  }
  return SA;
}

vector<int> computeLCP(const string& s, const vector<int>& SA) {
  int n = SA.size();
  vector<int> LCP(n), PLCP(n), c(n, 0);
  for (int i = 0; i < n; i++)
    c[SA[i]] = i;
  int k = 0;
  for (int j, i = 0; i < n-1; i++) {
    if (c[i] - 1 < 0)
      continue;
    j = SA[c[i] - 1];
    k = max(k - 1, 0);
    while (i+k < n && j+k < n && s[i + k] == s[j + k])
      k++;
    PLCP[i] = k;
  }
  for (int i = 0; i < n; i++)
    LCP[i] = PLCP[SA[i]];
  return LCP;
}

11.6 Suffix Automaton

/*
 * Suffix automaton:
 * This implementation was extended to maintain (online) the
 * number of different substrings. This is equivalent to computing
 * the number of paths from the initial state to all the other
 * states.
 *
 * The overall complexity is O(n).
 * Can be tested here:
 * https://www.urionlinejudge.com.br/judge/en/problems/view/1530
 * */

struct state {
  int len, link;
  long long num_paths;
  map<int, int> next;
};

const int MN = 200011;
state sa[MN << 1];
int sz, last;
long long tot_paths;

void sa_init() {
  sz = 1;
  last = 0;
  sa[0].len = 0;
  sa[0].link = -1;
  sa[0].next.clear();
  sa[0].num_paths = 1;
  tot_paths = 0;
}

void sa_extend(int c) {
  int cur = sz++;
  sa[cur].len = sa[last].len + 1;
  sa[cur].next.clear();
  sa[cur].num_paths = 0;
  int p;
  for (p = last; p != -1 && !sa[p].next.count(c); p = sa[p].link) {
    sa[p].next[c] = cur;
    sa[cur].num_paths += sa[p].num_paths;
    tot_paths += sa[p].num_paths;
  }

  if (p == -1) {
    sa[cur].link = 0;
  } else {
    int q = sa[p].next[c];
    if (sa[p].len + 1 == sa[q].len) {
      sa[cur].link = q;
    } else {
      int clone = sz++;
      sa[clone].len = sa[p].len + 1;
      sa[clone].next = sa[q].next;
      sa[clone].num_paths = 0;
      sa[clone].link = sa[q].link;
      for (; p != -1 && sa[p].next[c] == q; p = sa[p].link) {
        sa[p].next[c] = clone;
        sa[q].num_paths -= sa[p].num_paths;
        sa[clone].num_paths += sa[p].num_paths;
      }
      sa[q].link = sa[cur].link = clone;
    }
  }
  last = cur;
}
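An offline cross-check for the `num_paths` counter above is the standard identity: the number of distinct substrings equals the sum of len[v] − len[link[v]] over all non-initial states. A minimal sketch of a plain suffix automaton without the path-counting extension (the name `count_distinct_substrings` is illustrative):

```cpp
#include <map>
#include <string>
#include <vector>
using namespace std;

// Minimal suffix automaton; counts distinct substrings of s as
// the sum over states of len[v] - len[link[v]].
long long count_distinct_substrings(const string& s) {
    vector<int> len{0}, link{-1};
    vector<map<char,int>> nxt(1);
    int last = 0;
    for (char ch : s) {
        int cur = len.size();
        len.push_back(len[last] + 1); link.push_back(-1); nxt.push_back({});
        int p = last;
        for (; p != -1 && !nxt[p].count(ch); p = link[p]) nxt[p][ch] = cur;
        if (p == -1) link[cur] = 0;
        else {
            int q = nxt[p][ch];
            if (len[p] + 1 == len[q]) link[cur] = q;
            else {
                // split q: clone keeps q's transitions with shorter len
                int clone = len.size();
                len.push_back(len[p] + 1); link.push_back(link[q]); nxt.push_back(nxt[q]);
                for (; p != -1 && nxt[p][ch] == q; p = link[p]) nxt[p][ch] = clone;
                link[q] = link[cur] = clone;
            }
        }
        last = cur;
    }
    long long total = 0;
    for (size_t v = 1; v < len.size(); v++) total += len[v] - len[link[v]];
    return total;
}
```

For example, "abab" has the 7 distinct substrings a, b, ab, ba, aba, bab, abab.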

11.7 Suffix Tree

struct SuffixTree {
  enum { N = 200010, ALPHA = 26 }; // N ~ 2*maxlen+10
  int toi(char c) { return c - 'a'; }
  string a; // v = cur node, q = cur position
  int t[N][ALPHA], l[N], r[N], p[N], s[N], v = 0, q = 0, m = 2;

  void ukkadd(int i, int c) { suff:
    if (r[v] <= q) {
      if (t[v][c] == -1) {
        t[v][c] = m; l[m] = i;
        p[m++] = v; v = s[v]; q = r[v];
        goto suff;
      }
      v = t[v][c]; q = l[v];
    }
    if (q == -1 || c == toi(a[q])) q++; else {
      l[m+1] = i; p[m+1] = m; l[m] = l[v]; r[m] = q;
      p[m] = p[v]; t[m][c] = m+1;
      t[m][toi(a[q])] = v;
      l[v] = q; p[v] = m;
      t[p[m]][toi(a[l[m]])] = m;
      v = s[p[m]]; q = l[m];
      while (q < r[m]) {
        v = t[v][toi(a[q])];
        q += r[v] - l[v];
      }
      if (q == r[m]) s[m] = v; else s[m] = m+2;
      q = r[v] - (q - r[m]); m += 2; goto suff;
    }
  }

  SuffixTree(string a) : a(a) {
    fill(r, r+N, sz(a));
    memset(s, 0, sizeof s);
    memset(t, -1, sizeof t);
    fill(t[1], t[1]+ALPHA, 0);
    s[0] = 1; l[0] = l[1] = -1; r[0] = r[1] = p[0] = p[1] = 0;
    rep(i,0,sz(a)) ukkadd(i, toi(a[i]));
  }

  // example: find longest common substring (uses ALPHA = 28)
  pii best;
  int lcs(int node, int i1, int i2, int olen) {
    if (l[node] <= i1 && i1 < r[node]) return 1;
    if (l[node] <= i2 && i2 < r[node]) return 2;
    int mask = 0, len = node ? olen + (r[node] - l[node]) : 0;
    rep(c,0,ALPHA) if (t[node][c] != -1)
      mask |= lcs(t[node][c], i1, i2, len);
    if (mask == 3)
      best = max(best, {len, r[node] - len});
    return mask;
  }
  static pii LCS(string s, string t) {
    SuffixTree st(s + (char)('z' + 1) + t + (char)('z' + 2));
    st.lcs(0, sz(s), sz(s) + 1 + sz(t), 0);
    return st.best;
  }
};

11.8 Z Algorithm

vector<int> compute_z(const string &s) {
  int n = s.size();
  vector<int> z(n, 0);
  int l, r;
  r = l = 0;
  for (int i = 1; i < n; ++i) {
    if (i > r) {
      l = r = i;
      while (r < n and s[r - l] == s[r]) r++;
      z[i] = r - l; r--;
    } else {
      int k = i - l;
      if (z[k] < r - i + 1) z[i] = z[k];
      else {
        l = i;
        while (r < n and s[r - l] == s[r]) r++;
        z[i] = r - l; r--;
      }
    }
  }
  return z;
}

int main() {
  //string line;cin>>line;
  string line = "alfalfa";
  vector<int> z = compute_z(line);

  for (int i = 0; i < z.size(); ++i) {
    if (i) cout << " ";
    cout << z[i];
  }
  cout << endl;

  // must print "0 0 0 4 0 0 1"

  return 0;
}
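A standard use of the Z-function is pattern matching: run it on pat + '#' + text, where '#' is a separator occurring in neither string, and report every position whose Z-value reaches |pat|. A compact sketch (the names `z_function` and `z_find` and the half-open-window restatement are illustrative, not from the notebook):

```cpp
#include <string>
#include <vector>
using namespace std;

// Compact restatement of the Z-function with a half-open window [l, r).
vector<int> z_function(const string& s) {
    int n = s.size();
    vector<int> z(n, 0);
    for (int i = 1, l = 0, r = 0; i < n; ++i) {
        if (i < r) z[i] = min(r - i, z[i - l]);
        while (i + z[i] < n && s[z[i]] == s[i + z[i]]) z[i]++;
        if (i + z[i] > r) l = i, r = i + z[i];
    }
    return z;
}

// Occurrences of pat in text via Z on pat + '#' + text
// ('#' is assumed to occur in neither string).
vector<int> z_find(const string& text, const string& pat) {
    string s = pat + '#' + text;
    vector<int> z = z_function(s);
    vector<int> res;
    int m = pat.size();
    for (int i = m + 1; i < (int)s.size(); i++)
        if (z[i] >= m) res.push_back(i - m - 1); // start index in text
    return res;
}
```

For example, z_find("banana", "ana") reports positions 1 and 3.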