
Team notebook

PTIT.Nutriboost
December 11, 2024

Contents

1 Algorithms
  1.1 Mo's Algorithm
  1.2 Mo's Algorithms on Trees
  1.3 Mo's With Update
  1.4 Parallel Binary Search

2 Data Structures
  2.1 DSU Roll Back
  2.2 HLD with Euler Tour
  2.3 Hash Table
  2.4 Li Chao Tree
  2.5 Line Container
  2.6 Link Cut Tree
  2.7 Persistent DSU
  2.8 SQRT Tree
  2.9 STL Treap
  2.10 Sparse Table
  2.11 Trie
  2.12 Wavelet Tree

3 Dynamic Programming Optimization
  3.1 Convex Hull Trick
  3.2 Divide and Conquer

4 Geometry
  4.1 Closest Pair Problem
  4.2 Convex Diameter
  4.3 Pick Theorem
  4.4 Polygon Area
  4.5 Square
  4.6 Triangle

5 Graphs
  5.1 Dinic
  5.2 Directed MST
  5.3 Eulerian Path
  5.4 Gomory Hu
  5.5 HopCroft Karp
  5.6 Hungarian
  5.7 Konig's Theorem
  5.8 MCMF
  5.9 Manhattan MST
  5.10 Minimum Path Cover in DAG
  5.11 Planar Graph (Euler)
  5.12 Push Relabel
  5.13 Tarjan SCC
  5.14 Topological Sort
  5.15 Virtual Tree

6 Linear Algebra
  6.1 Matrix Determinant
  6.2 PolyRoots
  6.3 Polynomial

7 Maths
  7.1 Factorial Approximate
  7.2 Factorial
  7.3 Fast Fourier Transform
  7.4 General purpose numbers
  7.5 Lucas Theorem
  7.6 Math
  7.7 Mobius
  7.8 Multinomial
  7.9 Number Theoretic Transform
  7.10 Others
  7.11 Primitive Root
  7.12 Sieve 1e9
  7.13 Sigma Function
  7.14 SuperExp

8 Misc
  8.1 Dates

9 Number Theory
  9.1 Chinese Remainder Theorem
  9.2 Convolution
  9.3 Diophantine Equations
  9.4 Discrete Logarithm
  9.5 Ext Euclidean
  9.6 Fast Eratosthenes
  9.7 Miller - Rabin
  9.8 Mod Integer
  9.9 Number Theoretic Transform
  9.10 Pollard Rho Factorize
  9.11 Primes
  9.12 Totient Sieve
  9.13 Totient

10 Probability and Statistics
  10.1 Continuous Distributions
    10.1.1 Uniform distribution
    10.1.2 Exponential distribution
    10.1.3 Normal distribution
  10.2 Discrete Distributions
    10.2.1 Binomial distribution
    10.2.2 First success distribution
    10.2.3 Poisson distribution
  10.3 Probability Theory

11 Strings
  11.1 Hashing
  11.2 Incremental Aho Corasick
  11.3 KMP
  11.4 Suffix Array
  11.5 Suffix Automation
  11.6 Suffix Tree
  11.7 Z Algorithm


1 Algorithms

1.1 Mo's Algorithm

/*
https://www.spoj.com/problems/FREQ2/
*/
vector <int> MoQueries(int n, vector <query> Q){
    block_size = sqrt(n);
    sort(Q.begin(), Q.end(), [](const query &A, const query &B){
        return (A.l/block_size != B.l/block_size)?
               (A.l/block_size < B.l/block_size) : (A.r < B.r);
    });
    vector <int> res;
    res.resize((int)Q.size());

    int L = 1, R = 0;
    for(query q: Q){
        while (L > q.l) add(--L);
        while (R < q.r) add(++R);
        while (L < q.l) del(L++);
        while (R > q.r) del(R--);
        res[q.pos] = calc(1, R-L+1);
    }
    return res;
}

1.2 Mo's Algorithms on Trees

/*
Given a tree with N nodes and Q queries. Each node has an integer weight.
Each query provides two numbers u and v and asks how many different node
weights there are on the path from u to v.

----------
Modify DFS:
----------
For each node u, maintain the start and end DFS times. Let's call them
ST(u) and EN(u).
=> For each query, a node is considered if its occurrence count is one.

--------------
Query solving:
--------------
Let the query be (u, v). Assume that ST(u) <= ST(v). Denote P = LCA(u, v).

Case 1: P = u
Our query would be in range [ST(u), ST(v)].

Case 2: P != u
Our query would be in range [EN(u), ST(v)] + [ST(P), ST(P)]
*/

void update(int &L, int &R, int qL, int qR){
    while (L > qL) add(--L);
    while (R < qR) add(++R);
    while (L < qL) del(L++);
    while (R > qR) del(R--);
}

vector <int> MoQueries(int n, vector <query> Q){
    block_size = sqrt((int)nodes.size());
    sort(Q.begin(), Q.end(), [](const query &A, const query &B){
        return (ST[A.l]/block_size != ST[B.l]/block_size)?
               (ST[A.l]/block_size < ST[B.l]/block_size) :
               (ST[A.r] < ST[B.r]);
    });
    vector <int> res;
    res.resize((int)Q.size());

    LCA lca;
    lca.initialize(n);

    int L = 1, R = 0;
    for(query q: Q){
        int u = q.l, v = q.r;
        if(ST[u] > ST[v]) swap(u, v); // assume that ST[u] <= ST[v]
        int parent = lca.get(u, v);

        if(parent == u){
            int qL = ST[u], qR = ST[v];
            update(L, R, qL, qR);
        }else{
            int qL = EN[u], qR = ST[v];
            update(L, R, qL, qR);
            if(cnt_val[a[parent]] == 0)
                res[q.pos] += 1;
        }

        res[q.pos] += cur_ans;
    }
    return res;
}

1.3 Mo's With Update

// Tested:
// - https://www.spoj.com/problems/ADAUNIQ/
//
// Notes:
// - Updates must be set: A(u) = val
// - When implementing Update(id, new_value, cur_l, cur_r) -> void:
//   [cur_l, cur_r] = current segment
//   we need to handle the case where we update an index that is inside
//   [cur_l, cur_r]
//
// Mo algorithm with updates {{{
enum QueryType { GET = 0, UPDATE = 1 };

struct Query {
    int l, r;            // For get
    int u, val, old_val; // For update
    int id;
    QueryType typ;
};

template<typename Add, typename Rem, typename Update, typename Get>
void mo_with_updates(
        int n, const vector<Query>& queries,
        Add add, Rem rem, Update update, Get get) {
    // Separate update and get queries
    vector<Query> updates, gets;
    for (const auto& query : queries) {
        if (query.typ == QueryType::UPDATE) updates.push_back(query);
        else gets.push_back(query);
    }

    // Sort queries
    int S = std::max<int>(1, cbrtl(n + 0.5));
    S = S * S;

    sort(gets.begin(), gets.end(), [&] (const Query& q1, const Query& q2) {
        int l1 = q1.l / S;
        int l2 = q2.l / S;
        if (l1 != l2) return l1 < l2;

        int r1 = q1.r / S;
        int r2 = q2.r / S;
        if (r1 != r2) return (l1 % 2 == 0) ? r1 < r2 : r1 > r2;

        return (r1 % 2 == 0)
            ? q1.id < q2.id
            : q1.id > q2.id;
    });

    // Process queries
    int cur_l = -1, cur_r = -1, cur_update = -1;
    for (const auto& query : gets) {
        // move to [l, r]
        if (cur_l < 0) {
            for (int i = query.l; i <= query.r; ++i) add(i);
            cur_l = query.l;
            cur_r = query.r;
        } else {
            while (cur_l > query.l) add(--cur_l);
            while (cur_r < query.r) add(++cur_r);
            while (cur_r > query.r) rem(cur_r--);
            while (cur_l < query.l) rem(cur_l++);
        }

        // process updates
        // should we update more?
        while (cur_update + 1 < (int) updates.size()
                && updates[cur_update + 1].id < query.id) {
            ++cur_update;
            update(updates[cur_update].u,
                   updates[cur_update].val, cur_l, cur_r);
        }
        // should we update less?
        while (cur_update >= 0 && updates[cur_update].id > query.id) {
            update(updates[cur_update].u,
                   updates[cur_update].old_val, cur_l, cur_r);
            --cur_update;
        }

        get(query);
    }
}
// }}}

1.4 Parallel Binary Search

int lo[N], mid[N], hi[N];
vector<int> vec[N];

void clear() //Reset
{
    memset(bit, 0, sizeof(bit));
}

void apply(int idx) //Apply ith update/query
{
    if(ql[idx] <= qr[idx])
        update(ql[idx], qa[idx]), update(qr[idx]+1, -qa[idx]);
    else
    {
        update(1, qa[idx]);
        update(qr[idx]+1, -qa[idx]);
        update(ql[idx], qa[idx]);
    }
}

bool check(int idx) //Check if the condition is satisfied
{
    int req=reqd[idx];
    for(auto &it:owns[idx])
    {
        req-=pref(it);
        if(req<0)
            break;
    }
    if(req<=0)
        return 1;
    return 0;
}

void work()
{
    for(int i=1;i<=q;i++)
        vec[i].clear();
    for(int i=1;i<=n;i++)
        if(mid[i]>0)
            vec[mid[i]].push_back(i);
    clear();
    for(int i=1;i<=q;i++)
    {
        apply(i);
        for(auto &it:vec[i]) //Add appropriate check conditions
        {
            if(check(it))
                hi[it]=i;
            else
                lo[it]=i+1;
        }
    }
}

void parallel_binary()
{
    for(int i=1;i<=n;i++)
        lo[i]=1, hi[i]=q+1;
    bool changed = 1;
    while(changed)
    {
        changed=0;
        for(int i=1;i<=n;i++)
        {
            if(lo[i]<hi[i])
            {
                changed=1;
                mid[i]=(lo[i] + hi[i])/2;
            }
            else
                mid[i]=-1;
        }
        work();
    }
}

2 Data Structures

2.1 DSU Roll Back

// Tested:
// - (dynamic connectivity) https://codeforces.com/gym/100551/problem/A
// - (used for directed MST) https://judge.yosupo.jp/problem/directedmst
//
// 0-based
// DSU with rollback {{{
struct Data {
    int time, u, par; // before 'time', 'par' = par[u]
};

struct DSU {
    vector<int> par;
    vector<Data> change;

    DSU(int n) : par(n + 5, -1) {}

    // find root of x.
    // if par[x] < 0 then x is a root, and its tree has -par[x] nodes
    int getRoot(int x) {
        while (par[x] >= 0)
            x = par[x];
        return x;
    }

    bool same_component(int u, int v) {
        return getRoot(u) == getRoot(v);
    }

    // join components containing x and y.
    // t should be current time. We use it to update 'change'.
    bool join(int x, int y, int t) {
        x = getRoot(x);
        y = getRoot(y);
        if (x == y) return false;

        //union by rank
        if (par[x] < par[y]) swap(x, y);
        //now x's tree has less nodes than y's tree
        change.push_back({t, y, par[y]});
        par[y] += par[x];
        change.push_back({t, x, par[x]});
        par[x] = y;
        return true;
    }

    // rollback all changes at time > t.
    void rollback(int t) {
        while (!change.empty() && change.back().time > t) {
            par[change.back().u] = change.back().par;
            change.pop_back();
        }
    }
};
// }}}

2.2 HLD with Euler Tour

/*
HLD + Euler Tour combine:

1. Update or Query subtree of u: [st(u), en(u)]

2. Update or Query path of (u, v)
*/
const int N = 1e5 + 9, LG = 18, inf = 1e9 + 9;

struct ST {
#define lc (n << 1)
#define rc ((n << 1) | 1)
    int t[4 * N], lazy[4 * N];
    ST() {
        fill(t, t + 4 * N, -inf);
        fill(lazy, lazy + 4 * N, 0);
    }
    inline void push(int n, int b, int e) {
        if(lazy[n] == 0) return;
        t[n] = t[n] + lazy[n];
        if(b != e) {
            lazy[lc] = lazy[lc] + lazy[n];
            lazy[rc] = lazy[rc] + lazy[n];
        }
        lazy[n] = 0;
    }
    inline int combine(int a, int b) {
        return max(a, b); //merge left and right queries
    }
    inline void pull(int n) {
        t[n] = max(t[lc], t[rc]); //merge lower nodes of the tree to get the parent node
    }
    void build(int n, int b, int e) {
        if(b == e) {
            t[n] = 0;
            return;
        }
        int mid = (b + e) >> 1;
        build(lc, b, mid);
        build(rc, mid + 1, e);
        pull(n);
    }
    void upd(int n, int b, int e, int i, int j, int v) {
        push(n, b, e);
        if(j < b || e < i) return;
        if(i <= b && e <= j) {
            lazy[n] += v;
            push(n, b, e);
            return;
        }
        int mid = (b + e) >> 1;
        upd(lc, b, mid, i, j, v);
        upd(rc, mid + 1, e, i, j, v);
        pull(n);
    }
    int query(int n, int b, int e, int i, int j) {
        push(n, b, e);
        if(i > e || b > j) return -inf;
        if(i <= b && e <= j) return t[n];
        int mid = (b + e) >> 1;
        return combine(query(lc, b, mid, i, j), query(rc, mid + 1, e, i, j));
    }
} t;

vector<int> g[N];
int par[N][LG + 1], dep[N], sz[N];
void dfs(int u, int p = 0) {
    par[u][0] = p;
    dep[u] = dep[p] + 1;
    sz[u] = 1;
    for (int i = 1; i <= LG; i++) par[u][i] = par[par[u][i - 1]][i - 1];
    if (p) g[u].erase(find(g[u].begin(), g[u].end(), p));
    for (auto &v : g[u]) if (v != p) {
        dfs(v, u);
        sz[u] += sz[v];
        if(sz[v] > sz[g[u][0]]) swap(v, g[u][0]);
    }
}
int lca(int u, int v) {
    if (dep[u] < dep[v]) swap(u, v);
    for (int k = LG; k >= 0; k--) if (dep[par[u][k]] >= dep[v]) u = par[u][k];
    if (u == v) return u;
    for (int k = LG; k >= 0; k--) if (par[u][k] != par[v][k]) u = par[u][k], v = par[v][k];
    return par[u][0];
}
int kth(int u, int k) {
    assert(k >= 0);
    for (int i = 0; i <= LG; i++) if (k & (1 << i)) u = par[u][i];
    return u;
}
int T, head[N], st[N], en[N];
void dfs_hld(int u) {
    st[u] = ++T;
    for (auto v : g[u]) {
        head[v] = (v == g[u][0] ? head[u] : v);
        dfs_hld(v);
    }
    en[u] = T;
}

int n;

int query_path(int u, int v) {
    int ans = -inf;
    while(head[u] != head[v]) {
        if (dep[head[u]] < dep[head[v]]) swap(u, v);
        ans = max(ans, t.query(1, 1, n, st[head[u]], st[u]));
        u = par[head[u]][0];
    }
    if (dep[u] > dep[v]) swap(u, v);
    ans = max(ans, t.query(1, 1, n, st[u], st[v]));
    return ans;
}

void update_path(int u, int v, int val) {
    while(head[u] != head[v]) {
        if (dep[head[u]] < dep[head[v]]) swap(u, v);
        t.upd(1, 1, n, st[head[u]], st[u], val);
        u = par[head[u]][0];
    }
    if (dep[u] > dep[v]) swap(u, v);
    t.upd(1, 1, n, st[u], st[v], val);
}
//https://www.hackerrank.com/challenges/subtrees-and-paths/problem

2.3 Hash Table

// faster unordered_map
#include<bits/stdc++.h>
using namespace std;
#include<ext/pb_ds/assoc_container.hpp>
#include<ext/pb_ds/tree_policy.hpp>
using namespace __gnu_pbds;
struct custom_hash {
    static uint64_t splitmix64(uint64_t x) {
        x += 0x9e3779b97f4a7c15;
        x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9;
        x = (x ^ (x >> 27)) * 0x94d049bb133111eb;
        return x ^ (x >> 31);
    }
    size_t operator()(uint64_t x) const {
        static const uint64_t FIXED_RANDOM =
            chrono::steady_clock::now().time_since_epoch().count();
        return splitmix64(x + FIXED_RANDOM);
    }
};
gp_hash_table<int, int, custom_hash> mp;

2.4 Li Chao Tree

// LiChao SegTree
// Copied from https://judge.yosupo.jp/submission/60250
//
// Tested:
// - https://judge.yosupo.jp/problem/segment_add_get_min
// - https://judge.yosupo.jp/problem/line_add_get_min
// - (convex hull trick) https://oj.vnoi.info/problem/vmpizza
// - https://oj.vnoi.info/problem/vomario
using ll = long long;
const ll inf = 2e18;

struct Line {
    ll m, c;
    ll eval(ll x) {
        return m * x + c;
    }
};
struct node {
    Line line;
    node* left = nullptr;
    node* right = nullptr;
    node(Line line) : line(line) {}
    void add_segment(Line nw, int l, int r, int L, int R) {
        if (l > r || r < L || l > R) return;
        int m = (l + 1 == r ? l : (l + r) / 2);
        if (l >= L and r <= R) {
            bool lef = nw.eval(l) < line.eval(l);
            bool mid = nw.eval(m) < line.eval(m);
            if (mid) swap(line, nw);
            if (l == r) return;
            if (lef != mid) {
                if (left == nullptr) left = new node(nw);
                else left -> add_segment(nw, l, m, L, R);
            }
            else {
                if (right == nullptr) right = new node(nw);
                else right -> add_segment(nw, m + 1, r, L, R);
            }
            return;
        }
        if (max(l, L) <= min(m, R)) {
            if (left == nullptr) left = new node({0, inf});
            left -> add_segment(nw, l, m, L, R);
        }
        if (max(m + 1, L) <= min(r, R)) {
            if (right == nullptr) right = new node({0, inf});
            right -> add_segment(nw, m + 1, r, L, R);
        }
    }
    ll query_segment(ll x, int l, int r, int L, int R) {
        if (l > r || r < L || l > R) return inf;
        int m = (l + 1 == r ? l : (l + r) / 2);
        if (l >= L and r <= R) {
            ll ans = line.eval(x);
            if (l < r) {
                if (x <= m && left != nullptr)
                    ans = min(ans, left -> query_segment(x, l, m, L, R));
                if (x > m && right != nullptr)
                    ans = min(ans, right -> query_segment(x, m + 1, r, L, R));
            }
            return ans;
        }
        ll ans = inf;
        if (max(l, L) <= min(m, R)) {
            if (left == nullptr) left = new node({0, inf});
            ans = min(ans, left -> query_segment(x, l, m, L, R));
        }
        if (max(m + 1, L) <= min(r, R)) {
            if (right == nullptr) right = new node({0, inf});
            ans = min(ans, right -> query_segment(x, m + 1, r, L, R));
        }
        return ans;
    }
};

struct LiChaoTree {
    int L, R;
    node* root;
    LiChaoTree() : L(numeric_limits<int>::min() / 2),
                   R(numeric_limits<int>::max() / 2), root(nullptr) {}
    LiChaoTree(int L, int R) : L(L), R(R) {
        root = new node({0, inf});
    }
    void add_line(Line line) {
        root -> add_segment(line, L, R, L, R);
    }
    // y = mx + b: x in [l, r]
    void add_segment(Line line, int l, int r) {
        root -> add_segment(line, L, R, l, r);
    }
    ll query(ll x) {
        return root -> query_segment(x, L, R, L, R);
    }
    ll query_segment(ll x, int l, int r) {
        return root -> query_segment(x, l, r, L, R);
    }
};
// https://judge.yosupo.jp/problem/segment_add_get_min

2.5 Line Container

struct Line {
    mutable ll a, b, p;
    bool operator<(const Line& o) const { return a < o.a; }
    bool operator<(ll x) const { return p < x; }
};

struct DynamicHull : multiset<Line, less<>> {
    // Maintain to get maximum
    // (for doubles, use inf = 1/.0, div(a,b) = a/b)
    static const ll inf = LLONG_MAX;
    ll div(ll a, ll b) { // floored division
        return a / b - ((a ^ b) < 0 && a % b);
    }
    bool isect(iterator x, iterator y) {
        if (y == end()) return x->p = inf, 0;
        if (x->a == y->a) x->p = x->b > y->b ? inf : -inf;
        else x->p = div(y->b - x->b, x->a - y->a);
        return x->p >= y->p;
    }
    void add(ll a, ll b) {
        auto z = insert({a, b, 0}), y = z++, x = y;
        while (isect(y, z)) z = erase(z);
        if (x != begin() && isect(--x, y)) isect(x, y = erase(y));
        while ((y = x) != begin() && (--x)->p >= y->p)
            isect(x, erase(y));
    }
    ll qry(ll x) {
        assert(!empty());
        auto l = *lower_bound(x);
        return l.a * x + l.b;
    }
};

2.6 Link Cut Tree

/**
 * Author: Simon Lindholm
 * Date: 2016-07-25
 * Source: https://github.com/ngthanhtrung23/ACM_Notebook_new/blob/master/Dat
 * Description: Represents a forest of unrooted trees. You can add and remove
 * edges (as long as the result is still a forest), and check whether
 * two nodes are in the same tree.
 * Time: All operations take amortized O(\log N).
 * Status: Stress-tested a bit for N <= 20
 */
#pragma once

// Splay tree. Root's pp contains tree's parent.
struct Node {
    Node *p = 0, *pp = 0, *c[2];
    bool flip = 0;
    Node() { c[0] = c[1] = 0; fix(); }
    void fix() {
        if (c[0]) c[0]->p = this;
        if (c[1]) c[1]->p = this;
        // (+ update sum of subtree elements etc. if wanted)
    }
    void pushFlip() {
        if (!flip) return;
        flip = 0; swap(c[0], c[1]);
        if (c[0]) c[0]->flip ^= 1;
        if (c[1]) c[1]->flip ^= 1;
    }
    int up() { return p ? p->c[1] == this : -1; }
    void rot(int i, int b) {
        int h = i ^ b;
        Node *x = c[i], *y = b == 2 ? x : x->c[h], *z = b ? y : x;
        if ((y->p = p)) p->c[up()] = y;
        c[i] = z->c[i ^ 1];
        if (b < 2) {
            x->c[h] = y->c[h ^ 1];
            y->c[h ^ 1] = x;
        }
        z->c[i ^ 1] = this;
        fix(); x->fix(); y->fix();
        if (p) p->fix();
        swap(pp, y->pp);
    }
    void splay() { /// Splay this up to the root. Always finishes without flip set.
        for (pushFlip(); p; ) {
            if (p->p) p->p->pushFlip();
            p->pushFlip(); pushFlip();
            int c1 = up(), c2 = p->up();
            if (c2 == -1) p->rot(c1, 2);
            else p->p->rot(c2, c1 != c2);
        }
    }
    Node* first() { /// Return the min element of the subtree rooted at this, splayed to the top.
        pushFlip();
        return c[0] ? c[0]->first() : (splay(), this);
    }
};

struct LinkCut {
    vector<Node> node;
    LinkCut(int N) : node(N) {}

    void link(int u, int v) { // add an edge (u, v)
        assert(!connected(u, v));
        makeRoot(&node[u]);
        node[u].pp = &node[v];
    }
    void cut(int u, int v) { // remove an edge (u, v)
        Node *x = &node[u], *top = &node[v];
        makeRoot(top); x->splay();
        assert(top == (x->pp ?: x->c[0]));
        if (x->pp) x->pp = 0;
        else {
            x->c[0] = top->p = 0;
            x->fix();
        }
    }
    bool connected(int u, int v) { // are u, v in the same tree?
        Node* nu = access(&node[u])->first();
        return nu == access(&node[v])->first();
    }
    void makeRoot(Node* u) { /// Move u to root of represented tree.
        access(u);
        u->splay();
        if(u->c[0]) {
            u->c[0]->p = 0;
            u->c[0]->flip ^= 1;
            u->c[0]->pp = u;
            u->c[0] = 0;
            u->fix();
        }
    }
    Node* access(Node* u) { /// Move u to root aux tree. Return the root of the root aux tree.
        u->splay();
        while (Node* pp = u->pp) {
            pp->splay(); u->pp = 0;
            if (pp->c[1]) {
                pp->c[1]->p = 0; pp->c[1]->pp = pp;
            }
            pp->c[1] = u; pp->fix(); u = pp;
        }
        return u;
    }
};

2.7 Persistent DSU

// PersistentDSU
//
// Notes:
// - this doesn't support delete edge operation, so isn't enough to
//   solve dynamic connectivity problem.
// - it has high mem and time usage, so be careful (both TLE and MLE on
//   https://oj.vnoi.info/problem/hello22_schoolplan)
//
// Tested:
// - https://judge.yosupo.jp/problem/persistent_unionfind
#include "../PersistentArray.h"
struct PersistentDSU {
    int n;
    using Arr = PersistentArray<int>;

    PersistentDSU(int _n) : n(_n) {
        roots.emplace_back(A.build(std::vector<int> (n, -1)));
    }

    int find(int version, int u) {
        // Note that we can't do path compression here
        int p = A.get(roots[version], u);
        return p < 0 ? u : find(version, p);
    }

    // Note that this will always create a new version,
    // regardless of whether u and v was previously in same component.
    bool merge(int version, int u, int v) {
        u = find(version, u);
        v = find(version, v);
        auto ptr = roots[version];
        if (u != v) {
            int sz_u = -A.get(ptr, u), sz_v = -A.get(ptr, v);
            if (sz_u < sz_v) swap(u, v);
            // sz[u] >= sz[v]
            ptr = A.set(ptr, u, -sz_u - sz_v);
            ptr = A.set(ptr, v, u);
        }

        roots.emplace_back(ptr);
        return u != v;
    }

    int component_size(int version, int u) {
        return -A.get(roots[version], find(version, u));
    }

    bool same_component(int version, int u, int v) {
        return find(version, u) == find(version, v);
    }

    Arr A;
    vector<Arr::Node*> roots;
};

2.8 SQRT Tree

#include<bits/stdc++.h>
using namespace std;

/* Given an array a that contains n elements and the
operation op that satisfies the associative property:
(x op y) op z = x op (y op z) is true for any x, y, z.

The following implementation of Sqrt Tree can perform the following operations:
build in O(n log log n),
answer queries in O(1) and update an element in O(sqrt(n)). */

#define SqrtTreeItem int //change for the type you want

SqrtTreeItem op(const SqrtTreeItem &a, const SqrtTreeItem &b) {
    return a + b; //just change this operation for different problems,
                  //no change is required inside the code
}

inline int log2Up(int n) {
    int res = 0;
    while ((1 << res) < n) {
        res++;
    }
    return res;
}
//0-indexed
struct SqrtTree {
    int n, llg, indexSz;
    vector<SqrtTreeItem> v;
    vector<int> clz, layers, onLayer;
    vector< vector<SqrtTreeItem> > pref, suf, between;

    inline void buildBlock(int layer, int l, int r) {
        pref[layer][l] = v[l];
        for (int i = l + 1; i < r; i++) {
            pref[layer][i] = op(pref[layer][i - 1], v[i]);
        }
        suf[layer][r - 1] = v[r - 1];
        for (int i = r - 2; i >= l; i--) {
            suf[layer][i] = op(v[i], suf[layer][i + 1]);
        }
    }

    inline void buildBetween(int layer, int lBound, int rBound, int betweenOffs) {
        int bSzLog = (layers[layer] + 1) >> 1;
        int bCntLog = layers[layer] >> 1;
        int bSz = 1 << bSzLog;
        int bCnt = (rBound - lBound + bSz - 1) >> bSzLog;
        for (int i = 0; i < bCnt; i++) {
            SqrtTreeItem ans;
            for (int j = i; j < bCnt; j++) {
                SqrtTreeItem add = suf[layer][lBound + (j << bSzLog)];
                ans = (i == j) ? add : op(ans, add);
                between[layer - 1][betweenOffs + lBound + (i << bCntLog) + j] = ans;
            }
        }
    }

    inline void buildBetweenZero() {
        int bSzLog = (llg + 1) >> 1;
        for (int i = 0; i < indexSz; i++) {
            v[n + i] = suf[0][i << bSzLog];
        }
        build(1, n, n + indexSz, (1 << llg) - n);
    }

    inline void updateBetweenZero(int bid) {
        int bSzLog = (llg + 1) >> 1;
        v[n + bid] = suf[0][bid << bSzLog];
        update(1, n, n + indexSz, (1 << llg) - n, n + bid);
    }

    void build(int layer, int lBound, int rBound, int betweenOffs) {
        if (layer >= (int)layers.size()) {
            return;
        }
        int bSz = 1 << ((layers[layer] + 1) >> 1);
        for (int l = lBound; l < rBound; l += bSz) {
            int r = min(l + bSz, rBound);
            buildBlock(layer, l, r);
            build(layer + 1, l, r, betweenOffs);
        }
        if (layer == 0) {
            buildBetweenZero();
        } else {
            buildBetween(layer, lBound, rBound, betweenOffs);
        }
    }

    void update(int layer, int lBound, int rBound, int betweenOffs, int x) {
        if (layer >= (int)layers.size()) {
            return;
        }
        int bSzLog = (layers[layer] + 1) >> 1;
        int bSz = 1 << bSzLog;
        int blockIdx = (x - lBound) >> bSzLog;
        int l = lBound + (blockIdx << bSzLog);
        int r = min(l + bSz, rBound);
        buildBlock(layer, l, r);
        if (layer == 0) {
            updateBetweenZero(blockIdx);
        } else {
            buildBetween(layer, lBound, rBound, betweenOffs);
        }
        update(layer + 1, l, r, betweenOffs, x);
    }

    inline SqrtTreeItem query(int l, int r, int betweenOffs, int base) {
        if (l == r) {
            return v[l];
        }
        if (l + 1 == r) {
            return op(v[l], v[r]);
        }
        int layer = onLayer[clz[(l - base) ^ (r - base)]];
        int bSzLog = (layers[layer] + 1) >> 1;
        int bCntLog = layers[layer] >> 1;
        int lBound = (((l - base) >> layers[layer]) << layers[layer]) + base;
        int lBlock = ((l - lBound) >> bSzLog) + 1;
        int rBlock = ((r - lBound) >> bSzLog) - 1;
        SqrtTreeItem ans = suf[layer][l];
        if (lBlock <= rBlock) {
            SqrtTreeItem add = (layer == 0) ? (
                query(n + lBlock, n + rBlock, (1 << llg) - n, n)
            ) : (
                between[layer - 1][betweenOffs + lBound + (lBlock << bCntLog) + rBlock]
            );
            ans = op(ans, add);
        }
        ans = op(ans, pref[layer][r]);
        return ans;
    }

    inline SqrtTreeItem query(int l, int r) {
        return query(l, r, 0, 0);
    }

    inline void update(int x, const SqrtTreeItem &item) {
        v[x] = item;
        update(0, 0, n, 0, x);
    }

    SqrtTree(const vector<SqrtTreeItem>& a)
        : n((int)a.size()), llg(log2Up(n)), v(a), clz(1 << llg), onLayer(llg + 1) {
        clz[0] = 0;
        for (int i = 1; i < (int)clz.size(); i++) {
            clz[i] = clz[i >> 1] + 1;
        }
        int tllg = llg;
        while (tllg > 1) {
            onLayer[tllg] = (int)layers.size();
            layers.push_back(tllg);
            tllg = (tllg + 1) >> 1;
        }
        for (int i = llg - 1; i >= 0; i--) {
            onLayer[i] = max(onLayer[i], onLayer[i + 1]);
        }
        int betweenLayers = max(0, (int)layers.size() - 1);
        int bSzLog = (llg + 1) >> 1;
        int bSz = 1 << bSzLog;
        indexSz = (n + bSz - 1) >> bSzLog;
        v.resize(n + indexSz);
        pref.assign(layers.size(), vector<SqrtTreeItem>(n + indexSz));
        suf.assign(layers.size(), vector<SqrtTreeItem>(n + indexSz));
        between.assign(betweenLayers, vector<SqrtTreeItem>((1 << llg) + bSz));
        build(0, 0, n, 0);
    }
};
int main() {
    int i, j, k, n, m, q, l, r;
    cin >> n;
    vector<int> v;
    for(i = 0; i < n; i++) cin >> k, v.push_back(k);
    SqrtTree t = SqrtTree(v);
    cin >> q;
    while(q--) {
        cin >> l >> r;
        --l, --r;
        cout << t.query(l, r) << endl;
    }
}
// https://cp-algorithms.com/data_structures/sqrt-tree.html

l->r = merge(l->r, r); } wavelet_tree *l, *r;


l->recalc(); }; int *b, *c, bsz, csz; // c holds the prefix sum of elements
return l;
} else { wavelet_tree() {
r->l = merge(l, r->l); lo = 1;
r->recalc(); 2.11 Trie hi = 0;
return r; bsz = 0;
} csz = 0, l = NULL;
} const int MN = 26; // size of alphabet r = NULL;
const int MS = 100010; // Number of states. }
Node* ins(Node* t, Node* n, int pos) {
auto pa = split(t, pos); struct trie{ void init(int *from, int *to, int x, int y) {
return merge(merge(pa.first, n), pa.second); struct node{ lo = x, hi = y;
} int c; if(from >= to) return;
int a[MN]; int mid = (lo + hi) >> 1;
// Example application: move the range [l, r) to index k }; auto f = [mid](int x) {
void move(Node*& t, int l, int r, int k) { return x <= mid;
Node *a, *b, *c; node tree[MS]; };
tie(a,b) = split(t, l); tie(b,c) = split(b, r - l); int nodes; b = (int*)malloc((to - from + 2) * sizeof(int));
if (k <= l) t = merge(ins(a, b, k), c); bsz = 0;
else t = merge(a, ins(c, b, k - r)); void clear(){ b[bsz++] = 0;
} tree[nodes].c = 0; c = (int*)malloc((to - from + 2) * sizeof(int));
memset(tree[nodes].a, -1, sizeof tree[nodes].a); csz = 0;
nodes++; c[csz++] = 0;
} for(auto it = from; it != to; it++) {
2.10 Sparse Table b[bsz] = (b[bsz - 1] + f(*it));
void init(){ c[csz] = (c[csz - 1] + (*it));
nodes = 0; bsz++;
clear(); csz++;
template <typename T, typename func = function<T(const T, const } }
T)>> if(hi == lo) return;
struct SparseTable { int add(const string &s, bool query = 0){ auto pivot = stable_partition(from, to, f);
func calc; int cur_node = 0; l = new wavelet_tree();
int n; for(int i = 0; i < s.size(); ++i){ l->init(from, pivot, lo, mid);
vector<vector<T>> ans; int id = gid(s[i]); r = new wavelet_tree();
if(tree[cur_node].a[id] == -1){ r->init(pivot, to, mid + 1, hi);
SparseTable() {} if(query) return 0; }
tree[cur_node].a[id] = nodes; //kth smallest element in [l, r]
SparseTable(const vector<T>& a, const func& f) : clear(); //for array [1,2,1,3,5] 2nd smallest is 1 and 3rd smallest is
n(a.size()), calc(f) { } 2
int last = trunc(log2(n)) + 1; cur_node = tree[cur_node].a[id]; int kth(int l, int r, int k) {
ans.resize(n); } if(l > r) return 0;
for (int i = 0; i < n; i++){ if(!query) tree[cur_node].c++; if(lo == hi) return lo;
ans[i].resize(last); return tree[cur_node].c; int inLeft = b[r] - b[l - 1], lb = b[l - 1], rb = b[r];
} } if(k <= inLeft) return this->l->kth(lb + 1, rb, k);
for (int i = 0; i < n; i++){ return this->r->kth(l - lb, r - rb, k - inLeft);
ans[i][0] = a[i]; }; }
} //count of numbers in [l, r] Less than or equal to k
for (int j = 1; j < last; j++){ int LTE(int l, int r, int k) {
for (int i = 0; i <= n - (1 << j); i++){ if(l > r || k < lo) return 0;
ans[i][j] = calc(ans[i][j - 1], ans[i + (1 << (j 2.12 Wavelet Tree if(hi <= k) return r - l + 1;
- 1))][j - 1]); int lb = b[l - 1], rb = b[r];
} return this->l->LTE(lb + 1, rb, k) + this->r->LTE(l - lb, r
} const int MAXN = (int)3e5 + 9; - rb, k);
} const int MAXV = (int)1e9 + 9; //maximum value of any element }
in array //count of numbers in [l, r] equal to k
T query(int l, int r){ //array values can be negative too, use appropriate minimum and int count(int l, int r, int k) {
assert(0 <= l && l <= r && r < n); maximum value if(l > r || k < lo || k > hi) return 0;
int k = trunc(log2(r - l + 1)); struct wavelet_tree { if(lo == hi) return r - l + 1;
return calc(ans[l][k], ans[r - (1 << k) + 1][k]); int lo, hi; int lb = b[l - 1], rb = b[r];
PTIT.Nutriboost 9
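The sparse table above answers idempotent range queries in O(1) after O(n log n) preprocessing. A minimal standalone sketch specialized to range minimum (an illustrative rewrite, not the templated struct above):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Immutable sparse table for range-minimum queries.
// Build O(n log n), query O(1); works for any idempotent op (min, max, gcd).
struct RMQ {
    vector<vector<int>> t;
    RMQ(const vector<int>& a) {
        int n = a.size(), K = 32 - __builtin_clz(n);
        t.assign(K, vector<int>(n));
        t[0] = a;
        for (int j = 1; j < K; j++)
            for (int i = 0; i + (1 << j) <= n; i++)
                t[j][i] = min(t[j - 1][i], t[j - 1][i + (1 << (j - 1))]);
    }
    int query(int l, int r) { // min over [l, r], inclusive
        int k = 31 - __builtin_clz(r - l + 1);
        return min(t[k][l], t[k][r - (1 << k) + 1]);
    }
};
```

The two overlapping blocks of length 2^k cover [l, r]; overlap is harmless because min is idempotent.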

int mid = (lo + hi) >> 1; int lo = 0, hi = memo.size() - 1; }


if(k <= mid) return this->l->count(lb + 1, rb, k); while (lo != hi){
return this->r->count(l - lb, r - rb, k); int mi = (lo + hi) / 2;
} if (Fn(memo[mi], x) > Fn(memo[mi + 1], x)){
//sum of numbers in [l ,r] less than or equal to k lo = mi + 1;
int sum(int l, int r, int k) { } 4 Geometry
if(l > r or k < lo) return 0; else hi = mi;
if(hi <= k) return c[r] - c[l - 1]; } 4.1 Closest Pair Problem
int lb = b[l - 1], rb = b[r]; return Fn(memo[lo], x);
return this->l->sum(lb + 1, rb, k) + this->r->sum(l - lb, r }
- rb, k); struct point {
} const int N = 1e6 + 1; double x, y;
~wavelet_tree() { long dp[N]; int id;
delete l; point() {}
delete r; int main() point (double a, double b) : x(a), y(b) {}
} { };
}; fastio;
wavelet_tree t; int n, c; cin >> n >> c; double dist(const point &o, const point &p) {
vector<line> memo; double a = p.x - o.x, b = p.y - o.y;
for (int i = 1; i <= n; i++){ return sqrt(a * a + b * b);
long val; cin >> val; }
addLine(memo, {-2 * val, val * val + dp[i - 1]});
3 Dynamic Programming Optimization dp[i] = query(memo, val) + val * val + c; double cp(vector<point> &p, vector<point> &x, vector<point> &y)
} {
3.1 Convex Hull Trick cout << dp[n] << ’\n’; if (p.size() < 4) {
return 0; double best = 1e100;
} for (int i = 0; i < p.size(); ++i)
#define long long long for (int j = i + 1; j < p.size(); ++j)
#define pll pair <long, long> best = min(best, dist(p[i], p[j]));
#define all(c) c.begin(), c.end() return best;
#define fastio ios_base::sync_with_stdio(false); cin.tie(0) 3.2 Divide and Conquer }

struct line{ int ls = (p.size() + 1) >> 1;


long a, b; /** double l = (p[ls - 1].x + p[ls].x) * 0.5;
line() {}; * recurrence: vector<point> xl(ls), xr(p.size() - ls);
line(long a, long b) : a(a), b(b) {}; * dp[k][i] = min dp[k-1][j] + c[i][j - 1], for all j > i; unordered_set<int> left;
bool operator < (const line &A) const { * for (int i = 0; i < ls; ++i) {
return pll(a,b) < pll(A.a,A.b); * "comp" computes dp[k][i] for all i in O(n log n) (k is fixed) xl[i] = x[i];
} * left.insert(x[i].id);
}; * Problems: }
* https://icpc.kattis.com/problems/branch for (int i = ls; i < p.size(); ++i) {
bool bad(line A, line B, line C){ * http://codeforces.com/contest/321/problem/E xr[i - ls] = x[i];
return (C.b - B.b) * (A.a - B.a) <= (B.b - A.b) * (B.a - * */ }
C.a);
} void comp(int l, int r, int le, int re) { vector<point> yl, yr;
if (l > r) return; vector<point> pl, pr;
void addLine(vector<line> &memo, line cur){ yl.reserve(ls); yr.reserve(p.size() - ls);
int k = memo.size(); int mid = (l + r) >> 1; pl.reserve(ls); pr.reserve(p.size() - ls);
while (k >= 2 && bad(memo[k - 2], memo[k - 1], cur)){ for (int i = 0; i < p.size(); ++i) {
memo.pop_back(); int best = max(mid + 1, le); if (left.count(y[i].id))
k--; dp[cur][mid] = dp[cur ^ 1][best] + cost(mid, best - 1); yl.push_back(y[i]);
} for (int i = best; i <= re; i++) { else
memo.push_back(cur); if (dp[cur][mid] > dp[cur ^ 1][i] + cost(mid, i - 1)) { yr.push_back(y[i]);
} best = i;
dp[cur][mid] = dp[cur ^ 1][i] + cost(mid, i - 1); if (left.count(p[i].id))
long Fn(line A, long x){ } pl.push_back(p[i]);
return A.a * x + A.b; } else
} pr.push_back(p[i]);
comp(l, mid - 1, le, best); }
long query(vector<line> &memo, long x){ comp(mid + 1, r, best, re);
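The addLine/query pair in the Convex Hull Trick section maintains the lower envelope for lines inserted in decreasing-slope order and binary-searches the unimodal values at a fixed x. A self-contained sketch with a numeric check (pairs instead of the line struct; __int128 in bad() guards against overflow):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Monotone convex hull trick for minimum: lines y = a*x + b are added in
// strictly decreasing order of slope a; query(x) returns min over all lines.
struct CHT {
    vector<pair<ll, ll>> h; // (a, b)
    bool bad(pair<ll, ll> A, pair<ll, ll> B, pair<ll, ll> C) {
        // B never attains the minimum if A and C intersect "before" A and B do
        return (__int128)(C.second - B.second) * (A.first - B.first)
            <= (__int128)(B.second - A.second) * (B.first - C.first);
    }
    void add(ll a, ll b) {
        while (h.size() >= 2 && bad(h[h.size() - 2], h.back(), {a, b}))
            h.pop_back();
        h.push_back({a, b});
    }
    ll val(int i, ll x) { return h[i].first * x + h[i].second; }
    ll query(ll x) { // values along the hull are unimodal in the line index
        int lo = 0, hi = (int)h.size() - 1;
        while (lo < hi) {
            int mi = (lo + hi) / 2;
            if (val(mi, x) > val(mi + 1, x)) lo = mi + 1;
            else hi = mi;
        }
        return val(lo, x);
    }
};
```

With lines y = 3x, y = x + 2, y = -x + 10, the minimum at x = 2 is 4 (middle line).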

double dl = cp(pl, xl, yl); vector <point> convex; maxi = i;


double dr = cp(pr, xr, yr); sort(points.begin(), points.end(), [](const point &A, const maxj = j;
double d = min(dl, dr); point &B){ }
vector<point> yp; yp.reserve(p.size()); return (A.x == B.x)? (A.y < B.y): (A.x < B.x); }while(i != is || j != js);
for (int i = 0; i < p.size(); ++i) { }); return sqrt(maxd);
if (fabs(y[i].x - l) < d) vector <point> Up, Down; }
yp.push_back(y[i]); point A = points[0], B = points.back();
} Up.push_back(A);
for (int i = 0; i < yp.size(); ++i) { Down.push_back(A);
for (int j = i + 1; j < yp.size() && j < i + 7; ++j) { 4.3 Pick Theorem
d = min(d, dist(yp[i], yp[j])); for(int i = 0; i < points.size(); i++){
} if(i == points.size()-1 || cross(A, points[i], B) > 0){
} while(Up.size() > 2 && cross(Up[Up.size()-2], struct point{
return d; Up[Up.size()-1], points[i]) <= 0) ll x, y;
} Up.pop_back(); };
Up.push_back(points[i]);
double closest_pair(vector<point> &p) { } //Pick: S = I + B/2 - 1
vector<point> x(p.begin(), p.end()); if(i == points.size()-1 || cross(A, points[i], B) < 0){
sort(x.begin(), x.end(), [](const point &a, const point &b) { while(Down.size() > 2 && cross(Down[Down.size()-2], ld polygonArea(vector <point> &points){
return a.x < b.x; Down[Down.size()-1], points[i]) >= 0) int n = (int)points.size();
}); Down.pop_back(); ld area = 0.0;
vector<point> y(p.begin(), p.end()); Down.push_back(points[i]); int j = n-1;
sort(y.begin(), y.end(), [](const point &a, const point &b) { } for(int i = 0; i < n; i++){
return a.y < b.y; } area += (points[j].x + points[i].x) * (points[j].y -
}); for(int i = 0; i < Up.size(); i++) convex.push_back(Up[i]); points[i].y);
return cp(p, x, y); for(int i = Down.size()-2; i > 0; i--) j = i;
} convex.push_back(Down[i]); }
return convex;
} return abs(area/2.0);
}
4.2 Convex Diameter int dist(point A, point B){
return (A.x - B.x)*(A.x - B.x) + (A.y - B.y)*(A.y - B.y); ll boundary(vector <point> points){
} int n = (int)points.size();
struct point{ ll num_bound = 0;
int x, y; double findConvexDiameter(vector <point> convexHull){ for(int i = 0; i < n; i++){
}; int n = convexHull.size(); ll dx = (points[i].x - points[(i+1)%n].x);
ll dy = (points[i].y - points[(i+1)%n].y);
struct vec{ int is = 0, js = 0; num_bound += abs(__gcd(dx, dy)) - 1;
int x, y; for(int i = 1; i < n; i++){ }
}; if(convexHull[i].y > convexHull[is].y) return num_bound;
is = i; }
vec operator - (const point &A, const point &B){ if(convexHull[js].y > convexHull[i].y)
return vec{A.x - B.x, A.y - B.y}; js = i;
} }
4.4 Polygon Area
int cross(vec A, vec B){ int maxd = dist(convexHull[is], convexHull[js]);
return A.x*B.y - A.y*B.x; int i, maxi, j, maxj;
} i = maxi = is; #include <bits/stdc++.h>
j = maxj = js; using namespace std;
int cross(point A, point B, point C){ do{ struct Point {
int val = A.x*(B.y - C.y) + B.x*(C.y - A.y) + C.x*(A.y - int ni = (i+1)%n, nj = (j+1)%n; int x, y;
B.y); if(cross(convexHull[ni] - convexHull[i], convexHull[nj] Point(int a = 0, int b = 0) : x(a), y(b) {}
if(val == 0) - convexHull[j]) <= 0){ friend istream &operator>>(istream &in, Point &p) {
return 0; // collinear j = nj; int x, y;
if(val < 0) }else{ in >> p.x >> p.y;
return 1; // clockwise i = ni; return in;
return -1; //counter clockwise } }
} int d = dist(convexHull[i], convexHull[j]); };
if(d > maxd){ int main() {
vector <point> findConvexHull(vector <point> points){ maxd = d; int n;
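Pick's theorem (S = I + B/2 - 1) rearranges to I = S - B/2 + 1, with the boundary count B taken per edge via gcd exactly as in boundary() above. A compact standalone sketch combining the shoelace area and the gcd boundary count:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Number of interior lattice points of a simple lattice polygon,
// via Pick's theorem: I = S - B/2 + 1. Vertices in order, no repeats.
ll interior_points(const vector<pair<ll, ll>>& p) {
    int n = p.size();
    ll twiceS = 0, B = 0;
    for (int i = 0; i < n; i++) {
        int j = (i + 1) % n;
        twiceS += p[i].first * p[j].second - p[j].first * p[i].second; // shoelace
        B += __gcd(llabs(p[i].first - p[j].first),
                   llabs(p[i].second - p[j].second)); // lattice points per edge
    }
    return (llabs(twiceS) - B) / 2 + 1; // 2S - B = 2I - 2
}
```

For the 2x2 axis-aligned square the only interior lattice point is (1,1).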

cin >> n; return true;


vector<Point> points(n); return false; p = (a + b + c) ∗ 0.5
for (auto &p : points) { cin >> p; } }
points.push_back(points[0]); The inradius is defined by:
bool inside(square &s1, square &s2) { s
// Already rotated in clockwise for (int i = 0; i < 4; ++i)
(p − a)(p − b)(p − c)
long long area = 0; if (point_in_box(s2, s1.edges[i])) iR =
for (int i = 0; i < points.size(); i++) { return true; p
area +=
(1LL * points[i].x * points[i + 1].y - 1LL * return false; The radius of its circumcircle is given by the formula:
points[i].y * points[i + 1].x); }
}
cout << labs(area) << ’\n’; bool inside_vert(square &s1, square &s2) { abc
cR = p
} if ((cmp(s1.y1, s2.y1) != -1 && cmp(s1.y1, s2.y2) != 1) || (a + b + c)(a + b − c)(a + c − b)(b + c − a)
(cmp(s1.y2, s2.y1) != -1 && cmp(s1.y2, s2.y2) != 1))
return true;
return false; 5 Graphs
4.5 Square }
5.1 Dinic
bool inside_hori(square &s1, square &s2) {
if ((cmp(s1.x1, s2.x1) != -1 && cmp(s1.x1, s2.x2) != 1) ||
typedef long double ld; (cmp(s1.x2, s2.x1) != -1 && cmp(s1.x2, s2.x2) != 1)) #include<bits/stdc++.h>
return true; using namespace std;
const ld eps = 1e-12; return false; const int N = 5010;
int cmp(ld x, ld y = 0, ld tol = eps) { } const long long inf = 1LL << 61;
return ( x <= y + tol) ? (x + tol < y) ? -1 : 0 : 1; struct Dinic {
} ld min_dist(square &s1, square &s2) { struct edge {
if (inside(s1, s2) || inside(s2, s1)) int to, rev, id;
struct point{ return 0; long long flow, w;
ld x, y; };
point(ld a, ld b) : x(a), y(b) {} ld ans = 1e100; int n, s, t, mxid;
point() {} for (int i = 0; i < 4; ++i) vector<int> d, flow_through;
}; for (int j = 0; j < 4; ++j) vector<int> done;
ans = min(ans, min_dist(s1.edges[i], s2.edges[j])); vector<vector<edge>> g;
struct square{ Dinic() {}
ld x1, x2, y1, y2, Dinic(int _n) {
a, b, c; if (inside_hori(s1, s2) || inside_hori(s2, s1)) { n = _n + 10; mxid = 0; g.resize(n);
point edges[4]; if (cmp(s1.y1, s2.y2) != -1) }
square(ld _a, ld _b, ld _c) { ans = min(ans, s1.y1 - s2.y2); void add_edge(int u, int v, long long w, int id = -1) {
a = _a, b = _b, c = _c; else edge a = {v, (int)g[v].size(), 0, w, id};
x1 = a - c * 0.5; if (cmp(s2.y1, s1.y2) != -1) edge b = {u, (int)g[u].size(), 0, 0, -2};//for
x2 = a + c * 0.5; ans = min(ans, s2.y1 - s1.y2); bidirectional edges cap(b) = w
y1 = b - c * 0.5; } g[u].emplace_back(a); g[v].emplace_back(b);
y2 = b + c * 0.5; mxid = max(mxid, id);
edges[0] = point(x1, y1); if (inside_vert(s1, s2) || inside_vert(s2, s1)) { }
edges[1] = point(x2, y1); if (cmp(s1.x1, s2.x2) != -1) bool bfs() {
edges[2] = point(x2, y2); ans = min(ans, s1.x1 - s2.x2); d.assign(n, -1); d[s] = 0; queue<int> q; q.push(s);
edges[3] = point(x1, y2); else while (!q.empty()) {
} if (cmp(s2.x1, s1.x2) != -1) int u = q.front(); q.pop();
}; ans = min(ans, s2.x1 - s1.x2); for (auto &e : g[u]) {
} int v = e.to;
ld min_dist(point &a, point &b) { if (d[v] == -1 && e.flow < e.w) d[v] = d[u] + 1,
ld x = a.x - b.x, return ans; q.push(v);
y = a.y - b.y; } }
return sqrt(x * x + y * y); }
} return d[t] != -1;
}
bool point_in_box(square s1, point p) { 4.6 Triangle long long dfs(int u, long long flow) {
if (cmp(s1.x1, p.x) != 1 && cmp(s1.x2, p.x) != -1 && if (u == t) return flow;
cmp(s1.y1, p.y) != 1 && cmp(s1.y2, p.y) != -1) Let a, b, c be length of the three sides of a triangle. for (int &i = done[u]; i < (int)g[u].size(); i++) {
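The triangle formulas above, restated cleanly: with semiperimeter p = (a + b + c)/2, the inradius is iR = sqrt((p-a)(p-b)(p-c)/p) and the circumradius is cR = abc / sqrt((a+b+c)(a+b-c)(a+c-b)(b+c-a)). A direct translation with a 3-4-5 check:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Inradius from side lengths: r = S / p = sqrt((p-a)(p-b)(p-c)/p).
double inradius(double a, double b, double c) {
    double p = (a + b + c) * 0.5; // semiperimeter
    return sqrt((p - a) * (p - b) * (p - c) / p);
}

// Circumradius: R = abc / (4S), with 16 S^2 expanded by Heron's formula.
double circumradius(double a, double b, double c) {
    return a * b * c /
        sqrt((a + b + c) * (a + b - c) * (a + c - b) * (b + c - a));
}
```

For the 3-4-5 right triangle: area 6, p = 6, so iR = 1 and cR = hypotenuse/2 = 2.5.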

edge &e = g[u][i]; vi seen(n, -1), path(n), par(n); void dfs(int u)


if (e.w <= e.flow) continue; seen[r] = r; {
int v = e.to; vector<Edge> Q(n), in(n, {-1,-1}), comp; while(g[u].size())
if (d[v] == d[u] + 1) { deque<tuple<int, int, vector<Edge>>> cycs; {
long long nw = dfs(v, min(flow, e.w - e.flow)); rep(s,0,n) { int v = g[u].back();
if (nw > 0) { int u = s, qi = 0, w; g[u].pop_back();
e.flow += nw; while (seen[u] < 0) { dfs(v);
g[v][e.rev].flow -= nw; if (!heap[u]) return {-1,{}}; }
return nw; Edge e = heap[u]->top(); path.push_back(u);
} heap[u]->delta -= e.w, pop(heap[u]); }
} Q[qi] = e, path[qi++] = u, seen[u] = s;
} res += e.w, u = uf.find(e.a); bool getPath(){
return 0; if (seen[u] == s) { /// found cycle, int ctEdges = 0;
} contract vector<int> outDeg, inDeg;
long long max_flow(int _s, int _t) { Node* cyc = 0; outDeg = inDeg = vector<int> (n + 1, 0);
s = _s; t = _t; long long flow = 0; int end = qi, time = uf.time(); for(int i = 1; i <= n; i++)
while (bfs()) { do cyc = merge(cyc, heap[w = {
done.assign(n, 0); while (long long nw = dfs(s, inf)) path[--qi]]); ctEdges += g[i].size();
flow += nw; while (uf.join(u, w)); outDeg[i] += g[i].size();
} u = uf.find(u), heap[u] = cyc, for(auto &u:g[i])
flow_through.assign(mxid + 10, 0); seen[u] = -1; inDeg[u]++;
for(int i = 0; i < n; i++) for(auto e : g[i]) if(e.id >= 0) cycs.push_front({u, time, {&Q[qi], }
flow_through[e.id] = e.flow; &Q[end]}}); int ctMiddle = 0, src = 1;
return flow; } for(int i = 1; i <= n; i++)
} } {
}; rep(i,0,qi) in[uf.find(Q[i].b)] = Q[i]; if(abs(inDeg[i] - outDeg[i]) > 1)
} return 0;
if(inDeg[i] == outDeg[i])
for (auto& [u,t,comp] : cycs) { // restore sol ctMiddle++;
5.2 Directed MST (optional) if(outDeg[i] > inDeg[i])
uf.rollback(t); src = i;
Edge inEdge = in[u]; }
struct Edge { int a, b; ll w; }; for (auto& e : comp) in[uf.find(e.b)] = e; if(ctMiddle != n && ctMiddle + 2 != n)
struct Node { /// lazy skew heap node in[uf.find(inEdge.b)] = inEdge; return 0;
Edge key; } dfs(src);
Node *l, *r; rep(i,0,n) par[i] = in[i].a; reverse(path.begin(), path.end());
ll delta; return {res, par}; return (path.size() == ctEdges + 1);
void prop() { } }
key.w += delta; };
if (l) l->delta += delta;
if (r) r->delta += delta;
delta = 0; 5.3 Eulerian Path
} 5.4 Gomory Hu
Edge top() { prop(); return key; }
}; struct DirectedEulerPath
Node *merge(Node *a, Node *b) { { #include "PushRelabel.cpp"
if (!a || !b) return a ?: b; int n;
a->prop(), b->prop(); vector<vector<int> > g; typedef array<ll, 3> Edge;
if (a->key.w > b->key.w) swap(a, b); vector<int> path; vector<Edge> gomoryHu(int N, vector<Edge> ed) {
swap(a->l, (a->r = merge(b, a->r))); vector<Edge> tree;
return a; void init(int _n){ vi par(N);
} n = _n; rep(i,1,N) {
void pop(Node*& a) { a->prop(); a = merge(a->l, a->r); } g = vector<vector<int> > (n + 1, vector<int> ()); PushRelabel D(N); // Dinic also works
path.clear(); for (Edge t : ed) D.addEdge(t[0], t[1], t[2],
pair<ll, vi> dmst(int n, int r, vector<Edge>& g) { } t[2]);
RollbackUF uf(n); tree.push_back({i, par[i], D.calc(i, par[i])});
vector<Node*> heap(n); void add_edge(int u, int v){ rep(j,i+1,N)
for (Edge e : g) heap[e.b] = merge(heap[e.b], new g[u].push_back(v); if (par[j] == par[i] &&
Node{e}); } D.leftOfMinCut(j)) par[j] = i;
ll res = 0; }
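The recursive dfs in DirectedEulerPath can overflow the call stack on large graphs. An equivalent iterative Hierholzer sketch (a hedged standalone version, assuming a directed Eulerian path starting at src exists; it consumes its own copy of the adjacency lists):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Iterative Hierholzer: each edge is popped exactly once; a vertex is emitted
// when it has no unused outgoing edges left, then the emitted order is reversed.
vector<int> euler_path(vector<vector<int>> g, int src) {
    vector<int> st = {src}, path;
    while (!st.empty()) {
        int u = st.back();
        if (g[u].empty()) {
            path.push_back(u);
            st.pop_back();
        } else {
            st.push_back(g[u].back());
            g[u].pop_back();
        }
    }
    reverse(path.begin(), path.end());
    return path;
}
```

On edges 0->1, 1->2, 2->0, 0->3 this yields the path 0 1 2 0 3 (5 vertices = edges + 1).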

return tree; return ans; finish = 0;


} } }
}; void findAugPath() {
int32_t main() { while (!q.empty()) {
ios_base::sync_with_stdio(0); int u = q.front();
5.5 HopCroft Karp cin.tie(0); q.pop();
int n, m, q; for (int v = 1; v <= n; ++v) if (!trace[v]) {
cin >> n >> m >> q; long long w = getC(u, v);
#include<bits/stdc++.h> HopcroftKarp M(n, m); if (!w) {
using namespace std; while (q--) { trace[v] = u;
const int N = 3e5 + 9; int u, v; if (!r[v]) {
struct HopcroftKarp { cin >> u >> v; finish = v;
static const int inf = 1e9; M.add_edge(u, v); return;
int n; } }
vector<int> l, r, d; cout << M.maximum_matching() << ’\n’; q.push(r[v]);
vector<vector<int>> g; return 0; }
HopcroftKarp(int _n, int _m) { } if (d[v] > w) {
n = _n; int p = _n + _m + 1; d[v] = w;
g.resize(p); l.resize(p, 0); r.resize(p, 0); d.resize(p, 0); arg[v] = u;
} }
void add_edge(int u, int v) { 5.6 Hungarian }
g[u].push_back(v + n); //right id is increased by n, so is }
l[u] }
} #include<bits/stdc++.h> void subX_addY() {
bool bfs() { using namespace std; long long delta = inf;
queue<int> q; const int N = 509; for (int v = 1; v <= n; ++v) if (trace[v] == 0 && d[v] <
for (int u = 1; u <= n; u++) { /* Complexity: O(n^3) but optimized delta) {
if (!l[u]) d[u] = 0, q.push(u); It finds minimum cost maximum matching. delta = d[v];
else d[u] = inf; For finding maximum cost maximum matching }
} add -cost and return -matching() // Rotate
d[0] = inf; 1-indexed */ fx[start] += delta;
while (!q.empty()) { struct Hungarian { for (int v = 1; v <= n; ++v) if(trace[v]) {
int u = q.front(); long long c[N][N], fx[N], fy[N], d[N]; int u = r[v];
q.pop(); int l[N], r[N], arg[N], trace[N]; fy[v] -= delta;
for (auto v : g[u]) { queue<int> q; fx[u] += delta;
if (d[r[v]] == inf) { int start, finish, n; } else d[v] -= delta;
d[r[v]] = d[u] + 1; const long long inf = 1e18; for (int v = 1; v <= n; ++v) if (!trace[v] && !d[v]) {
q.push(r[v]); Hungarian() {} trace[v] = arg[v];
} Hungarian(int n1, int n2): n(max(n1, n2)) { if (!r[v]) {
} for (int i = 1; i <= n; ++i) { finish = v;
} fy[i] = l[i] = r[i] = 0; return;
return d[0] != inf; for (int j = 1; j <= n; ++j) c[i][j] = inf; // make it 0 }
} for maximum cost matching (not necessarily with max q.push(r[v]);
bool dfs(int u) { count of matching) }
if (!u) return true; } }
for (auto v : g[u]) { } void Enlarge() {
if(d[r[v]] == d[u] + 1 && dfs(r[v])) { void add_edge(int u, int v, long long cost) { do {
l[u] = v; c[u][v] = min(c[u][v], cost); int u = trace[finish];
r[v] = u; } int nxt = l[u];
return true; inline long long getC(int u, int v) { l[u] = finish;
} return c[u][v] - fx[u] - fy[v]; r[finish] = u;
} } finish = nxt;
d[u] = inf; void initBFS() { } while (finish);
return false; while (!q.empty()) q.pop(); }
} q.push(start); long long maximum_matching() {
int maximum_matching() { for (int i = 0; i <= n; ++i) trace[i] = 0; for (int u = 1; u <= n; ++u) {
int ans = 0; for (int v = 1; v <= n; ++v) { fx[u] = c[u][1];
while (bfs()) { d[v] = getC(start, v); for (int v = 1; v <= n; ++v) {
for(int u = 1; u <= n; u++) if (!l[u] && dfs(u)) ans++; arg[v] = start; fx[u] = min(fx[u], c[u][v]);
} } }
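Hopcroft-Karp above runs in O(E sqrt(V)); when V*E is small, plain Kuhn augmenting paths are much shorter to code. A hedged standalone sketch (names are illustrative, this is not the HopcroftKarp struct above):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Kuhn's augmenting-path matching. g[u] lists right-side vertices adjacent to
// left vertex u (both sides 0-indexed). Returns maximum matching size, O(V*E).
struct Kuhn {
    vector<vector<int>> g;
    vector<int> match;   // match[v] = left vertex matched to right vertex v
    vector<bool> used;   // right vertices visited in the current augmentation
    bool tryKuhn(int u) {
        for (int v : g[u]) {
            if (used[v]) continue;
            used[v] = true;
            if (match[v] == -1 || tryKuhn(match[v])) {
                match[v] = u;
                return true;
            }
        }
        return false;
    }
    int solve(vector<vector<int>> adj, int m) { // m = size of the right side
        g = move(adj);
        match.assign(m, -1);
        int res = 0;
        for (int u = 0; u < (int)g.size(); u++) {
            used.assign(m, false);
            if (tryKuhn(u)) res++;
        }
        return res;
    }
};
```

With left vertices {0,1,2}, right vertices {0,1} and edges 0-{0}, 1-{0,1}, 2-{1}, the maximum matching has size 2.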

} MCMF(int _n) { // 0-based indexing // potential(v) = min({potential[u] + cost[u][v]}) for


for (int v = 1; v <= n; ++v) { n = _n + 10; g.assign(n, vector<int> ()); neg = false; mxid each u -> v and potential[s] = 0
fy[v] = c[1][v] - fx[1]; = 0; d.assign(n, inf);
for (int u = 1; u <= n; ++u) { } d[s] = 0;
fy[v] = min(fy[v], c[u][v] - fx[u]); void add_edge(int u, int v, T cap, T cost, int id = -1, bool bool relax = true;
} directed = true) { for (int i = 0; i < n && relax; i++) {
} if(cost < 0) neg = true; relax = false;
for (int u = 1; u <= n; ++u) { g[u].push_back(e.size()); e.push_back(edge(u, v, cap, cost, for (int u = 0; u < n; u++) {
start = u; id)); for (int k = 0; k < (int)g[u].size(); k++) {
initBFS(); g[v].push_back(e.size()); e.push_back(edge(v, u, 0, -cost, int id = g[u][k]; int v = e[id].v;
while (!finish) { -1)); T cap = e[id].cap, w = e[id].cost;
findAugPath(); mxid = max(mxid, id); if (d[v] > d[u] + w && cap > 0) {
if (!finish) subX_addY(); if(!directed) add_edge(v, u, cap, cost, -1, true); d[v] = d[u] + w;
} } relax = true;
Enlarge(); bool dijkstra() { }
} par.assign(n, -1); }
long long ans = 0; d.assign(n, inf); }
for (int i = 1; i <= n; ++i) { priority_queue<pair<T, T>, vector<pair<T, T>>, }
if (c[i][l[i]] != inf) ans += c[i][l[i]]; greater<pair<T, T>> > q; for(int i = 0; i < n; i++) if(d[i] < inf) potential[i] =
else l[i] = 0; d[s] = 0; d[i];
} q.push(pair<T, T>(0, s)); }
return ans; while (!q.empty()) { while (flow < goal && dijkstra()) flow += send_flow(t, goal
} int u = q.top().second; T nw = q.top().first; q.pop(); - flow);
}; if(nw != d[u]) continue; flow_through.assign(mxid + 10, 0);
for (int i = 0; i < (int)g[u].size(); i++) { for (int u = 0; u < n; u++) {
int id = g[u][i]; int v = e[id].v; for (auto v : g[u]) {
T cap = e[id].cap; if (e[v].id >= 0) flow_through[e[v].id] = e[v ^ 1].cap;
5.7 Konig’s Theorem T w = e[id].cost + potential[u] - potential[v]; }
if (d[u] + w < d[v] && cap > 0) { }
In any bipartite graph, the number of edges in a maximum matching d[v] = d[u] + w; par[v] = id; return make_pair(flow, cost);
equals the number of vertices in a minimum vertex cover. q.push(pair<T, T>(d[v], v)); }
} };
}
5.8 MCMF }
for (int i = 0; i < n; i++)
if (d[i] < inf) d[i] += (potential[i] - potential[s]); 5.9 Manhattan MST
#include<bits/stdc++.h> for (int i = 0; i < n; i++)
using namespace std; if (d[i] < inf) potential[i] = d[i];
const int N = 3e5 + 9; return d[t] != inf; // for max flow min cost struct point {
//Works for both directed, undirected and with negative cost too // return d[t] <= 0; // for min cost flow long long x, y;
//doesn’t work for negative cycles } };
//for undirected edges just make the directed flag false T send_flow(int v, T cur) {
//Complexity: O(min(E^2 *V log V, E logV * flow)) if(par[v] == -1) return cur; // Returns a list of edges in the format (weight, u, v).
using T = long long; int id = par[v]; int u = e[id].u; // Passing this list to Kruskal algorithm will give the
const T inf = 1LL << 61; T w = e[id].cost; T f = send_flow(u, min(cur, e[id].cap)); Manhattan MST.
struct MCMF { cost += f * w; vector<tuple<long long, int, int>>
struct edge { e[id].cap -= f; e[id ^ 1].cap += f; manhattan_mst_edges(vector<point> ps) {
int u, v, id; return f; vector<int> ids(ps.size());
T cap, cost; } iota(ids.begin(), ids.end(), 0);
edge(int _u, int _v, T _cap, T _cost, int _id) { u = _u; v //returns {maxflow, mincost} vector<tuple<long long, int, int>> edges;
= _v; cap = _cap; cost = _cost; id = _id; } pair<T, T> solve(int _s, int _t, T goal = inf) { for (int rot = 0; rot < 4; rot++) { // for every rotation
}; s = _s; t = _t; sort(ids.begin(), ids.end(), [&](int i, int j){
int n, s, t, mxid; flow = 0, cost = 0; return (ps[i].x + ps[i].y) < (ps[j].x + ps[j].y);
T flow, cost; potential.assign(n, 0); });
vector<vector<int>> g; if (neg) { map<int, int, greater<int>> active; // (xs, id)
vector<edge> e; // Run Bellman-Ford to find starting potential on the for (auto i : ids) {
vector<T> d, potential, flow_through; starting graph for (auto it = active.lower_bound(ps[i].x); it !=
vector<int> par; // If the starting graph (before pushing flow in the active.end();
bool neg; residual graph) is a DAG, active.erase(it++)) {
MCMF() {} // then this can be calculated in O(V + E) using DP: int j = it->second;
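The Manhattan MST sweep above relies on the rotation identity: Manhattan distance equals Chebyshev distance after the transform (x, y) -> (x + y, x - y), which is why four rotations of one sweep cover all candidate edges. A small sanity check of that identity:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

ll manhattan(ll x1, ll y1, ll x2, ll y2) {
    return llabs(x1 - x2) + llabs(y1 - y2);
}

// |dx| + |dy| = max(|d(x+y)|, |d(x-y)|): the 45-degree rotation turns the
// L1 ball (a diamond) into the L-infinity ball (a square).
ll chebyshev_after_rotate(ll x1, ll y1, ll x2, ll y2) {
    return max(llabs((x1 + y1) - (x2 + y2)), llabs((x1 - y1) - (x2 - y2)));
}
```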

if (ps[i].x - ps[i].y > ps[j].x - ps[j].y) break; 5.12 Push Relabel else ++cur[u];
assert(ps[i].x >= ps[j].x && ps[i].y >= ps[j].y); }
edges.push_back({(ps[i].x - ps[j].x) + (ps[i].y }
- ps[j].y), i, j}); struct PushRelabel { bool leftOfMinCut(int a) { return H[a] >= sz(g); }
} struct Edge { };
active[ps[i].x] = i; int dest, back;
} ll f, c;
for (auto &p : ps) { // rotate };
if (rot & 1) p.x *= -1; vector<vector<Edge>> g; 5.13 Tarjan SCC
else swap(p.x, p.y); vector<ll> ec;
} vector<Edge*> cur;
} vector<vi> hs; vi H; const int N = 20002;
return edges; PushRelabel(int n) : g(n), ec(n), cur(n), hs(2*n), H(n) struct tarjan_scc {
} {} int scc[MN], low[MN], d[MN], stacked[MN];
int ticks, current_scc;
void addEdge(int s, int t, ll cap, ll rcap=0) { deque<int> s; // used as stack
if (s == t) return; tarjan_scc() {}
5.10 Minimum Path Cover in DAG g[s].push_back({t, sz(g[t]), 0, cap}); void init() {
g[t].push_back({s, sz(g[s])-1, 0, rcap}); memset(scc, -1, sizeof(scc));
Given a directed acyclic graph G = (V, E), we are to find the } memset(d, -1, sizeof(d));
minimum number of vertex-disjoint paths to cover each vertex in memset(stacked, 0, sizeof(stacked));
V. void addFlow(Edge& e, ll f) { s.clear();
We can construct a bipartite graph G′ = (V out ∪ V in, E ′ ) Edge &back = g[e.dest][e.back]; ticks = current_scc = 0;
if (!ec[e.dest] && f) }
from G, where :
hs[H[e.dest]].push_back(e.dest); void compute(vector<vector<int>> &g, int u) {
e.f += f; e.c -= f; ec[e.dest] += f; d[u] = low[u] = ticks++;
back.f -= f; back.c += f; ec[back.dest] -= f; s.push_back(u);
V out = {v ∈ V : v has positive out − degree} } stacked[u] = true;
ll calc(int s, int t) { for (int i = 0; i < g[u].size(); i++) {
V in = {v ∈ V : v has positive in − degree} int v = g[u][i];
int v = sz(g); H[s] = v; ec[t] = 1;
E ′ = {(u, v) ∈ V out × V in : (u, v) ∈ E} vi co(2*v); co[0] = v-1; if (d[v] == -1) compute(g, v);
rep(i,0,v) cur[i] = g[i].data(); if (stacked[v]) low[u] = min(low[u], low[v]);
Then it can be shown, via König’s theorem, that G’ has a for (Edge& e : g[s]) addFlow(e, e.c); }
matching of size m if and only if there exists n−m vertex-disjoint if (d[u] == low[u]) {
paths that cover each vertex in G, where n is the number of ver- for (int hi = 0;;) { int v;
tices in G and m is the maximum cardinality bipartite mathching while (hs[hi].empty()) if (!hi--) return do {
-ec[s]; v = s.back(); s.pop_back();
in G’.
int u = hs[hi].back(); hs[hi].pop_back(); stacked[v] = false;
while (ec[u] > 0) // discharge u scc[v] = current_scc;
Therefore, the problem can be solved by finding the maximum if (cur[u] == g[u].data() + } while (u != v);
cardinality matching in G’ instead. sz(g[u])) { current_scc++;
NOTE: If the paths are note necesarily disjoints, find the H[u] = 1e9; }
transitive closure and solve the problem for disjoint paths. for (Edge& e : g[u]) if }
(e.c && H[u] > };
H[e.dest]+1)
5.11 Planar Graph (Euler) H[u] = H[e.dest]+1,
cur[u] = &e;
Euler’s formula states that if a finite, connected, planar graph is if (++co[H[u]], !--co[hi] 5.14 Topological Sort
drawn in the plane without any edge intersections, and v is the && hi < v)
number of vertices, e is the number of edges and f is the number rep(i,0,v) if (hi <
of faces (regions bounded by edges, including the outer, infinitely H[i] && H[i] < vi topoSort(const vector<vi>& gr) {
v) vi indeg(sz(gr)), ret;
large region), then: --co[H[i]], for (auto& li : gr) for (int x : li) indeg[x]++;
H[i] = queue<int> q; // use priority_queue for lexic. largest
f +v =e+2 v + 1; ans.
hi = H[u]; rep(i,0,sz(gr)) if (indeg[i] == 0) q.push(i);
It can be extended to non connected planar graphs with c
} else if (cur[u]->c && H[u] == while (!q.empty()) {
connected components: H[cur[u]->dest]+1) int i = q.front(); // top() for priority queue
addFlow(*cur[u], min(ec[u], ret.push_back(i);
f +v =e+c+1 cur[u]->c)); q.pop();
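The topoSort routine above is Kahn's algorithm; restated standalone with a cycle check (a result with fewer than n vertices means the graph has a cycle):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Kahn's algorithm: repeatedly remove a vertex of in-degree 0.
// Returns vertices in topological order; shorter than n iff a cycle exists.
vector<int> topo_sort(const vector<vector<int>>& g) {
    int n = g.size();
    vector<int> indeg(n), ret;
    for (auto& li : g)
        for (int x : li) indeg[x]++;
    queue<int> q; // use priority_queue for the lexicographically smallest order
    for (int i = 0; i < n; i++)
        if (indeg[i] == 0) q.push(i);
    while (!q.empty()) {
        int u = q.front(); q.pop();
        ret.push_back(u);
        for (int v : g[u])
            if (--indeg[v] == 0) q.push(v);
    }
    return ret;
}
```

For edges 0->1, 0->2, 1->3, 2->3 the order is 0 1 2 3; the 2-cycle {0,1} yields an empty result.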

for (int x : gr[i]) vector <int> adj_vt[N]; for (int i = 1; i < n; ++i) {
if (--indeg[x] == 0) q.push(x); int vt_root(vector <int> &ver) { int u, v;
} sort(ver.begin(), ver.end(), [&] (const int& x, const int& cin >> u >> v;
return ret; y) { adj[u].push_back(v);
} return st[x] < st[y]; adj[v].push_back(u);
}); }
int m = ver.size();
for (int i = 0; i + 1 < m; ++i) { dfs(1, 0);
5.15 Virtual Tree int new_ver = lca(ver[i], ver[i + 1]);
ver.push_back(new_ver); for (int _q = 1; _q <= q; ++_q) {
} int k;
/* sort(ver.begin(), ver.end(), [&] (const int& x, const int& cin >> k;
Used to solve problem with set of vertices y) {
return st[x] < st[y];
https://www.hackerrank.com/contests/hourrank-15/challenges/kittys-calculations-on-a-tree vector <int> ver;
*/ }); tot = 0;
ver.resize(unique(ver.begin(), ver.end()) - ver.begin()); while (k--) {
const int MOD = 1e9 + 7; int x; cin >> x;
const int N = 2e5 + 5; stack <int> stk; sz[x] = x;
const int K = 18; stk.push(ver[0]); tot = (tot + x) % MOD;
m = ver.size(); ver.push_back(x);
vector <int> adj[N]; for (int i = 1; i < m; ++i) { }
int st[N], en[N], dep[N]; int u = ver[i];
int up[K][N]; while (!stk.empty() && !inside(stk.top(), u)) // check int rt = vt_root(ver);
int timer = 0; if v is in u’s subtree solve(rt, 0);
stk.pop(); cout << ans << "\n";
// LCA adj_vt[stk.top()].push_back(u);
void dfs(int u, int p) { stk.push(u); for (int x : ver) {
st[u] = ++timer; } sz[x] = 0;
for (int v : adj[u]) { return ver[0]; adj_vt[x].clear();
if (v == p) continue; } }
dep[v] = dep[u] + 1; ans = 0;
up[0][v] = u; int sz[N]; }
for (int i = 1; i < K; ++i) int tot; // total special vertices return 0;
up[i][v] = up[i - 1][up[i - 1][v]]; ll ans; }
dfs(v, u); void solve(int u, int p) {
} for (int v : adj_vt[u]) {
en[u] = timer; if (v == p) continue;
solve(v, u);
}
sz[u] = (sz[u] + sz[v]) % MOD; 6 Linear Algebra
int lca(int u, int v) {
if (dep[u] != dep[v]) { }
if (dep[u] < dep[v]) swap(u, v); 6.1 Matrix Determinant
int d = dep[u] - dep[v]; for (int v : adj_vt[u]) {
for (int i = K - 1; i >= 0; --i) if (v == p) continue;
if (d & (1 << i)) int w = dep[v] - dep[u]; double det(vector<vector<double>>& a) {
u = up[i][u]; int mul = 1LL * sz[v] * (tot - sz[v] + MOD) % MOD; int n = sz(a); double res = 1;
} ans += 1LL * w * mul % MOD; rep(i,0,n) {
if (u == v) return u; ans %= MOD; int b = i;
for (int i = K - 1; i >= 0; --i) { } rep(j,i+1,n) if (fabs(a[j][i]) > fabs(a[b][i]))
if (up[i][u] != up[i][v]) { } b = j;
u = up[i][u]; if (i != b) swap(a[i], a[b]), res *= -1;
v = up[i][v]; signed main() { res *= a[i][i];
} cin.tie(0) -> sync_with_stdio(0); if (res == 0) return 0;
} rep(j,i+1,n) {
return up[0][u]; #ifdef JASPER double v = a[j][i] / a[i][i];
} freopen("in1", "r", stdin); if (v != 0) rep(k,i+1,n) a[j][k] -= v *
#endif a[i][k];
bool inside(int u, int v) { }
return st[u] <= st[v] && en[v] <= en[u]; int n, q; }
} cin >> n >> q; return res;
/// }
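A standalone restatement of det() above (same Gaussian elimination with partial pivoting, expanded without the rep macros) together with a numeric check:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Determinant by Gaussian elimination with partial pivoting, O(n^3).
// Each row swap flips the sign; the determinant is the product of pivots.
double det(vector<vector<double>> a) {
    int n = a.size();
    double res = 1;
    for (int i = 0; i < n; i++) {
        int b = i;
        for (int j = i + 1; j < n; j++)            // pick the largest pivot
            if (fabs(a[j][i]) > fabs(a[b][i])) b = j;
        if (i != b) { swap(a[i], a[b]); res = -res; }
        res *= a[i][i];
        if (res == 0) return 0;
        for (int j = i + 1; j < n; j++) {          // eliminate column i below
            double v = a[j][i] / a[i][i];
            if (v != 0)
                for (int k = i + 1; k < n; k++) a[j][k] -= v * a[i][k];
        }
    }
    return res;
}
```

det([[1,2],[3,4]]) = 1*4 - 2*3 = -2; a diagonal matrix gives the product of its diagonal.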

6.2 PolyRoots 7 Maths for (; j & k; k >>= 1) {


j ^= k;
7.1 Factorial Approximate }
j ^= k;
#include "Polynomial.cpp" if (i < j) swap(a[i], a[j]);
Approximate Factorial:
}
vector<double> polyRoots(Poly p, double xmin, double xmax) { √ n
if (sz(p.a) == 2) { return {-p.a[0]/p.a[1]}; } n! ~ sqrt(2*pi*n) * (n/e)^n (1)
vector<double> ret;
double ang = (2.0 * PI / len) * (inv? -1 : 1);
Poly der = p; cd wlen(cos(ang), sin(ang));
der.diff(); 7.2 Factorial
auto dr = polyRoots(der, xmin, xmax); for (int i = 0; i < n; i += len) {
dr.push_back(xmin-1); n 123 4 5 6 7 8 9 10
cd w(1);
dr.push_back(xmax+1); n! 1 2 6 24 120 720 5040 40320 362880 3628800 for (int j = 0; j < len / 2; ++j) {
sort(all(dr)); n 11 12 13 14 15 16 17 cd u = a[i + j];
rep(i,0,sz(dr)-1) { n! 4.0e7 4.8e8 6.2e9 8.7e10 1.3e12 2.1e13 3.6e14 cd v = a[i + j + len / 2] * w;
double l = dr[i], h = dr[i+1]; a[i + j] = u + v;
bool sign = p(l) > 0; n 20 25 30 40 50 100 150 171
a[i + j + len / 2] = u - v;
if (sign ^ (p(h) > 0)) { n! 2e18 2e25 3e32 8e47 3e64 9e157 6e262 >DBL MAX w = w * wlen;
rep(it,0,60) { // while (h - l > 1e-8) }
double m = (l + h) / 2, f = p(m); }
if ((f <= 0) ^ sign) l = m; 7.3 Fast Fourier Transform
}
else h = m;
} if (inv) {
ret.push_back((l + h) / 2); // Note:
// - When convert double -> int, use my_round(x) which handles for (cd &x : a) {
} x.a /= n;
} negative numbers
// correctly. x.b /= n;
return ret; }
} //
// Tested: }
// - https://open.kattis.com/problems/polymul2 }
// - https://www.spoj.com/problems/TSUM/
// - (bigint mul) https://www.spoj.com/problems/VFMUL/ vector <ll> fft(vector <ll>& a, vector <ll>& b) {
// - (bigint mul) https://www.spoj.com/problems/MUL/ vector <cd> fa(a.begin(), a.end());
// - (string matching) https://www.spoj.com/problems/MAXMATCH vector <cd> fb(b.begin(), b.end());
//
// FFT {{{ int n = 1;
// Source: while (n < (int) (fa.size() + fb.size())) n <<= 1;
6.3 Polynomial https://github.com/kth-competitive-programming/kactl/blob/main/content/numerical/FastFourierTransform.h
class FFT { fa.resize(n);
public: fb.resize(n);
struct cd { vector <cd> fc(n);
struct Poly {
vector<double> a; double a, b;
cd(double _a = 0, double _b = 0) : a(_a), b(_b) {} dft(fa, false);
double operator()(double x) const {
dft(fb, false);
double val = 0;
for (int i = sz(a); i--;) (val *= x) += a[i]; const cd operator + (const cd &c) const { return cd(a +
c.a, b + c.b); } for (int i = 0; i < n; ++i)
return val;
const cd operator - (const cd &c) const { return cd(a - fc[i] = fa[i] * fb[i];
}
void diff() { c.a, b - c.b); }
const cd operator * (const cd &c) const { return cd(a * dft(fc, true);
rep(i,1,sz(a)) a[i-1] = i*a[i];
a.pop_back(); c.a - b * c.b, a * c.b + b * c.a); }
}; vector <ll> res(n);
}
const double PI = acos(-1); for (int i = 0; i < n; ++i)
void divroot(double x0) {
res[i] = 1LL * (round(fc[i].a) > 0.5);
double b = a.back(), c; a.back() = 0;
for(int i=sz(a)-1; i--;) c = a[i], a[i] = void dft(vector <cd>& a, bool inv) {
int n = (int) a.size(); while (!res.empty() && res.back() == 0)
a[i+1]*x0+b, b=c;
if (n == 1) res.pop_back();
a.pop_back();
} return;
return res;
};
for (int i = 1, j = 0; i < n; ++i) { }
int k = n >> 1; };
PTIT.Nutriboost 18
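The dft/fft pair in 7.3 can be cross-checked against a minimal textbook recursive FFT. This standalone sketch (the names fft_rec and poly_mul are ours, not the notebook's) is slower than the iterative version but easy to verify; it is only safe while the result coefficients stay well inside double precision:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef complex<double> cplx;

// In-place Cooley-Tukey FFT; inv = true performs the inverse transform
// (without the final 1/n scaling, which the caller applies).
void fft_rec(vector<cplx>& a, bool inv) {
    int n = a.size();
    if (n == 1) return;
    vector<cplx> even(n / 2), odd(n / 2);
    for (int i = 0; i < n / 2; i++) even[i] = a[2 * i], odd[i] = a[2 * i + 1];
    fft_rec(even, inv); fft_rec(odd, inv);
    double ang = 2 * acos(-1.0) / n * (inv ? -1 : 1);
    for (int i = 0; i < n / 2; i++) {
        cplx w = polar(1.0, ang * i) * odd[i];
        a[i] = even[i] + w;
        a[i + n / 2] = even[i] - w;
    }
}

// Multiplies two integer polynomials (a[0] is the constant term).
vector<long long> poly_mul(const vector<long long>& a, const vector<long long>& b) {
    int n = 1;
    while (n < (int)(a.size() + b.size())) n <<= 1;
    vector<cplx> fa(a.begin(), a.end()), fb(b.begin(), b.end());
    fa.resize(n); fb.resize(n);
    fft_rec(fa, false); fft_rec(fb, false);
    for (int i = 0; i < n; i++) fa[i] *= fb[i];       // pointwise product
    fft_rec(fa, true);
    vector<long long> res(a.size() + b.size() - 1);
    for (size_t i = 0; i < res.size(); i++) res[i] = llround(fa[i].real() / n);
    return res;
}
```

For example, (1 + 2x)(3 + 4x) = 3 + 10x + 8x².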
7.4 General purpose numbers

Bernoulli numbers
EGF of Bernoulli numbers is B(t) = t/(e^t - 1) (FFT-able).
B[0, ...] = [1, -1/2, 1/6, 0, -1/30, 0, 1/42, ...]

Sums of powers:
sum_{i=1..n} i^m = (1/(m+1)) * sum_{k=0..m} C(m+1, k) * B_k * (n+1)^(m+1-k)

Euler-Maclaurin formula for infinite sums:
sum_{i=m..inf} f(i) = integral_{m..inf} f(x) dx - sum_{k=1..inf} (B_k / k!) * f^(k-1)(m)
                    ~ integral_{m..inf} f(x) dx + f(m)/2 - f'(m)/12 + f'''(m)/720 + O(f^(5)(m))

Stirling numbers of the first kind
Number of permutations on n items with k cycles.
c(n, k) = c(n-1, k-1) + (n-1) * c(n-1, k),  c(0, 0) = 1
sum_{k=0..n} c(n, k) x^k = x(x+1)...(x+n-1)
c(8, k) = 0, 5040, 13068, 13132, 6769, 1960, 322, 28, 1

Stirling numbers of the second kind
Partitions of n distinct elements into exactly k groups.
S(n, k) = S(n-1, k-1) + k * S(n-1, k)
S(n, 1) = S(n, n) = 1
S(n, k) = (1/k!) * sum_{j=0..k} (-1)^(k-j) * C(k, j) * j^n

Eulerian numbers
Number of permutations pi in S_n in which exactly k elements are greater than the previous element. k j:s s.t. pi(j) > pi(j+1), k+1 j:s s.t. pi(j) >= j, k j:s s.t. pi(j) > j.
E(n, k) = (n-k) * E(n-1, k-1) + (k+1) * E(n-1, k)
E(n, 0) = E(n, n-1) = 1
E(n, k) = sum_{j=0..k} (-1)^j * C(n+1, j) * (k+1-j)^n

Bell numbers
Total number of partitions of n distinct elements.
B(n) = 1, 1, 2, 5, 15, 52, 203, 877, 4140, 21147, ...
For p prime, B(p^m + n) = m*B(n) + B(n+1)  (mod p)

Labeled unrooted trees
# on n vertices: n^(n-2)
# on k existing trees of size n_i: n_1 * n_2 * ... * n_k * n^(k-2)
# with degrees d_i: (n-2)! / ((d_1 - 1)! ... (d_n - 1)!)

Catalan numbers
C_n = C(2n, n)/(n+1) = C(2n, n) - C(2n, n+1) = (2n)! / ((n+1)! n!)
C_0 = 1,  C_{n+1} = C_n * 2(2n+1)/(n+2),  C_{n+1} = sum_i C_i * C_{n-i}
C_n = 1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862, 16796, 58786, ...
C_n counts: sub-diagonal monotone paths in an n x n grid; strings with n pairs of parentheses, correctly nested; binary trees with n+1 leaves (0 or 2 children); ordered trees with n+1 vertices; ways a convex polygon with n+2 sides can be cut into triangles by connecting vertices with straight lines; permutations of [n] with no 3-term increasing subsequence.

7.5 Lucas Theorem
For non-negative integers m and n and a prime p, the following congruence relation holds:
C(m, n) = prod_{i=0..k} C(m_i, n_i)  (mod p),
where m = m_k p^k + m_{k-1} p^{k-1} + ... + m_1 p + m_0 and n = n_k p^k + n_{k-1} p^{k-1} + ... + n_1 p + n_0 are the base-p expansions of m and n respectively. This uses the convention that C(m, n) = 0 if m < n.

7.6 Math
Number of Spanning Trees
Create an N x N matrix mat, and for each edge a -> b in G, do mat[a][b]--, mat[b][b]++ (and mat[b][a]--, mat[a][a]++ if G is undirected). Remove the ith row and column and take the determinant; this yields the number of directed spanning trees rooted at i (if G is undirected, remove any row/column).

Erdős–Gallai theorem
A simple graph with node degrees d_1 >= ... >= d_n exists iff d_1 + ... + d_n is even and for every k = 1..n,
sum_{i=1..k} d_i <= k(k-1) + sum_{i=k+1..n} min(d_i, k).

7.7 Mobius

/*
mob(n) =  0 if n has a squared prime factor,
         -1 if n has an odd number of distinct prime factors,
          1 if n has an even number of distinct prime factors.
mob(1) = 1;
*/
const int N = 5e5 + 9;
int mob[N];
void mobius() {
    mob[1] = 1;
    for (int i = 2; i < N; i++){
        mob[i]--;
        for (int j = i + i; j < N; j += i) {
            mob[j] -= mob[i];
        }
    }
}

7.8 Multinomial

/**
 * Description: Computes $\displaystyle \binom{k_1 + \dots + k_n}{k_1, k_2, \dots, k_n} = \frac{(\sum k_i)!}{k_1!k_2!...k_n!}$.
 * Status: Tested on kattis:lexicography
 */
#pragma once

long long multinomial(vector<int>& v) {
    long long c = 1, m = v.empty() ? 1 : v[0];
    for (long long i = 1; i < v.size(); i++) {
        for (long long j = 0; j < v[i]; j++) {
            c = c * ++m / (j + 1);
        }
    }
    return c;
}
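Lucas' theorem translates directly into code: reduce both arguments digit by digit in base p and multiply the small binomials. A standalone sketch (the helper names are illustrative, not from the notebook):

```cpp
#include <bits/stdc++.h>
using namespace std;

// C(m, n) mod p for prime p, via Lucas' theorem.
long long lucas_binom(long long m, long long n, long long p) {
    if (n < 0 || n > m) return 0;
    // C(a, b) mod p for a, b < p, computed naively with a Fermat inverse.
    auto small = [&](long long a, long long b) {
        if (b < 0 || b > a) return 0LL;
        long long num = 1, den = 1;
        for (long long i = 0; i < b; i++) {
            num = num * ((a - i) % p) % p;
            den = den * ((i + 1) % p) % p;
        }
        long long inv = 1, e = p - 2, base = den;
        while (e) { if (e & 1) inv = inv * base % p; base = base * base % p; e >>= 1; }
        return num * inv % p;
    };
    long long res = 1;
    while (m > 0 || n > 0) {
        res = res * small(m % p, n % p) % p;  // binomial of one base-p digit pair
        m /= p; n /= p;
    }
    return res;
}
```

For instance C(10, 3) = 120 ≡ 1 (mod 7): in base 7, 10 = (1,3) and 3 = (0,3), so the product is C(3,3)·C(1,0) = 1.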
7.9 Number Theoretic Transform

#include<bits/stdc++.h>
using namespace std;

const int N = 3e5 + 9, mod = 998244353;

struct base {
  double x, y;
  base() { x = y = 0; }
  base(double x, double y): x(x), y(y) { }
};
inline base operator + (base a, base b) { return base(a.x + b.x, a.y + b.y); }
inline base operator - (base a, base b) { return base(a.x - b.x, a.y - b.y); }
inline base operator * (base a, base b) { return base(a.x * b.x - a.y * b.y, a.x * b.y + a.y * b.x); }
inline base conj(base a) { return base(a.x, -a.y); }
int lim = 1;
vector<base> roots = {{0, 0}, {1, 0}};
vector<int> rev = {0, 1};
const double PI = acosl(- 1.0);
void ensure_base(int p) {
  if(p <= lim) return;
  rev.resize(1 << p);
  for(int i = 0; i < (1 << p); i++) rev[i] = (rev[i >> 1] >> 1) + ((i & 1) << (p - 1));
  roots.resize(1 << p);
  while(lim < p) {
    double angle = 2 * PI / (1 << (lim + 1));
    for(int i = 1 << (lim - 1); i < (1 << lim); i++) {
      roots[i << 1] = roots[i];
      double angle_i = angle * (2 * i + 1 - (1 << lim));
      roots[(i << 1) + 1] = base(cos(angle_i), sin(angle_i));
    }
    lim++;
  }
}
void fft(vector<base> &a, int n = -1) {
  if(n == -1) n = a.size();
  assert((n & (n - 1)) == 0);
  int zeros = __builtin_ctz(n);
  ensure_base(zeros);
  int shift = lim - zeros;
  for(int i = 0; i < n; i++) if(i < (rev[i] >> shift)) swap(a[i], a[rev[i] >> shift]);
  for(int k = 1; k < n; k <<= 1) {
    for(int i = 0; i < n; i += 2 * k) {
      for(int j = 0; j < k; j++) {
        base z = a[i + j + k] * roots[j + k];
        a[i + j + k] = a[i + j] - z;
        a[i + j] = a[i + j] + z;
      }
    }
  }
}
//eq = 0: 4 FFTs in total
//eq = 1: 3 FFTs in total
vector<int> multiply(vector<int> &a, vector<int> &b, int eq = 0) {
  int need = a.size() + b.size() - 1;
  int p = 0;
  while((1 << p) < need) p++;
  ensure_base(p);
  int sz = 1 << p;
  vector<base> A, B;
  if(sz > (int)A.size()) A.resize(sz);
  for(int i = 0; i < (int)a.size(); i++) {
    int x = (a[i] % mod + mod) % mod;
    A[i] = base(x & ((1 << 15) - 1), x >> 15);
  }
  fill(A.begin() + a.size(), A.begin() + sz, base{0, 0});
  fft(A, sz);
  if(sz > (int)B.size()) B.resize(sz);
  if(eq) copy(A.begin(), A.begin() + sz, B.begin());
  else {
    for(int i = 0; i < (int)b.size(); i++) {
      int x = (b[i] % mod + mod) % mod;
      B[i] = base(x & ((1 << 15) - 1), x >> 15);
    }
    fill(B.begin() + b.size(), B.begin() + sz, base{0, 0});
    fft(B, sz);
  }
  double ratio = 0.25 / sz;
  base r2(0, - 1), r3(ratio, 0), r4(0, - ratio), r5(0, 1);
  for(int i = 0; i <= (sz >> 1); i++) {
    int j = (sz - i) & (sz - 1);
    base a1 = (A[i] + conj(A[j])), a2 = (A[i] - conj(A[j])) * r2;
    base b1 = (B[i] + conj(B[j])) * r3, b2 = (B[i] - conj(B[j])) * r4;
    if(i != j) {
      base c1 = (A[j] + conj(A[i])), c2 = (A[j] - conj(A[i])) * r2;
      base d1 = (B[j] + conj(B[i])) * r3, d2 = (B[j] - conj(B[i])) * r4;
      A[i] = c1 * d1 + c2 * d2 * r5;
      B[i] = c1 * d2 + c2 * d1;
    }
    A[j] = a1 * b1 + a2 * b2 * r5;
    B[j] = a1 * b2 + a2 * b1;
  }
  fft(A, sz); fft(B, sz);
  vector<int> res(need);
  for(int i = 0; i < need; i++) {
    long long aa = A[i].x + 0.5;
    long long bb = B[i].x + 0.5;
    long long cc = A[i].y + 0.5;
    res[i] = (aa + ((bb % mod) << 15) + ((cc % mod) << 30)) % mod;
  }
  return res;
}
vector<int> pow(vector<int>& a, int p) {
  vector<int> res;
  res.emplace_back(1);
  while(p) {
    if(p & 1) res = multiply(res, a);
    a = multiply(a, a, 1);
    p >>= 1;
  }
  return res;
}
int main() {
  int n, k; cin >> n >> k;
  vector<int> a(10, 0);
  while(k--) {
    int m; cin >> m;
    a[m] = 1;
  }
  vector<int> ans = pow(a, n / 2);
  int res = 0;
  for(auto x: ans) res = (res + 1LL * x * x % mod) % mod;
  cout << res << '\n';
  return 0;
}
//https://codeforces.com/contest/1096/problem/G

7.10 Others

Cycles
Let g_S(n) be the number of n-permutations whose cycle lengths all belong to the set S. Then
sum_{n=0..inf} g_S(n) * x^n / n! = exp( sum_{n in S} x^n / n )

Derangements
Permutations of a set such that none of the elements appear in their original position.
D(n) = (n-1)(D(n-1) + D(n-2)) = n*D(n-1) + (-1)^n = round(n!/e)

Burnside's lemma
Given a group G of symmetries and a set X, the number of elements of X up to symmetry equals
(1/|G|) * sum_{g in G} |X^g|,
where X^g are the elements fixed by g (g.x = x).
If f(n) counts "configurations" (of some sort) of length n, we can ignore rotational symmetry using G = Z_n to get
g(n) = (1/n) * sum_{k=0..n-1} f(gcd(n, k)) = (1/n) * sum_{k|n} f(k) * phi(n/k).

7.11 Primitive Root
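The rotation formula in 7.10, g(n) = (1/n) Σ_{k|n} f(k)·φ(n/k), can be exercised with f(k) = c^k, which counts length-n cyclic strings over c colors up to rotation (necklaces). A brute-force sketch, with names of our choosing:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Number of length-n cyclic strings over c colors, up to rotation:
// g(n) = (1/n) * sum over k|n of c^k * phi(n/k)   (Burnside / orbit counting)
long long necklaces(long long n, long long c) {
    auto phi = [](long long m) {            // Euler's totient, trial division
        long long r = m;
        for (long long i = 2; i * i <= m; i++)
            if (m % i == 0) { while (m % i == 0) m /= i; r -= r / i; }
        if (m > 1) r -= r / m;
        return r;
    };
    long long total = 0;
    for (long long k = 1; k <= n; k++)
        if (n % k == 0) {
            long long pw = 1;
            for (long long t = 0; t < k; t++) pw *= c;  // c^k; fine for small inputs
            total += pw * phi(n / k);
        }
    return total / n;   // the sum is always divisible by n
}
```

For example, there are 6 binary necklaces of length 4 (0000, 0001, 0011, 0101, 0111, 1111).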
7.12 Sieve 1e9 for (int i = beg; i < end; i += prod) {
#include<bits/stdc++.h> copy(pre.begin(), pre.end(), pblock + i);
using namespace std; }
#include<bits/stdc++.h> if (beg == 0) pblock[0] &= 0xFE;
int totient(int n) { for (size_t pi = pbeg; pi < sprimes.size(); ++pi) {
int ans = n; using namespace std;
// credit: min_25 auto& pp = sprimes[pi];
for (int i = 2; i * i <= n; i++) { const int p = pp.p;
if (n % i == 0) { // takes 0.5s for n = 1e9
vector<int> sieve(const int N, const int Q = 17, const int L = for (int t = 0; t < 8; ++t) {
while (n % i == 0) n /= i; int i = pp.pos[t]; const unsigned char m = ~(1 << t);
ans = ans / i * (i - 1); 1 << 15) {
static const int rs[] = {1, 7, 11, 13, 17, 19, 23, 29}; for (; i < end; i += p) pblock[i] &= m;
} pp.pos[t] = i;
} struct P {
P(int p) : p(p) {} }
if (n > 1) ans = ans / n * (n - 1); }
return ans; int p; int pos[8];
}; for (int i = beg; i < end; ++i) {
} for (int m = pblock[i]; m > 0; m &= m - 1) {
int power(int a, int b, int m) { auto approx_prime_count = [] (const int N) -> int {
return N > 60184 ? N / (log(N) - 1.1) primes[psize++] = i * 30 + rs[__builtin_ctz(m)];
int res = 1; }
while (b > 0) { : max(1., N / (log(N) - 1.11)) + 1;
}; }
if (b & 1) res = 1LL * res * a % m; }
a = 1LL * a * a % m; assert(psize <= rsize);
b >>= 1; const int v = sqrt(N), vv = sqrt(v);
vector<bool> isp(v + 1, true); while (psize > 0 && primes[psize - 1] > N) --psize;
} primes.resize(psize);
return res; for (int i = 2; i <= vv; ++i) if (isp[i]) {
for (int j = i * i; j <= v; j += i) isp[j] = false; return primes;
} }
// g is a primitive root modulo p if and only if for any }
integer a such that int32_t main() {
// gcd(a, p) = 1, there exists an integer k such that: g^k = const int rsize = approx_prime_count(N + 30);
vector<int> primes = {2, 3, 5}; int psize = 3; ios_base::sync_with_stdio(0);
a(mod p). cin.tie(0);
// primitive root modulo n exists iff n = 1, 2, 4 or n = p^k or primes.resize(rsize);
int n, a, b; cin >> n >> a >> b;
2 * p^k for some odd prime p auto primes = sieve(n);
int primitive_root(int p) { vector<P> sprimes; size_t pbeg = 0;
int prod = 1; vector<int> ans;
// first check if primitive root exists or not. I have for (int i = b; i < primes.size() && primes[i] <= n; i += a)
omitted this part here for (int p = 7; p <= v; ++p) {
if (!isp[p]) continue; ans.push_back(primes[i]);
vector<int> fact; cout << primes.size() << ’ ’ << ans.size() << ’\n’;
int phi = totient(p), n = phi; if (p <= Q) prod *= p, ++pbeg, primes[psize++] = p;
auto pp = P(p); for (auto x: ans) cout << x << ’ ’; cout << ’\n’;
for (int i = 2; i * i <= n; ++i) { return 0;
if (n % i == 0) { for (int t = 0; t < 8; ++t) {
int j = (p <= Q) ? p : p * p; }
fact.push_back(i); // https://judge.yosupo.jp/problem/enumerate_primes
while (n % i == 0) n /= i; while (j % 30 != rs[t]) j += p << 1;
} pp.pos[t] = j / 30;
} }
if (n > 1) fact.push_back(n); sprimes.push_back(pp);
} 7.13 Sigma Function
for (int res = 2; res <= p; ++res) { // this loop will run at
most (logp ^ 6) times i.e. until a root is found The Sigma Function is defined as:
bool ok = true; vector<unsigned char> pre(prod, 0xFF);
// check if this is a primitive root modulo p for (size_t pi = 0; pi < pbeg; ++pi) { X
for (size_t i = 0; i < fact.size() && ok; ++i) auto pp = sprimes[pi]; const int p = pp.p; σx (n) = dx
ok &= power(res, phi / fact[i], p) != 1; for (int t = 0; t < 8; ++t) { d|n
if (ok) return res; const unsigned char m = ~(1 << t);
for (int i = pp.pos[t]; i < prod; i += p) pre[i] &= m; when x = 0 is called the divisor function, that counts the
} number of positive divisors of n.
return -1; }
} } Now, we are interested in find
int32_t main() { X
ios_base::sync_with_stdio(0); const int block_size = (L + prod - 1) / prod * prod; σ0 (d)
cin.tie(0); vector<unsigned char> block(block_size); unsigned char*
d|n
cout << primitive_root(200003) << ’\n’; pblock = block.data();
return 0; const int M = (N + 29) / 30; If n is written as prime factorization:
}
for (int beg = 0; beg < M; beg += block_size, pblock -= k
// https://cp-algorithms.com/algebra/primitive-root.html Y e
block_size) { n= Pi k
int end = min(M, beg + block_size);
i=1
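The criterion in the comment block above (g is a primitive root mod p iff g^(φ(p)/q) ≠ 1 mod p for every prime q dividing φ(p)) can be checked in isolation for a prime p, where φ(p) = p − 1. A standalone sketch with illustrative names:

```cpp
#include <bits/stdc++.h>
using namespace std;

long long pw(long long a, long long e, long long m) {
    long long r = 1; a %= m;
    for (; e; e >>= 1, a = a * a % m) if (e & 1) r = r * a % m;
    return r;
}

// Smallest primitive root of a prime p (so phi(p) = p - 1).
int prim_root_prime(int p) {
    vector<int> fact;                       // distinct prime factors of p - 1
    int n = p - 1;
    for (int i = 2; i * i <= n; i++)
        if (n % i == 0) { fact.push_back(i); while (n % i == 0) n /= i; }
    if (n > 1) fact.push_back(n);
    for (int g = 2; g < p; g++) {
        bool ok = true;
        for (int q : fact) ok &= pw(g, (p - 1) / q, p) != 1;
        if (ok) return g;                   // g has full order p - 1
    }
    return -1;
}
```

Consistent with the NTT section, the smallest primitive root of 998244353 is 3.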
We can demonstrate that: bool is_leap(int y) { return y % 400 == 0 || (y % 4 == 0 && y % }
100 != 0); }
k // number of days in blocks of years return (z + n) % n;
X Y
σ0 (d) = g(ek + 1) const int p400 = 400*365 + leap_years(400); }
const int p100 = 100*365 + leap_years(100);
d|n i=1 const int p4 = 4*365 + 1;
const int p1 = 365;
where g(x) is the sum of the first x positive numbers: int date_to_days(int d, int m, int y) 9.2 Convolution
{
g(x) = (x ∗ (x + 1))/2 return (y - 1) * 365 + leap_years(y - 1) + (is_leap(y) ? B[m]
: A[m]) + d; typedef long long int LL;
} typedef pair<LL, LL> PLL;
7.14 SuperExp void days_to_date(int days, int &d, int &m, int &y)
{ inline bool is_pow2(LL x) {
bool top100; // are we in the top 100 years of a 400 block? return (x & (x-1)) == 0;
// a^^1 = a -> a^^(k + 1) = a(a^^k) = ? bool top4; // are we in the top 4 years of a 100 block? }
// bool top1; // are we in the top year of a 4 block?
inline int ceil_log2(LL x) {
#define rd(a, b) uniform_int_distribution<ll>(a, b)(rnd) y = 1; int ans = 0;
mt19937 top100 = top4 = top1 = false; --x;
rnd(chrono::steady_clock::now().time_since_epoch().count()); while (x != 0) {
vector <int> phi; y += ((days-1) / p400) * 400; x >>= 1;
int cap(int a, int m) { d = (days-1) % p400 + 1; ans++;
return a < m ? a : a - (a - m) / m * m; }
} if (d > p100*3) top100 = true, d -= 3*p100, y += 300; return ans;
int cal(int b, int t) { else y += ((d-1) / p100) * 100, d = (d-1) % p100 + 1; }
if (phi[t] == 1) return 1;
if (b == 1) return cap(a, phi[t]); if (d > p4*24) top4 = true, d -= 24*p4, y += 24*4; /* Returns the convolution of the two given vectors in time
int c = cal(b - 1, t + 1); else y += ((d-1) / p4) * 4, d = (d-1) % p4 + 1; proportional to n*log(n).
return power(a, c, phi[t]); * The number of roots of unity to use nroots_unity must be set
} if (d > p1*3) top1 = true, d -= p1*3, y += 3; so that the product of the first
void solve() { else y += (d-1) / p1, d = (d-1) % p1 + 1; * nroots_unity primes of the vector nth_roots_unity is greater
int m = 1e8; than the maximum value of the
phi.emplace_back(m); const int *ac = top1 && (!top4 || top100) ? B : A; * convolution. Never use sizes of vectors bigger than 2^24, if
while (phi.back() > 1) { for (m = 1; m < 12; ++m) if (d <= ac[m + 1]) break; you need to change the values of
phi.emplace_back(totient(phi.back())); d -= ac[m]; * the nth roots of unity to appropriate primes for those sizes.
} } */
} vector<LL> convolve(const vector<LL> &a, const vector<LL> &b,
int nroots_unity = 2) {
int N = 1 << ceil_log2(a.size() + b.size());
vector<LL> ans(N,0), fA(N), fB(N), fC(N);
9 Number Theory LL modulo = 1;
8 Misc for (int times = 0; times < nroots_unity; times++) {
9.1 Chinese Remainder Theorem fill(fA.begin(), fA.end(), 0);
8.1 Dates fill(fB.begin(), fB.end(), 0);
for (int i = 0; i < a.size(); i++) fA[i] = a[i];
/** for (int i = 0; i < b.size(); i++) fB[i] = b[i];
// * Chinese remainder theorem. LL prime = nth_roots_unity[times].first;
// Time - Leap years * Find z such that z % x[i] = a[i] for all i. LL inv_modulo = mod_inv(modulo % prime, prime);
// * */ LL normalize = mod_inv(N, prime);
long long crt(vector<long long> &a, vector<long long> &x) { ntfft(fA, 1, nth_roots_unity[times]);
// A[i] has the accumulated number of days from months previous long long z = 0; ntfft(fB, 1, nth_roots_unity[times]);
to i long long n = 1; for (int i = 0; i < N; i++) fC[i] = (fA[i] * fB[i]) % prime;
const int A[13] = { 0, 0, 31, 59, 90, 120, 151, 181, 212, 243, for (int i = 0; i < x.size(); ++i) ntfft(fC, -1, nth_roots_unity[times]);
273, 304, 334 }; n *= x[i]; for (int i = 0; i < N; i++) {
// same as A, but for a leap year LL curr = (fC[i] * normalize) % prime;
const int B[13] = { 0, 0, 31, 60, 91, 121, 152, 182, 213, 244, for (int i = 0; i < a.size(); ++i) { LL k = (curr - (ans[i] % prime) + prime) % prime;
274, 305, 335 }; long long tmp = (a[i] * (n / x[i])) % n; k = (k * inv_modulo) % prime;
// returns number of leap years up to, and including, y tmp = (tmp * mod_inv(n / x[i], x[i])) % n; ans[i] += modulo * k;
int leap_years(int y) { return y / 4 - y / 100 + y / 400; } z = (z + tmp) % n; }
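The closing identity of 7.13, Σ_{d|n} σ₀(d) = Π g(e_i + 1) with g(x) = x(x+1)/2, is easy to verify by brute force; a sketch (both helper names are illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Left side: sum of sigma_0(d) over all divisors d of n, fully by brute force.
long long divisor_count_sum_brute(long long n) {
    long long total = 0;
    for (long long d = 1; d <= n; d++)
        if (n % d == 0)
            for (long long e = 1; e <= d; e++)
                if (d % e == 0) total++;
    return total;
}

// Right side: product of g(e_i + 1) over the prime exponents e_i of n,
// with g(x) = x * (x + 1) / 2.
long long divisor_count_sum_formula(long long n) {
    long long res = 1;
    for (long long p = 2; p * p <= n; p++)
        if (n % p == 0) {
            long long e = 0;
            while (n % p == 0) n /= p, e++;
            res *= (e + 1) * (e + 2) / 2;   // g(e + 1)
        }
    if (n > 1) res *= 3;                    // leftover prime has e = 1, g(2) = 3
    return res;
}
```

For n = 12 = 2²·3 both sides give g(3)·g(2) = 6·3 = 18.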
modulo *= prime; if (x > maxx) return 0;
} long long lx1 = x; void ext_euclid(long long a, long long b, long long &x, long
return ans; long &y, long long &g) {
} shift_solution(x, y, a, b, (maxx - x) / b); x = 0, y = 1, g = b;
if (x > maxx) shift_solution(x, y, a, b, -sign_b); long long m, n, q, r;
long long rx1 = x; for (long long u = 1, v = 0; a != 0; g = a, a = r) {
q = g / a, r = g % a;
shift_solution(x, y, a, b, -(miny - y) / a); m = x - u * q, n = y - v * q;
9.3 Diophantine Equations x = u, y = v, u = m, v = n;
if (y < miny) shift_solution(x, y, a, b, -sign_a);
if (y > maxy) return 0; }
long long lx2 = x; }
long long gcd(long long a, long long b, long long &x, long long
&y) {
if (a == 0) { shift_solution(x, y, a, b, -(maxy - y) / a);
x = 0; if (y > maxy) shift_solution(x, y, a, b, sign_a);
y = 1; long long rx2 = x; 9.6 Fast Eratosthenes
return b;
} if (lx2 > rx2) swap(lx2, rx2);
long long x1, y1; long long lx = max(lx1, lx2); const int LIM = 1e6;
long long d = gcd(b % a, a, x1, y1); long long rx = min(rx1, rx2); bitset<LIM> isPrime;
x = y1 - (b / a) * x1; vi eratosthenes() {
y = x1; if (lx > rx) return 0; const int S = (int)round(sqrt(LIM)), R = LIM / 2;
return d; return (rx - lx) / abs(b) + 1; vi pr = {2}, sieve(S+1);
} } pr.reserve(int(LIM/log(LIM)*1.1));
vector<pii> cp;
bool find_any_solution(long long a, long long b, long long c, for (int i = 3; i <= S; i += 2) if (!sieve[i]) {
long long &x0, cp.push_back({i, i * i / 2});
long long &y0, long long &g) { 9.4 Discrete Logarithm for (int j = i * i; j <= S; j += 2 * i) sieve[j]
g = gcd(abs(a), abs(b), x0, y0); = 1;
if (c % g) { }
return false; for (int L = 1; L <= R; L += S) {
} // Computes x which a ^ x = b mod n. array<bool, S> block{};
for (auto &[p, idx] : cp)
x0 *= c / g; long long d_log(long long a, long long b, long long n) { for (int i=idx; i < S+L; idx = (i+=p))
y0 *= c / g; long long m = ceil(sqrt(n)); block[i-L] = 1;
if (a < 0) x0 = -x0; long long aj = 1; rep(i,0,min(S, R - L))
if (b < 0) y0 = -y0; map<long long, long long> M; if (!block[i]) pr.push_back((L + i) * 2 +
return true; for (int i = 0; i < m; ++i) { 1);
} if (!M.count(aj)) }
M[aj] = i; for (int i : pr) isPrime[i] = 1;
void shift_solution(long long &x, long long &y, long long a, aj = (aj * a) % n; return pr;
long long b, } }
long long cnt) {
x += cnt * b; long long coef = mod_pow(a, n - 2, n);
y -= cnt * a; coef = mod_pow(coef, m, n);
} // coef = a ^ (-m) 9.7 Miller - Rabin
long long gamma = b;
long long find_all_solutions(long long a, long long b, long for (int i = 0; i < m; ++i) {
long c, if (M.count(gamma)) { const int rounds = 20;
long long minx, long long maxx, long long miny, return i * m + M[gamma];
long long maxy) { } else { // checks whether a is a witness that n is not prime, 1 < a < n
long long x, y, g; gamma = (gamma * coef) % n; bool witness(long long a, long long n) {
if (!find_any_solution(a, b, c, x, y, g)) return 0; } // check as in Miller Rabin Primality Test described
a /= g; } long long u = n - 1;
b /= g; return -1; int t = 0;
} while (u % 2 == 0) {
long long sign_a = a > 0 ? +1 : -1; t++;
long long sign_b = b > 0 ? +1 : -1; u >>= 1;
}
shift_solution(x, y, a, b, (minx - x) / b); 9.5 Ext Euclidean long long next = mod_pow(a, u, n);
if (x < minx) shift_solution(x, y, a, b, sign_b); if (next == 1) return false;
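The gcd(a, b, x, y) helper used in 9.3 and the iterative ext_euclid in 9.5 implement the same Bézout contract; a minimal recursive sketch for reference (the name ext_gcd is ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns g = gcd(a, b) and fills x, y with one solution of a*x + b*y = g.
long long ext_gcd(long long a, long long b, long long &x, long long &y) {
    if (b == 0) { x = 1; y = 0; return a; }
    long long x1, y1;
    long long g = ext_gcd(b, a % b, x1, y1);
    // back-substitute: b*x1 + (a mod b)*y1 = g
    x = y1;
    y = x1 - (a / b) * y1;
    return g;
}
```

For example, 240x + 46y = gcd(240, 46) = 2 is solved by (x, y) = (−9, 47).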
long long last; long long x, y, i = 1, k = 2, d;
for (int i = 0; i < t; ++i) { /* The following vector of pairs contains pairs (prime, x = y = rand() % n;
last = next; generator) while (1) {
next = mod_mul(last, last, n); * where the prime has an Nth root of unity for N being a power ++i;
if (next == 1) { of two. x = mod_mul(x, x, n);
return last != n - 1; * The generator is a number g s.t g^(p-1)=1 (mod p) x += 2;
} * but is different from 1 for all smaller powers */ if (x >= n) x -= n;
} vector<PLL> nth_roots_unity { if (x == y) return 1;
return next != 1; {1224736769,330732430},{1711276033,927759239},{167772161,167489322}, d = __gcd(abs(x - y), n);
} {469762049,343261969},{754974721,643797295},{1107296257,883865065}}; if (d != 1) return d;
if (i == k) {
PLL ext_euclid(LL a, LL b) { y = x;
// Checks if a number is prime with prob 1 - 1 / (2 ^ it) if (b == 0) k *= 2;
// D(miller_rabin(99999999999999997LL) == 1); return make_pair(1,0); }
// D(miller_rabin(9999999999971LL) == 1); pair<LL,LL> rc = ext_euclid(b, a % b); }
// D(miller_rabin(7907) == 1); return make_pair(rc.second, rc.first - (a / b) * rc.second); return 1;
bool miller_rabin(long long n, int it = rounds) { } }
if (n <= 1) return false;
if (n == 2) return true; //returns -1 if there is no unique modular inverse
if (n % 2 == 0) return false; LL mod_inv(LL x, LL modulo) { // Returns a list with the prime divisors of n
for (int i = 0; i < it; ++i) { PLL p = ext_euclid(x, modulo); vector<long long> factorize(long long n) {
long long a = rand() % (n - 1) + 1; if ( (p.first * x + p.second * modulo) != 1 ) vector<long long> ans;
if (witness(a, n)) { return -1; if (n == 1)
return false; return (p.first+modulo) % modulo; return ans;
} } if (miller_rabin(n)) {
} ans.push_back(n);
return true; } else {
} //Number theory fft. The size of a must be a power of 2 long long d = 1;
void ntfft(vector<LL> &a, int dir, const PLL &root_unity) { while (d == 1)
int n = a.size(); d = pollard_rho(n);
LL prime = root_unity.first; vector<long long> dd = factorize(d);
9.8 Mod Integer LL basew = mod_pow(root_unity.second, (prime-1) / n, prime); ans = factorize(n / d);
if (dir < 0) basew = mod_inv(basew, prime); for (int i = 0; i < dd.size(); ++i)
for (int m = n; m >= 2; m >>= 1) { ans.push_back(dd[i]);
template<class T, T mod> int mh = m >> 1; }
struct mint_t { LL w = 1; return ans;
T val; for (int i = 0; i < mh; i++) { }
mint_t() : val(0) {} for (int j = i; j < n; j += m) {
mint_t(T v) : val(v % mod) {} int k = j + mh;
LL x = (a[j] - a[k] + prime) % prime;
mint_t operator + (const mint_t& o) const { a[j] = (a[j] + a[k]) % prime; 9.11 Primes
return (val + o.val) % mod; a[k] = (w * x) % prime;
} }
mint_t operator - (const mint_t& o) const { w = (w * basew) % prime; namespace primes {
return (val - o.val) % mod; } const int MP = 100001;
} basew = (basew * basew) % prime; bool sieve[MP];
mint_t operator * (const mint_t& o) const { } long long primes[MP];
return (val * o.val) % mod; int i = 0; int num_p;
} for (int j = 1; j < n - 1; j++) { void fill_sieve() {
}; for (int k = n >> 1; k > (i ^= k); k >>= 1); num_p = 0;
if (j < i) swap(a[i], a[j]); sieve[0] = sieve[1] = true;
typedef mint_t<long long, 998244353> mint; } for (long long i = 2; i < MP; ++i) {
} if (!sieve[i]) {
primes[num_p++] = i;
for (long long j = i * i; j < MP; j += i)
9.9 Number Theoretic Transform sieve[j] = true;
9.10 Pollard Rho Factorize }
}
typedef long long int LL; }
typedef pair<LL, LL> PLL; long long pollard_rho(long long n) {
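The witness test in 9.7 is randomized; for inputs below 3,215,031,751 the fixed bases {2, 3, 5, 7} are known to suffice, so a deterministic 32-bit variant can be sketched as follows (standalone, not the notebook's version):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef unsigned long long u64;

u64 pw_mod(u64 a, u64 e, u64 m) {
    u64 r = 1; a %= m;
    for (; e; e >>= 1, a = a * a % m) if (e & 1) r = r * a % m;
    return r;
}

// Deterministic Miller-Rabin for n < 3'215'031'751 using bases 2, 3, 5, 7.
// Products stay below 2^64 because n^2 < 1.1e19 in this range.
bool is_prime32(u64 n) {
    if (n < 2) return false;
    for (u64 p : {2ULL, 3ULL, 5ULL, 7ULL})
        if (n % p == 0) return n == p;
    u64 d = n - 1; int s = 0;
    while (d % 2 == 0) d /= 2, s++;         // n - 1 = d * 2^s, d odd
    for (u64 a : {2ULL, 3ULL, 5ULL, 7ULL}) {
        u64 x = pw_mod(a, d, n);
        if (x == 1 || x == n - 1) continue;
        bool composite = true;
        for (int i = 1; i < s; i++) {
            x = x * x % n;
            if (x == n - 1) { composite = false; break; }
        }
        if (composite) return false;        // a witnesses that n is composite
    }
    return true;
}
```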
// Finds prime numbers between a and b, using basic primes up to sqrt(b)
// a must be greater than 1.
vector<long long> seg_sieve(long long a, long long b) {
  long long ant = a;
  a = max(a, 3LL);
  vector<bool> pmap(b - a + 1);
  long long sqrt_b = sqrt(b);
  for (int i = 0; i < num_p; ++i) {
    long long p = primes[i];
    if (p > sqrt_b) break;
    long long j = (a + p - 1) / p;
    for (long long v = (j == 1) ? p + p : j * p; v <= b; v += p) {
      pmap[v - a] = true;
    }
  }
  vector<long long> ans;
  if (ant == 2) ans.push_back(2);
  int start = a % 2 ? 0 : 1;
  for (int i = start, I = b - a + 1; i < I; i += 2)
    if (pmap[i] == false)
      ans.push_back(a + i);
  return ans;
}

vector<pair<int, int>> factor(int n) {
  vector<pair<int, int>> ans;
  if (n == 0) return ans;
  for (int i = 0; primes[i] * primes[i] <= n; ++i) {
    if ((n % primes[i]) == 0) {
      int expo = 0;
      while ((n % primes[i]) == 0) {
        expo++;
        n /= primes[i];
      }
      ans.emplace_back(primes[i], expo);
    }
  }
  if (n > 1) {
    ans.emplace_back(n, 1);
  }
  return ans;
}
} // namespace primes

9.12 Totient Sieve

for (int i = 1; i < MN; i++)
  phi[i] = i;
for (int i = 1; i < MN; i++)
  if (!sieve[i]) // is prime
    for (int j = i; j < MN; j += i)
      phi[j] -= phi[j] / i;

9.13 Totient

long long totient(long long n) {
  if (n == 1) return 0;
  long long ans = n;
  for (int i = 0; primes[i] * primes[i] <= n; ++i) {
    if ((n % primes[i]) == 0) {
      while ((n % primes[i]) == 0) n /= primes[i];
      ans -= ans / primes[i];
    }
  }
  if (n > 1) {
    ans -= ans / n;
  }
  return ans;
}

10 Probability and Statistics

10.1 Continuous Distributions

10.1.1 Uniform distribution
If the probability density function is constant between a and b and 0 elsewhere, it is U(a, b), a < b.
f(x) = 1/(b-a) for a < x < b, 0 otherwise
mu = (a+b)/2,  sigma^2 = (b-a)^2 / 12

10.1.2 Exponential distribution
The time between events in a Poisson process is Exp(lambda), lambda > 0.
f(x) = lambda * e^(-lambda*x) for x >= 0, 0 for x < 0
mu = 1/lambda,  sigma^2 = 1/lambda^2

10.1.3 Normal distribution
Most real random values with mean mu and variance sigma^2 are well described by N(mu, sigma^2), sigma > 0.
f(x) = (1 / sqrt(2*pi*sigma^2)) * e^(-(x - mu)^2 / (2*sigma^2))
If X1 ~ N(mu1, sigma1^2) and X2 ~ N(mu2, sigma2^2), then
a*X1 + b*X2 + c ~ N(a*mu1 + b*mu2 + c, a^2*sigma1^2 + b^2*sigma2^2)

10.2 Discrete Distributions

10.2.1 Binomial distribution
The number of successes in n independent yes/no experiments, each of which yields success with probability p, is Bin(n, p), n = 1, 2, ..., 0 <= p <= 1.
p(k) = C(n, k) * p^k * (1-p)^(n-k)
mu = np,  sigma^2 = np(1-p)
Bin(n, p) is approximately Po(np) for small p.

10.2.2 First success distribution
The number of trials needed to get the first success in independent yes/no experiments, each of which yields success with probability p, is Fs(p), 0 <= p <= 1.
p(k) = p(1-p)^(k-1), k = 1, 2, ...
mu = 1/p,  sigma^2 = (1-p)/p^2

10.2.3 Poisson distribution
The number of events occurring in a fixed period of time t, if these events occur with a known average rate kappa and independently of the time since the last event, is Po(lambda), lambda = t*kappa.
p(k) = e^(-lambda) * lambda^k / k!, k = 0, 1, 2, ...
mu = lambda,  sigma^2 = lambda

10.3 Probability Theory

Let X be a discrete random variable with probability p_X(x) of assuming the value x. It will then have an expected value (mean) mu = E(X) = sum_x x*p_X(x) and variance sigma^2 = V(X) = E(X^2) - (E(X))^2 = sum_x (x - E(X))^2 * p_X(x), where sigma is the standard deviation. If X is instead continuous, it will have a probability density function f_X(x) and the sums above will instead be integrals with p_X(x) replaced by f_X(x).
Expectation is linear:
E(aX + bY) = aE(X) + bE(Y)
For independent X and Y,
V(aX + bY) = a^2 V(X) + b^2 V(Y).
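The binomial moments mu = np and sigma^2 = np(1 - p) from 10.2.1 can be confirmed by summing the pmf directly; an illustrative check with n = 10, p = 0.3 (function name is ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Exact mean and variance of Bin(n, p) by enumerating the pmf
// p(k) = C(n, k) p^k (1-p)^(n-k).  Returns {mu, sigma^2}.
pair<double, double> binom_moments(int n, double p) {
    double mean = 0, second = 0;
    for (int k = 0; k <= n; k++) {
        double c = 1;
        for (int i = 0; i < k; i++) c = c * (n - i) / (i + 1);  // C(n, k)
        double pk = c * pow(p, k) * pow(1 - p, n - k);
        mean += k * pk;
        second += (double)k * k * pk;                // E(X^2) accumulator
    }
    return {mean, second - mean * mean};             // V(X) = E(X^2) - E(X)^2
}
```

With n = 10 and p = 0.3 this yields mu = 3 and sigma^2 = 2.1, matching np and np(1 − p).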
11 Strings }; t->sum += it->sign;
}
11.1 Hashing struct String { int qt = 0;
string str; for(Node *&n : root->next) {
int sign; if(n != nullptr) {
struct H { }; n->fail = root;
typedef uint64_t ull; que[qt ++] = n;
ull x; H(ull x=0) : x(x) {} public: } else {
#define OP(O,A,B) H operator O(H o) { ull r = x; asm \ //totalLen = sum of (len + 1) n = root;
(A "addq %%rdx, %0\n adcq $0,%0" : "+a"(r) : B); return void init(int totalLen) { }
r; } nodes.resize(totalLen); }
OP(+,,"d"(o.x)) OP(*,"mul %1\n", "r"(o.x) : "rdx") nNodes = 0; for(int qh = 0; qh != qt; ++ qh) {
H operator-(H o) { return *this + ~o.x; } strings.clear(); Node *t = que[qh];
ull get() const { return x + !~x; } roots.clear(); int a = 0;
bool operator==(H o) const { return get() == o.get(); } sizes.clear(); for(Node *n : t->next) {
bool operator<(H o) const { return get() < o.get(); } que.resize(totalLen); if(n != nullptr) {
}; } que[qt ++] = n;
static const H C = (ll)1e11+3; // (order ~ 3e9; random also ok) Node *r = t->fail;
void insert(const string &str, int sign) { while(r->next[a] == nullptr)
struct HashInterval { strings.push_back(String{ str, sign }); r = r->fail;
vector<H> ha, pw; roots.push_back(nodes.data() + nNodes); n->fail = r->next[a];
HashInterval(string& str) : ha(sz(str)+1), pw(ha) { sizes.push_back(1); n->sum += r->next[a]->sum;
pw[0] = 1; nNodes += (int)str.size() + 1; }
rep(i,0,sz(str)) auto check = [&]() { return sizes.size() > 1 && ++ a;
ha[i+1] = ha[i] * C + str[i], sizes.end()[-1] == sizes.end()[-2]; }; }
pw[i+1] = pw[i] * C; if(!check()) }
} makePMA(strings.end() - 1, strings.end(), roots.back(), }
H hashInterval(int a, int b) { // hash [a, b) que);
return ha[b] - ha[a] * pw[b - a]; while(check()) { static int matchPMA(const Node *t, const string &str) {
} int m = sizes.back(); int res = 0;
}; roots.pop_back(); for(char c : str) {
sizes.pop_back(); int a = c - AlphabetBase;
vector<H> getHashes(string& str, int length) { sizes.back() += m; while(t->next[a] == nullptr)
if (sz(str) < length) return {}; if(!check()) t = t->fail;
H h = 0, pw = 1; makePMA(strings.end() - m * 2, strings.end(), t = t->next[a];
rep(i,0,length) roots.back(), que); res += t->sum;
h = h * C + str[i], pw = pw * C; } }
vector<H> ret = {h}; } return res;
rep(i,length,sz(str)) { }
ret.push_back(h = h * C + str[i] - pw * int match(const string &str) const {
str[i-length]); int res = 0;
} for(const Node *t : roots) vector<Node> nodes;
return ret; res += matchPMA(t, str); int nNodes;
} return res; vector<String> strings;
} vector<Node*> roots;
H hashString(string& s){H h{}; for(char c:s) h=h*C+c;return h;} vector<int> sizes;
private: vector<Node*> que;
static void makePMA(vector<String>::const_iterator begin, };
vector<String>::const_iterator end, Node *nodes,
11.2 Incremental Aho Corasick vector<Node*> &que) { int main() {
int nNodes = 0; int m;
Node *root = new(&nodes[nNodes ++]) Node(); while(~scanf("%d", &m)) {
class IncrementalAhoCorasic { for(auto it = begin; it != end; ++ it) { IncrementalAhoCorasic iac;
static const int Alphabets = 26; Node *t = root; iac.init(600000);
static const int AlphabetBase = ’a’; for(char c : it->str) { rep(i, m) {
struct Node { Node *&n = t->next[c - AlphabetBase]; int ty;
Node *fail; if(n == nullptr) char s[300001];
Node *next[Alphabets]; n = new(&nodes[nNodes ++]) Node(); scanf("%d%s", &ty, s);
int sum; t = n; if(ty == 1) {
Node() : fail(NULL), next{}, sum(0) { } } iac.insert(s, +1);
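The H struct in 11.1 is an overflow-trick variant of the standard polynomial rolling hash. A plain modular sketch (base and modulus here are illustrative constants, not the notebook's) showing the interval identity hash(s[a, b)) = h[b] − h[a]·C^(b−a):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Polynomial rolling hash with a prefix table: h[i] = hash of s[0, i).
struct RollingHash {
    static constexpr long long M = 1000000007, C = 131;  // illustrative constants
    vector<long long> h, pw;
    RollingHash(const string& s) : h(s.size() + 1), pw(s.size() + 1) {
        pw[0] = 1;
        for (size_t i = 0; i < s.size(); i++) {
            h[i + 1] = (h[i] * C + s[i]) % M;
            pw[i + 1] = pw[i] * C % M;
        }
    }
    long long get(int a, int b) {            // hash of s[a, b)
        return ((h[b] - h[a] * pw[b - a]) % M + M) % M;
    }
};
```

Equal substrings always hash equal; distinct ones collide only with probability about 1/M, so a double hash (two moduli) is the usual safeguard in adversarial settings.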
            } else if(ty == 2) {
                iac.insert(s, -1);
            } else if(ty == 3) {
                int ans = iac.match(s);
                printf("%d\n", ans);
                fflush(stdout);
            } else {
                abort();
            }
        }
    }
    return 0;
}

11.3 KMP

vi pi(const string& s) {
    vi p(sz(s));
    rep(i,1,sz(s)) {
        int g = p[i-1];
        while (g && s[i] != s[g]) g = p[g-1];
        p[i] = g + (s[i] == s[g]);
    }
    return p;
}

vi match(const string& s, const string& pat) {
    vi p = pi(pat + '\0' + s), res;
    rep(i,sz(p)-sz(s),sz(p))
        if (p[i] == sz(pat)) res.push_back(i - 2 * sz(pat));
    return res;
}

11.4 Suffix Array

const int MAXN = 200005;
const int MAX_DIGIT = 256;

void countingSort(vector<int>& SA, vector<int>& RA, int k = 0) {
    int n = SA.size();
    vector<int> cnt(max(MAX_DIGIT, n), 0);
    for (int i = 0; i < n; i++)
        if (i + k < n)
            cnt[RA[i + k]]++;
        else
            cnt[0]++;
    for (int i = 1; i < cnt.size(); i++)
        cnt[i] += cnt[i - 1];
    vector<int> tempSA(n);
    for (int i = n - 1; i >= 0; i--)
        if (SA[i] + k < n)
            tempSA[--cnt[RA[SA[i] + k]]] = SA[i];
        else
            tempSA[--cnt[0]] = SA[i];
    SA = tempSA;
}

vector<int> constructSA(string s) {
    int n = s.length();
    vector<int> SA(n);
    vector<int> RA(n);
    vector<int> tempRA(n);
    for (int i = 0; i < n; i++) {
        RA[i] = s[i];
        SA[i] = i;
    }
    for (int step = 1; step < n; step <<= 1) {
        countingSort(SA, RA, step);
        countingSort(SA, RA, 0);
        int c = 0;
        tempRA[SA[0]] = c;
        for (int i = 1; i < n; i++) {
            if (RA[SA[i]] == RA[SA[i - 1]] && RA[SA[i] + step] == RA[SA[i - 1] + step])
                tempRA[SA[i]] = tempRA[SA[i - 1]];
            else
                tempRA[SA[i]] = tempRA[SA[i - 1]] + 1;
        }
        RA = tempRA;
        if (RA[SA[n - 1]] == n - 1) break;
    }
    return SA;
}

vector<int> computeLCP(const string& s, const vector<int>& SA) {
    int n = SA.size();
    vector<int> LCP(n), PLCP(n), c(n, 0);
    for (int i = 0; i < n; i++)
        c[SA[i]] = i;
    int k = 0;
    for (int j, i = 0; i < n-1; i++) {
        if(c[i] - 1 < 0)
            continue;
        j = SA[c[i] - 1];
        k = max(k - 1, 0);
        while (i+k < n && j+k < n && s[i + k] == s[j + k])
            k++;
        PLCP[i] = k;
    }
    for (int i = 0; i < n; i++)
        LCP[i] = PLCP[SA[i]];
    return LCP;
}

11.5 Suffix Automaton

/*
 * Suffix automaton:
 * This implementation was extended to maintain (online) the
 * number of different substrings. This is equivalent to computing
 * the number of paths from the initial state to all the other
 * states.
 *
 * The overall complexity is O(n)
 * can be tested here:
 * https://www.urionlinejudge.com.br/judge/en/problems/view/1530
 */

struct state {
    int len, link;
    long long num_paths;
    map<int, int> next;
};

const int MN = 200011;
state sa[MN << 1];
int sz, last;
long long tot_paths;

void sa_init() {
    sz = 1;
    last = 0;
    sa[0].len = 0;
    sa[0].link = -1;
    sa[0].next.clear();
    sa[0].num_paths = 1;
    tot_paths = 0;
}

void sa_extend(int c) {
    int cur = sz++;
    sa[cur].len = sa[last].len + 1;
    sa[cur].next.clear();
    sa[cur].num_paths = 0;
    int p;
    for (p = last; p != -1 && !sa[p].next.count(c); p = sa[p].link) {
        sa[p].next[c] = cur;
        sa[cur].num_paths += sa[p].num_paths;
        tot_paths += sa[p].num_paths;
    }
    if (p == -1) {
        sa[cur].link = 0;
    } else {
        int q = sa[p].next[c];
        if (sa[p].len + 1 == sa[q].len) {
            sa[cur].link = q;
        } else {
            int clone = sz++;
            sa[clone].len = sa[p].len + 1;
            sa[clone].next = sa[q].next;
            sa[clone].num_paths = 0;
            sa[clone].link = sa[q].link;
            for (; p != -1 && sa[p].next[c] == q; p = sa[p].link) {
                sa[p].next[c] = clone;
                sa[q].num_paths -= sa[p].num_paths;
                sa[clone].num_paths += sa[p].num_paths;
            }
            sa[q].link = sa[cur].link = clone;
        }
    }
    last = cur;
}

11.6 Suffix Tree

struct SuffixTree {
    enum { N = 200010, ALPHA = 26 }; // N ~ 2*maxlen+10
    int toi(char c) { return c - 'a'; }
    string a; // v = cur node, q = cur position
    int t[N][ALPHA],l[N],r[N],p[N],s[N],v=0,q=0,m=2;

    void ukkadd(int i, int c) { suff:
        if (r[v]<=q) {
            if (t[v][c]==-1) { t[v][c]=m; l[m]=i;
                p[m++]=v; v=s[v]; q=r[v]; goto suff; }
            v=t[v][c]; q=l[v];
        }
        if (q==-1 || c==toi(a[q])) q++; else {
            l[m+1]=i; p[m+1]=m; l[m]=l[v]; r[m]=q;
            p[m]=p[v]; t[m][c]=m+1; t[m][toi(a[q])]=v;
            l[v]=q; p[v]=m; t[p[m]][toi(a[l[m]])]=m;
            v=s[p[m]]; q=l[m];
            while (q<r[m]) { v=t[v][toi(a[q])];
                q+=r[v]-l[v]; }
            if (q==r[m]) s[m]=v; else s[m]=m+2;
            q=r[v]-(q-r[m]); m+=2; goto suff;
        }
    }

    SuffixTree(string a) : a(a) {
        fill(r,r+N,sz(a));
        memset(s, 0, sizeof s);
        memset(t, -1, sizeof t);
        fill(t[1],t[1]+ALPHA,0);
        s[0] = 1; l[0] = l[1] = -1; r[0] = r[1] = p[0] = p[1] = 0;
        rep(i,0,sz(a)) ukkadd(i, toi(a[i]));
    }

    // example: find longest common substring (uses ALPHA = 28)
    pii best;
    int lcs(int node, int i1, int i2, int olen) {
        if (l[node] <= i1 && i1 < r[node]) return 1;
        if (l[node] <= i2 && i2 < r[node]) return 2;
        int mask = 0, len = node ? olen + (r[node] - l[node]) : 0;
        rep(c,0,ALPHA) if (t[node][c] != -1)
            mask |= lcs(t[node][c], i1, i2, len);
        if (mask == 3)
            best = max(best, {len, r[node] - len});
        return mask;
    }
    static pii LCS(string s, string t) {
        SuffixTree st(s + (char)('z' + 1) + t + (char)('z' + 2));
        st.lcs(0, sz(s), sz(s) + 1 + sz(t), 0);
        return st.best;
    }
};

11.7 Z Algorithm

vector<int> compute_z(const string &s){
    int n = s.size();
    vector<int> z(n,0);
    int l,r;
    r = l = 0;
    for(int i = 1; i < n; ++i){
        if(i > r) {
            l = r = i;
            while(r < n and s[r - l] == s[r])r++;
            z[i] = r - l;r--;
        }else{
            int k = i-l;
            if(z[k] < r - i +1) z[i] = z[k];
            else {
                l = i;
                while(r < n and s[r - l] == s[r])r++;
                z[i] = r - l;r--;
            }
        }
    }
    return z;
}

int main(){
    //string line;cin>>line;
    string line = "alfalfa";
    vector<int> z = compute_z(line);
    for(int i = 0; i < z.size(); ++i ){
        if(i)cout<<" ";
        cout<<z[i];
    }
    cout<<endl;
    // must print "0 0 0 4 0 0 1"
    return 0;
}