The document is a team notebook authored by Mottakin Chowdhury, containing a comprehensive table of contents that outlines various topics in dynamic programming, data structures, graph algorithms, and mathematical concepts. Each section is numbered and includes specific algorithms and techniques, such as Convex Hull, Segment Trees, and Dijkstra's algorithm. The notebook serves as a reference for problem-solving strategies and data structure implementations.

Team notebook

Mottakin Chowdhury
November 6, 2018

Contents

1 DP
  1.1 Convex Hull Line Container
  1.2 Convex Hull Trick
  1.3 Digit DP Sample 2
  1.4 Digit DP Sample
  1.5 Divide and Conquer DP
  1.6 Dynamic Convex Hull Trick
  1.7 Edit Distance Recursive
  1.8 IOI Aliens by koosaga
  1.9 In-out DP
  1.10 Knuth Optimization
  1.11 LCS
  1.12 LIS nlogk
  1.13 Matrix Expo Class
  1.14 Palindrome in a String

2 Data Structures
  2.1 2D BIT
  2.2 2D Segment Tree
  2.3 A DSU Problem
  2.4 BIT Range Update Range Query
  2.5 Best Partial Sum in a Range
  2.6 Binary Indexed Tree
  2.7 Centroid Decomposition Sample
  2.8 Centroid Decomposition
  2.9 Counting Inversions with BIT
  2.10 DSU on Tree Sample
  2.11 Dynamic Segment Tree with Lazy Prop
  2.12 Dynamic Segment Tree
  2.13 Fenwick Tree 3D
  2.14 GP Hash Table
  2.15 HLD Sample Problem
  2.16 HashMap
  2.17 Heavy Light Decomposition
  2.18 How Many Values Less than a Given Value
  2.19 Li Chao Tree Lines
  2.20 Li Chao Tree Parabolic Sample
  2.21 Mo Algorithm Example
  2.22 Mo on Tree Path
  2.23 Order Statistics Tree
  2.24 Ordered Multiset
  2.25 Persistent Segment Tree 1
  2.26 Persistent Segment Tree 2
  2.27 Persistent Trie
  2.28 RMQ Sparse Table
  2.29 Range Sum Query by Lazy Propagation
  2.30 Rope
  2.31 Segment Tree with Lazy Prop
  2.32 Splay Tree
  2.33 Venice Technique

3 Game
  3.1 Green Hackenbush
  3.2 Green Hackenbush 2

4 Geometry
  4.1 Convex Hull
  4.2 Counting Closest Pair of Points
  4.3 Maximum Points to Enclose in a Circle of Given Radius with Angular Sweep
  4.4 Point in Polygon Binary Search
  4.5 Rectangle Union

5 Graph
  5.1 0-1 BFS
  5.2 2-SAT 2
  5.3 2-SAT
  5.4 Articulation Points and Bridges
  5.5 BCC
  5.6 Bellman Ford
  5.7 Cycle in a Directed Graph
  5.8 Dijkstra!
  5.9 Dominator Tree
  5.10 Edge Coloring
  5.11 Edmonds Matching
  5.12 Faster Weighted Matching
  5.13 Global Minimum Cut
  5.14 Hopcroft Karp
  5.15 Hungarian Weighted Matching
  5.16 Johnson's Algorithm
  5.17 Kruskal
  5.18 LCA 2
  5.19 LCA
  5.20 Manhattan MST
  5.21 Max Flow Dinic 2
  5.22 Max Flow Dinic
  5.23 Max Flow Edmond Karp
  5.24 Max Flow Ford Fulkerson
  5.25 Max Flow Goldberg Tarjan
  5.26 Maximum Bipartite Matching and Min Vertex Cover
  5.27 Maximum Matching in General Graphs (Randomized Algorithm)
  5.28 Min Cost Arborescence
  5.29 Min Cost Max Flow 1
  5.30 Min Cost Max Flow 2
  5.31 Min Cost Max Flow 3
  5.32 Min Cost Max Flow with Bellman Ford
  5.33 Minimum Path Cover in DAG
  5.34 Prim MST
  5.35 Push Relabel 2
  5.36 Push Relabel
  5.37 SCC Kosaraju
  5.38 SCC Tarjan
  5.39 SPFA
  5.40 Tree Construction with Specific Vertices
  5.41 kth Shortest Path Length

6 Math
  6.1 CRT Diophantine
  6.2 Euler Phi
  6.3 FFT 1
  6.4 FFT 2
  6.5 FFT Extended
  6.6 FFT Modulo
  6.7 FFT by XraY
  6.8 Fast Integer Cube and Square Root
  6.9 Fast Walsh-Hadamard Transform
  6.10 Faulhaber's Formula (Custom Algorithm)
  6.11 Faulhaber's Formula
  6.12 Gauss Elimination Equations Mod Number Solutions
  6.13 Gauss Jordan Elimination
  6.14 Gauss Xor
  6.15 Gaussian 1
  6.16 Gaussian 2
  6.17 Karatsuba
  6.18 Linear Diophantine
  6.19 Matrix Expo
  6.20 Number Theoretic Transform
  6.21 Segmented Sieve
  6.22 Sieve (Bitmask)
  6.23 Sieve
  6.24 Simplex
  6.25 Sum of Kth Power

7 Miscellaneous
  7.1 Bit Hacks
  7.2 Divide and Conquer on Queries
  7.3 Gilbert Curve for Mo
  7.4 HakmemItem175
  7.5 Header
  7.6 Integral Determinant
  7.7 Inverse Modulo 1 to N (Linear)
  7.8 Josephus Problem
  7.9 MSB Position in O(1)
  7.10 Nearest Smaller Values on Left-Right
  7.11 Next Small
  7.12 Random Number Generation
  7.13 Russian Peasant Multiplication
  7.14 Stable Marriage Problem
  7.15 Thomas Algorithm
  7.16 U128
  7.17 Useful Templates
  7.18 int128

8 Notes

9 String
  9.1 A KMP Application
  9.2 Aho Corasick 2
  9.3 Aho Corasick Occurrence Relation
  9.4 Aho Corasick
  9.5 Double Hash
  9.6 Dynamic Aho Corasick Sample
  9.7 Dynamic Aho Corasick
  9.8 KMP 2
  9.9 KMP 3
  9.10 Manacher's Algorithm
  9.11 Minimum Lexicographic Rotation
  9.12 Palindrome Factorization
  9.13 Palindromic Tree
  9.14 String Split by Delimiter
  9.15 Suffix Array 2
  9.16 Suffix Array
  9.17 Suffix Automata 2
  9.18 Suffix Automata
  9.19 Trie 1
  9.20 Trie 2
  9.21 Z Algorithm

1 DP

1.1 Convex Hull Line Container

bool Q;

struct Line {
    mutable ll k, m, p; // slope, y-intercept, last optimal x
    bool operator<(const Line& o) const {
        return Q ? p < o.p : k < o.k;
    }
};

struct LineContainer : multiset<Line> {
    const ll inf = LLONG_MAX;
    ll div(ll a, ll b) { // floored division
        if (b < 0) a *= -1, b *= -1;
        if (a >= 0) return a / b;
        return -((-a + b - 1) / b);
    }

    // updates x->p, determines if y is unneeded
    bool isect(iterator x, iterator y) {
        if (y == end()) { x->p = inf; return 0; }
        if (x->k == y->k) x->p = x->m > y->m ? inf : -inf;
        else x->p = div(y->m - x->m, x->k - y->k);
        return x->p >= y->p;
    }

    void add(ll k, ll m) {
        auto z = insert({k, m, 0}), y = z++, x = y;
        while (isect(y, z)) z = erase(z);
        if (x != begin() && isect(--x, y)) isect(x, y = erase(y));
        while ((y = x) != begin() && (--x)->p >= y->p) isect(x, erase(y));
    }

    ll query(ll x) { // gives max value
        assert(!empty());
        Q = 1; auto l = *lower_bound({0, 0, x}); Q = 0;
        return l.k * x + l.m;
    }
};

// paths - vector of LineContainers
// a, b - LineContainers
// We want to take the pair-wise sum of the two LineContainers
// and only keep the relevant ones. The sum is the Minkowski sum.
void convexsum(auto &a, auto &b)
{
    auto it1 = a.begin(), it2 = b.begin();
    while (it1 != a.end() && it2 != b.end())
    {
        universe.add((it1->k) + (it2->k), (it1->m) + (it2->m));
        if ((it1->p) < (it2->p)) it1++;
        else it2++;
    }
}

// We are merging all the LineContainers in paths.
void mergeall(int l, int r, auto &paths)
{
    if (l == r) return;
    int mid = (l + r) / 2;
    mergeall(l, mid, paths);
    mergeall(mid + 1, r, paths);
    convexsum(paths[l], paths[mid + 1]);
    for (auto it : paths[mid + 1]) paths[l].add(it.k, it.m);
}

1.2 Convex Hull Trick

struct cht
{
    vector<pii> hull;
    vector<int> id;

    int cur=0;

    cht()
    {
        hull.clear();
        id.clear();
    }

    // Might need double here
    bool useless(const pii left, const pii middle, const pii right)
    {
        return
            1LL*(middle.second-left.second)*(middle.first-right.first)
            >=1LL*(right.second-middle.second)*(left.first-middle.first);
    }

    // Inserting line a*x+b with index idx
    // Before inserting one by one, all the lines are sorted by slope
    void insert(int idx, int a, int b)
    {
        if(hull.empty())
        {
            hull.pb(MP(a, b));
            id.pb(idx);
        }
        else
        {
            if(hull.back().first==a)
            {
                if(hull.back().second>=b)
                {
                    return;
                }
                else
                {
                    hull.pop_back();
                    id.pop_back();
                }
            }
            while(hull.size()>=2 &&
                  useless(hull[hull.size()-2], hull.back(), MP(a, b)))
            {
                hull.pop_back();
                id.pop_back();
            }
            hull.pb(MP(a,b));
            id.pb(idx);
        }
    }

    // returns maximum value and the index of the line
    // Pointer approach: the queries are sorted non-decreasing
    // Otherwise, we will need binary search
    pair<ll,int> query(int x)
    {
        ll ret=-INF;
        int idx=-1;
        for(int i=cur ; i < hull.size() ; i++)
        {
            ll tmp=1LL*hull[i].first*x + hull[i].second;
            if(tmp>ret)
            {
                ret=tmp;
                cur=i;
                idx=id[i];
            }
            else
            {
                break;
            }
        }
        return {ret,idx};
    }
};

// Slope decreasing, query minimum - Query point increasing.
// Slope increasing, query maximum - Query point increasing.
// Slope decreasing, query maximum - Query point decreasing.
// Slope increasing, query minimum - Query point decreasing.

1.3 Digit DP Sample 2

// For each case, output the case number and the number of integers in
// the range [A, B] which are divisible by K and whose digit sum
// is also divisible by K.
int k, cases = 1;
ll dp[11][2][83][83];
int visited[11][2][83][83], flag;
string toString(int x)
{
    string temp = "";
    if (x == 0) return "0";
    while (x > 0)
    {
        int r = x % 10;
        temp = char(r + '0') + temp;
        x /= 10;
    }
    return temp;
}
ll calc(int idx, bool low, int modVal, int sumMod, string s)
{
    if (idx == s.size()) return (!modVal && !sumMod);
    if (visited[idx][low][modVal][sumMod] == flag)
        return dp[idx][low][modVal][sumMod];
    visited[idx][low][modVal][sumMod] = flag;
    int digit = low ? 9 : (s[idx] - '0');
    ll ret = 0;
    for (int i = 0; i <= digit; i++)
    {
        ret += calc(idx + 1, low || i < s[idx] - '0', (modVal * 10 + i) % k,
                    (sumMod + i) % k, s);
    }
    return dp[idx][low][modVal][sumMod] = ret;
}
int main()
{
    int test;
    int a, b;
    cin >> test;
    while (test--)
    {
        cin >> a >> b >> k;
        if (k > 90)
        {
            cout << "Case " << cases++ << ": 0" << endl;
            continue;
        }
        string A = toString(a - 1);
        string B = toString(b);
        flag++;
        ll x = calc(0, 0, 0, 0, A);
        flag++;
        ll y = calc(0, 0, 0, 0, B);
        cout << "Case " << cases++ << ": " << y - x << endl;
    }
    return 0;
}

1.4 Digit DP Sample

// Count numbers in the range [A, B] that are divisible by m and whose
// decimal representation has digit d in every even position (1-based)
// and in no other position.

string A, B; int m, d;
ll dp[2002][2002][2][2];
ll calc(int idx, int Mod, bool s, bool b)
{
    if(idx==B.size()) return Mod==0;

    if(dp[idx][Mod][s][b]!=-1)
        return dp[idx][Mod][s][b];

    ll ret=0;

    int low=s ? 0 : A[idx]-'0';
    int high=b ? 9 : B[idx]-'0';

    for(int i=low; i<=high; i++)
    {
        if(idx%2 && i!=d) continue;
        if(idx%2==0 && i==d) continue;

        ret=(ret+calc(idx+1, (Mod*10+i)%m, s || i>low, b || i<high))%mod;
        // if(ret>=mod) ret-=mod;
    }

    return dp[idx][Mod][s][b]=ret;
}

int main()
{
    // ios_base::sync_with_stdio(0);
    // cin.tie(NULL); cout.tie(NULL);
    // freopen("in.txt","r",stdin);

    cin>>m>>d>>A>>B;
    ms(dp,-1);
    prnt(calc(0,0,0,0));

    return 0;
}

1.5 Divide and Conquer DP

// http://codeforces.com/blog/entry/8219
// Divide and conquer optimization:
// Original recurrence:
// dp[i][j] = min(dp[i-1][k] + C[k][j]) for k < j
// Sufficient condition:
// A[i][j] <= A[i][j+1]
// where A[i][j] = smallest k that gives optimal answer
// How to use:
// compute i-th row of dp from L to R. optL <= A[i][L] <= A[i][R] <= optR
// compute(i, L, R, optL, optR)
// 1. special case L == R
// 2. let M = (L + R) / 2. Calculate dp[i][M] and opt[i][M] using O(optR - optL + 1)
// 3. compute(i, L, M-1, optL, opt[i][M])
// 4. compute(i, M+1, R, opt[i][M], optR)

// Example: http://codeforces.com/contest/321/problem/E
#include "../template.h"

const int MN = 4011;
const int inf = 1000111000;
int n, k;
ll cost[MN][MN], dp[811][MN];

inline ll getCost(int i, int j) {
    return cost[j][j] - cost[j][i-1] - cost[i-1][j] + cost[i-1][i-1];
}

void compute(int i, int L, int R, int optL, int optR) {
    if (L > R) return ;
    int mid = (L + R) >> 1, savek = optL;
    dp[i][mid] = inf;
    FOR(k,optL,min(mid-1, optR)+1) {
        ll cur = dp[i-1][k] + getCost(k+1, mid);
        if (cur < dp[i][mid]) {
            dp[i][mid] = cur;
            savek = k;
        }
    }
    compute(i, L, mid-1, optL, savek);
    compute(i, mid+1, R, savek, optR);
}

void solve() {
    cin >> n >> k;
    FOR(i,1,n+1) FOR(j,1,n+1) {
        cin >> cost[i][j];
        cost[i][j] = cost[i-1][j] + cost[i][j-1] - cost[i-1][j-1] + cost[i][j];
    }
    dp[0][0] = 0;
    FOR(i,1,n+1) dp[1][i] = inf;

    FOR(i,2,k+1) {
        compute(i, 1, n, 1, n);
    }
    cout << dp[k][n] / 2 << endl;
}

1.6 Dynamic Convex Hull Trick

// source: https://github.com/niklasb/contest-algos/blob/master/convex_hull/dynamic.cpp
// Used in problem CS Squared Ends
// Problem: A is an array of n integers. The cost of subarray A[l...r] is
// (A[l]-A[r])^2. Partition the array into K subarrays having a minimum
// total cost.
// In case of initializing 'ans', check if 1e18 is enough. Might need LLONG_MAX

const ll is_query = -(1LL<<62);
struct Line {
    ll m, b;
    mutable function<const Line*()> succ;
    bool operator<(const Line& rhs) const {
        if (rhs.b != is_query) return m < rhs.m;
        const Line* s = succ();
        if (!s) return 0;
        ll x = rhs.m;
        return b - s->b < (s->m - m) * x;
    }
};
struct HullDynamic : public multiset<Line> { // will maintain upper hull for maximum
    bool bad(iterator y) {
        auto z = next(y);
        if (y == begin()) {
            if (z == end()) return 0;
            return y->m == z->m && y->b <= z->b;
        }
        auto x = prev(y);
        if (z == end()) return y->m == x->m && y->b <= x->b;

        // **** May need long double typecasting here
        return (long double)(x->b - y->b)*(z->m - y->m) >=
               (long double)(y->b - z->b)*(y->m - x->m);
    }
    void insert_line(ll m, ll b) {
        auto y = insert({ m, b });
        y->succ = [=] { return next(y) == end() ? 0 : &*next(y); };
        if (bad(y)) { erase(y); return; }
        while (next(y) != end() && bad(next(y))) erase(next(y));
        while (y != begin() && bad(prev(y))) erase(prev(y));
    }
    ll eval(ll x) {
        auto l = *lower_bound((Line) { x, is_query });
        return l.m * x + l.b;
    }
};

int n, k;
ll a[10004];

int main()
{
    cin>>n>>k;
    FOR(i,1,n+1) cin>>a[i];
    vector<ll> dp(n+1,1e18);
    dp[0]=0;
    FOR(i,0,k)
    {
        HullDynamic hd;
        vector<ll> curr(n+1,1e18);

        FOR(j,1,n+1)
        {
            ll m=2*a[j];
            ll c=-a[j]*a[j]-dp[j-1];
            hd.insert_line(m,c);
            ll now=-hd.eval(a[j])+a[j]*a[j];
            curr[j]=now;
        }
        dp=curr;
    }
    prnt(dp[n]);

    return 0;
}

1.7 Edit Distance Recursive

int dp[34][34];
string a, b;

int editDistance(int i, int j)
{
    if (dp[i][j]!=-1)
        return dp[i][j];
    if (i==0)
        return dp[i][j]=j;
    if (j==0)
        return dp[i][j]=i;

    int cost;
    if (a[i-1]==b[j-1])
        cost=0;
    else
        cost=1;
    return dp[i][j]=min(editDistance(i-1,j)+1,min(editDistance(i,j-1)+1,
                        editDistance(i-1,j-1)+cost));
}

int main()
{
    ms(dp,-1);
    cin>>a>>b;
    prnt(editDistance(a.size(),b.size()));
    return 0;
}

1.8 IOI Aliens by koosaga

#include <bits/stdc++.h>
using namespace std;
typedef long long lint;
typedef pair<lint, lint> pi;
const int MAXN = 100005;
vector<pi> v;
pi dp[MAXN];

struct point{
    lint first;
    lint second;
    int cnt;
};

struct cht{
    vector<point> v;
    void clear(){ v.clear(); }
    long double cross(point a, point b){
        return ((long double)(b.second - a.second) / (b.first - a.first));
    }
    void add_line(int x, lint y, int z){
        while(v.size() >= 2 && cross(v[v.size()-2], v.back()) >
              cross(v.back(), (point){x, y, z})){
            v.pop_back();
        }
        v.push_back({x, y, z});
    }
    pi query(int x){
        int s = 0, e = v.size()-1;
        auto f = [&](int p){
            return v[p].first * x + v[p].second;
return v[p].first * x + v[p].second;
        };
        while(s != e){
            int m = (s+e)/2;
            if(f(m) <= f(m+1)) e = m;
            else s = m+1;
        }
        return pi(v[s].first * x + v[s].second, v[s].cnt);
    }
}cht;

pi trial(lint l){
    cht.clear();
    for(int i=1; i<=v.size(); i++){
        cht.add_line(2 * 2 * v[i-1].first, dp[i-1].first +
                     2ll * v[i-1].first * v[i-1].first, dp[i-1].second);
        dp[i] = cht.query(-v[i-1].second);
        dp[i].first += 2ll * v[i-1].second * v[i-1].second + l; // l is penalty
        dp[i].second++;
        if(i != v.size()){
            lint c = max(0ll, v[i-1].second - v[i].first);
            dp[i].first -= 2 * c * c;
        }
    }
    return dp[v.size()];
}

long long take_photos(int n, int m, int k, std::vector<int> r,
                      std::vector<int> c) {
    vector<pi> w;
    for(int i=0; i<n; i++){
        if(r[i] > c[i]) swap(r[i], c[i]);
        w.push_back({r[i]-1, c[i]});
    }
    sort(w.begin(), w.end(), [&](const pi &a, const pi &b){
        return pi(a.first, -a.second) < pi(b.first, -b.second);
    });
    for(auto &i : w){
        if(v.empty() || v.back().second < i.second){
            v.push_back(i);
        }
    }
    lint s = 0, e = 2e12;
    while(s != e){
        lint m = (s+e)/2;
        // See how many groups are made with penalty 2*m+1
        if(trial(2 * m + 1).second <= k) e = m;
        else s = m+1;
    }
    return trial(s * 2).first / 2 - s * k;
}

1.9 In-out DP

// The problem was to find the distance of the farthest node
// from each node. So we try to find such a distance considering
// each node as a root.
const int N=10004;
int n, f[N], g[N], ans[N];
vpii graph[N]; vi prefix[N], suffix[N];

void clear()
{
    FOR(i,1,n+1) graph[i].clear(), prefix[i].clear(), suffix[i].clear();
    ms(f,0); ms(g,0); ms(ans,0);
}

void goforgun(int u, int p=-1, int d=0)
{
    if(p==-1) ans[u]=f[u];

    FOR(j,0,graph[u].size())
    {
        int v=graph[u][j].first;
        int w=graph[u][j].second;

        if(v==p) continue;

        // considering that the jth child is deleted
        g[u]=max(prefix[u][j],suffix[u][j]);
        // if we are not at the root, we also consider the case
        // where the parent of u becomes a child of u when u is the root
        // d is the cost of the edge (p--u)
        if(p!=-1) g[u]=max(g[p]+d,g[u]);
        // updating the answer for v; here we consider the case when v is root
        ans[v]=max(f[v],g[u]+w);
        goforgun(v,u,w);
    }
}

// Precalculate prefix-max and suffix-max values
// max(prefix[u][j],suffix[u][j]) contains the maximum
// value of f[u] if the jth child was deleted
void goforfun(int u, int p=-1)
{
    FOR(j,0,graph[u].size())
    {
        int v=graph[u][j].first;
        int w=graph[u][j].second;
        if(v==p) continue;
        goforfun(v,u);
        f[u]=max(f[u],f[v]+w);
    }
    int pref=0, suff=0;
    FOR(j,0,graph[u].size())
    {
        int v=graph[u][j].first;
        int w=graph[u][j].second;
        // important, we want to keep the same size but avoid the parent
        if(v==p)
        {
            prefix[u].pb(0);
            continue;
        }
        prefix[u].pb(pref);
        pref=max(pref,f[v]+w);
    }
    FORr(j,graph[u].size()-1,0)
    {
        int v=graph[u][j].first;
        int w=graph[u][j].second;
        if(v==p)
        {
            suffix[u].pb(0);
            continue;
        }
        suffix[u].pb(suff);
        suff=max(suff,f[v]+w);
    }
    // Reversing is important
    REVERSE(suffix[u]);
}

int main()
{
    while(scanf("%d", &n)!=EOF)
    {
        int u, w;
        FOR(i,2,n+1)
        {
            scanf("%d%d", &u, &w);
            graph[u].pb(MP(i,w));
            graph[i].pb(MP(u,w));
        }
        goforfun(1);
        goforgun(1);
        FOR(i,1,n+1) prnt(ans[i]);
        clear();
    }
    return 0;
}

1.10 Knuth Optimization

/* This trick works only for optimizing DP over substrings for which the
optimal middle point depends monotonically on the end points. Let mid[L,R]
be the first middle point for the (L,R) substring which gives the optimal
result. It can be proven that mid[L,R-1] <= mid[L,R] <= mid[L+1,R] - this
means monotonicity of mid by L and R. Applying this optimization reduces
the time complexity from O(k^3) to O(k^2), because with fixed s (substring
length) we have m_right(L) = mid[L+1][R] = m_left(L+1). That's why the
nested L and M loops require not more than 2k iterations overall. */

for (int s = 0; s <= k; s++)           //s - length(size) of substring
    for (int L = 0; L + s <= k; L++) { //L - left point
        int R = L + s;                 //R - right point
        if (s < 2) {
            res[L][R] = 0;             //DP base - nothing to break
            mid[L][R] = L;             //mid is equal to left border
            continue;
        }
        int mleft = mid[L][R - 1];     //Knuth's trick: getting bounds on M
        int mright = mid[L + 1][R];
        res[L][R] = 1000000000000000000LL;
        for (int M = mleft; M <= mright; M++) { //iterating for M in the bounds only
            ll tres = res[L][M] + res[M][R] + (x[R] - x[L]);
            if (res[L][R] > tres) {    //relax current solution
                res[L][R] = tres;
                mid[L][R] = M;
            }
        }
    }
ll answer = res[0][k];

1.11 LCS

string a, b;
int dp[100][100];
string l;

void printLcs(int i, int j)
{
    if (a[i] == '\0' || b[j] == '\0')
    {
        cout << l << endl;
        return;
    }
    if (a[i] == b[j])
    {
        l += a[i];
        printLcs(i + 1, j + 1);
    }
    else
    {
        if (dp[i + 1][j] > dp[i][j + 1])
            printLcs(i + 1, j);
        else
            printLcs(i, j + 1);
    }
}

void printAll(int i, int j)
{
    if (a[i] == '\0' || b[j] == '\0')
    {
        prnt(l);
        return;
    }
    if (a[i] == b[j])
    {
        l += a[i];
        printAll(i + 1, j + 1);
        l.erase(l.end() - 1);
    }
    else
    {
        if (dp[i + 1][j] > dp[i][j + 1])
            printAll(i + 1, j);
        else if (dp[i + 1][j] < dp[i][j + 1])
            printAll(i, j + 1);
        else
        {
            printAll(i + 1, j);
            printAll(i, j + 1);
        }
    }
}

int lcslen (int i, int j)
{
    if (a[i] == '\0' || b[j] == '\0')
        return 0;
    if (dp[i][j] != -1)
        return dp[i][j];
    int ans = 0;
    if (a[i] == b[j])
    {
        ans = 1 + lcslen(i + 1, j + 1);
    }
    else
    {
        int x = lcslen(i, j + 1);
        int y = lcslen(i + 1, j);
        ans = max(x, y);
    }
    return dp[i][j] = ans;
}
int main()
{
    cin >> a >> b;
    ms(dp, -1);
    cout << lcslen(0, 0) << endl;
    printLcs(0, 0);
    l.clear();
    printAll(0, 0);
    return 0;
}

1.12 LIS nlogk

vector<int> d;
int ans, n;

int main() {
    scanf("%d", &n);
    for (int i = 0; i < n; i++) {
        int x;
        scanf("%d", &x);
        vector<int>::iterator it = lower_bound(d.begin(), d.end(), x);
        if (it == d.end()) d.push_back(x);
        else *it = x;
    }
    printf("LIS = %d", (int)d.size());
    return 0;
}

1.13 Matrix Expo Class

struct Matrix
{
    ll mat[MAX][MAX];

    Matrix(){}

    // This initialization is important.
    // Input matrix should be initialized separately
    void init(int sz)
    {
        ms(mat,0);
        for(int i=0; i<sz; i++) mat[i][i]=1;
    }
} aux;

void matMult(Matrix &m, Matrix &m1, Matrix &m2, int sz)
{
    ms(m.mat,0);

    // This only works for square matrices
    FOR(i,0,sz)
    {
        FOR(j,0,sz)
        {
            FOR(k,0,sz)
            {
                m.mat[i][k]=(m.mat[i][k]+m1.mat[i][j]*m2.mat[j][k])%mod;
            }
        }
    }
}

Matrix expo(Matrix &M, int n, int sz)
{
    Matrix ret;
    ret.init(sz);

    if(n==0) return ret;
    if(n==1) return M;

    Matrix P=M;

    while(n!=0)
    {
        if(n&1)
        {
            aux=ret;
            matMult(ret,aux,P,sz);
        }

        n>>=1;
        aux=P;
        matMult(P,aux,aux,sz);
    }
    return ret;
}

1.14 Palindrome in a String

bool isPalindrome[100][100];
// Find the palindromes of a string in O(n^2)

int main()
{
    ios_base::sync_with_stdio(0);
    // freopen("in.txt","r",stdin);

    string s;
    cin>>s;

    int len=s.size();

    for(int i=0; i<len; i++)
        isPalindrome[i][i]=true;

    for(int k=1; k<len; k++)
    {
        for(int i=0; i+k<len; i++)
        {
            int j=i+k;
            isPalindrome[i][j]=(s[i]==s[j]) &&
                (isPalindrome[i+1][j-1] || i+1>=j-1);
        }
    }

    return 0;
}

2 Data Structures

2.1 2D BIT

// Call with size of the grid
// Example: fenwick_tree_2d<int> Tree(n+1,m+1) for an n x m grid indexed from 1
template <class T>
struct fenwick_tree_2d
{
    vector<vector<T>> x;
    fenwick_tree_2d(int n, int m) : x(n, vector<T>(m)) { }
    void add(int k1, int k2, int a) { // x[k] += a
        for (; k1 < x.size(); k1 |= k1 + 1)
            for (int k = k2; k < x[k1].size(); k |= k + 1)
                x[k1][k] += a;
    }
    T sum(int k1, int k2) { // return x[0] + ... + x[k]
        T s = 0;
        for (; k1 >= 0; k1 = (k1 & (k1 + 1)) - 1)
            for (int k = k2; k >= 0; k = (k & (k + 1)) - 1) s += x[k1][k];
        return s;
    }
};

2.2 2D Segment Tree

// Given a grid a[][], we ask the sum of a subrectangle and also
// update some value on a cell.
void build_y(int vx, int lx, int rx, int vy, int ly, int ry) {
    if (ly == ry) {
        if (lx == rx)
            t[vx][vy] = a[lx][ly];
        else
            t[vx][vy] = t[vx*2][vy] + t[vx*2+1][vy];
    } else {
        int my = (ly + ry) / 2;
        build_y(vx, lx, rx, vy*2, ly, my);
        build_y(vx, lx, rx, vy*2+1, my+1, ry);
        t[vx][vy] = t[vx][vy*2] + t[vx][vy*2+1];
    }
}
void build_x(int vx, int lx, int rx) {
    if (lx != rx) {
        int mx = (lx + rx) / 2;
        build_x(vx*2, lx, mx);
        build_x(vx*2+1, mx+1, rx);
    }
    build_y(vx, lx, rx, 1, 0, m-1);
}
int sum_y(int vx, int vy, int tly, int try_, int ly, int ry) {
    if (ly > ry)
        return 0;
    if (ly == tly && try_ == ry)
        return t[vx][vy];
    int tmy = (tly + try_) / 2;
    return sum_y(vx, vy*2, tly, tmy, ly, min(ry, tmy))
         + sum_y(vx, vy*2+1, tmy+1, try_, max(ly, tmy+1), ry);
}
int sum_x(int vx, int tlx, int trx, int lx, int rx, int ly, int ry) {
    if (lx > rx)
        return 0;
    if (lx == tlx && trx == rx)
        return sum_y(vx, 1, 0, m-1, ly, ry);
    int tmx = (tlx + trx) / 2;
    return sum_x(vx*2, tlx, tmx, lx, min(rx, tmx), ly, ry)
         + sum_x(vx*2+1, tmx+1, trx, max(lx, tmx+1), rx, ly, ry);
}
void update_y(int vx, int lx, int rx, int vy, int ly, int ry, int x, int y, int new_val) {
    if (ly == ry) {
        if (lx == rx)
            t[vx][vy] = new_val;
        else
            t[vx][vy] = t[vx*2][vy] + t[vx*2+1][vy];
    } else {
        int my = (ly + ry) / 2;
        if (y <= my)
            update_y(vx, lx, rx, vy*2, ly, my, x, y, new_val);
        else
            update_y(vx, lx, rx, vy*2+1, my+1, ry, x, y, new_val);
        t[vx][vy] = t[vx][vy*2] + t[vx][vy*2+1];
    }
}
void update_x(int vx, int lx, int rx, int x, int y, int new_val) {
    if (lx != rx) {
        int mx = (lx + rx) / 2;
        if (x <= mx)
            update_x(vx*2, lx, mx, x, y, new_val);
        else
            update_x(vx*2+1, mx+1, rx, x, y, new_val);
    }
    update_y(vx, lx, rx, 1, 0, m-1, x, y, new_val);
}

2.3 A DSU Problem

/* Problem: You are given a graph with edge weights. Each node has some
color. You are also given some queries of the form (starting_node, weight).
A query means you can start from starting_node and visit only those edges
whose weight is <= the query weight. You need to print the color which
occurs the maximum number of times in your journey. If two colors occur
the same number of times, output the lower indexed one.
Solution: the idea is to use DSU small-to-large merging with binary lifting. */
const int LOG = 17;
int n, m, a[MAX], parent[MAX], up[MAX][LOG], weight[MAX][LOG];
// best (count,color_number) pair on a component after adding an edge
pii best[MAX];
// (weight,color) pair which means you can get max-occurrence of 'color' with 'weight'
vpii ans[MAX];
// (color,cnt) pair for each component, stores all of them
set<pii> color[MAX];
vector<array<int,3>> edges;

void clear()
{
    // Initialize weights and 2^j parent of each node.
    // Initially, each node is the 2^j-th parent of itself
    FOR(i,1,n+1)
    {
        FOR(j,0,LOG)
15

            weight[i][j] = 1e9, up[i][j]=i;
    }
    edges.clear();
    FOR(i,1,n+1) color[i].clear(), ans[i].clear();
}

int findParent(int r)
{
    if(parent[r]==r) return r;
    return parent[r]=findParent(parent[r]);
}

void merge(int u, int v)
{
    // merge u into v
    for(auto it: color[u])
    {
        auto curr = color[v].lower_bound({it.first,-1});
        int cnt = it.second;

        if(curr!=end(color[v]) && curr->first==it.first)
        {
            cnt+=curr->second;
            color[v].erase(curr);
        }

        color[v].insert({it.first,cnt});
        // best (cnt,color) pair for component v, -it.first for ensuring
        // that we get the smallest index while max-ing
        best[v] = max(best[v],{cnt,-it.first});
    }
}

void solve()
{
    FOR(i,1,n+1) parent[i] = i;
    for(auto it: edges)
    {
        auto [w,u,v] = it;

        u = findParent(u);
        v = findParent(v);

        if(color[u].size()>color[v].size())
            swap(u,v);

        if(u!=v)
        {
            merge(u,v);
            parent[u] = v;
            // after merging (u,v), we store best answer
            // for the component v in ans[v]
            ans[v].pb({w,-best[v].second});
            up[u][0] = v;
            weight[u][0] = w;
        }
        // note that if u==v, that edge and its weight won't matter as
        // we have already added the smaller edges and the nodes are
        // already connected
    }
    for(int i=1; i<LOG; i++)
    {
        for(int j=1; j<=n; j++)
        {
            // 2^i-th component of j in dsu
            up[j][i] = up[up[j][i-1]][i-1];
            // weight that we need to consider
            weight[j][i] = weight[up[j][i-1]][i-1];
        }
    }
}

int main()
{
    int test, cases = 1;

    scanf("%d", &test);
    while(test--)
    {
        scanf("%d%d", &n, &m);
        clear();
        FOR(i,1,n+1)
        {
            scanf("%d", &a[i]);
            // initializing each node as a single component
            color[i].insert({a[i],1});
            best[i] = {1,-a[i]};
            ans[i].pb({0,a[i]});
        }

        int u, v, w;

        FOR(i,1,m+1)
        {
            scanf("%d%d%d", &u, &v, &w);
            edges.pb({w,u,v});
        }
        sort(begin(edges),end(edges));
        solve();

        int last = 0, q;
        scanf("%d", &q);
        printf("Case #%d:\n", cases++);
        while(q--)
        {
            scanf("%d%d", &u, &w);
            // the problem used this xor-ing to make the solution online
            u^=last, w^=last;
            // u will be the component we can visit
            for(int i=LOG-1; i>=0; i--)
            {
                if(weight[u][i]<=w)
                    u=up[u][i];
            }
            int idx = lower_bound(ALL(ans[u]),pii{w,inf}) - begin(ans[u]) - 1;
            last = ans[u][idx].second;
            printf("%d\n", last);
        }
        clear();
    }
    return 0;
}

2.4 BIT Range Update Range Query

class BITrangeOperations
{
public:
    ll Tree[MAX+7][2];

    void update(int idx, ll x, bool t)
    {
        while(idx<=MAX)
        {
            Tree[idx][t]+=x;
            idx+=(idx&-idx);
        }
    }

    ll query(int idx, bool t)
    {
        ll sum=0;
        while(idx>0)
        {
            sum+=Tree[idx][t];
            idx-=(idx&-idx);
        }
        return sum;
    }

    // Returns sum from [0,x]
    ll sum(int x)
    {
        return (query(x,0)*x)-query(x,1);
    }

    void updateRange(int l, int r, ll val)
    {
        update(l,val,0);
        update(r+1,-val,0);
        update(l,val*(l-1),1);
        update(r+1,-val*r,1);
    }

    ll rangeSum(int l, int r)
    {
        return sum(r)-sum(l-1);
    }
};

2.5 Best Partial Sum in a Range
struct Node
{
    ll bestSum, bestPrefix, bestSuffix, segSum;
    Node()
    {
        bestSum=bestPrefix=bestSuffix=segSum=-INF;
    }
    void merge(Node &l, Node &r)
    {
        segSum=l.segSum+r.segSum;
        bestPrefix=max(l.bestPrefix,r.bestPrefix+l.segSum);
        bestSuffix=max(r.bestSuffix,r.segSum+l.bestSuffix);
        bestSum=max(max(l.bestSum,r.bestSum),l.bestSuffix+r.bestPrefix);
    }
}tree[150005];

void init(int node, int start, int end)
{
    if(start==end)
    {
        tree[node].bestSum=tree[node].segSum=a[start];
        tree[node].bestSuffix=tree[node].bestPrefix=a[start];
        return;
    }
    int left=node<<1;
    int right=left+1;
    int mid=(start+end)>>1;
    init(left,start,mid);
    init(right,mid+1,end);
    tree[node].merge(tree[left],tree[right]);
}
void update(int node, int start, int end, int i, int val)
{
    if(i<start || i>end)
        return;
    if(start>=i && end<=i)
    {
        tree[node].bestSum=tree[node].segSum=val;
        tree[node].bestSuffix=tree[node].bestPrefix=val;
        a[start]=val;
        return;
    }
    int left=node<<1;
    int right=left+1;
    int mid=(start+end)>>1;
    update(left,start,mid,i,val);
    update(right,mid+1,end,i,val);
    tree[node].merge(tree[left],tree[right]);
}
Node query(int node, int start, int end, int i, int j)
{
    if(i>end || j<start)
        return Node();
    if(start>=i && end<=j)
    {
        return tree[node];
    }
    int left=node<<1;
    int right=left+1;
    int mid=(start+end)>>1;
    Node l=query(left,start,mid,i,j);
    Node r=query(right,mid+1,end,i,j);
    Node n;
    n.merge(l,r);
    return n;
}

2.6 Binary Indexed Tree

ll Tree[MAX];
// This is equivalent to calculating lower_bound on the prefix sums array
// LOGN = log(N)
int bit_search(int v)
{
    int sum = 0;
    int pos = 0;

    for(int i=LOGN; i>=0; i--)
    {
        if(pos + (1 << i) < N and sum + Tree[pos + (1 << i)] < v)
        {
            sum += Tree[pos + (1 << i)];
            pos += (1 << i);
        }
    }
    // +1 because 'pos' will have position of largest value less than 'v'
    return pos + 1;
}

void update(int idx, ll x)
{
    // Let n be the number of elements and our queries be
    // of the form query(n)-query(l-1), i.e. range queries.
    // Then we should never put N or MAX in place of n here.
    while(idx<=n)
    {
        Tree[idx]+=x;
        idx+=(idx&-idx);
    }
}

ll query(int idx)
{
    ll sum=0;
    while(idx>0)
    {
        sum+=Tree[idx];
        idx-=(idx&-idx);
    }
    return sum;
}

int main()
{
    // For point update range query:
    // Point update: update(x,val);
    // Range query (a,b): query(b)-query(a-1);

    // For range update point query:
    // Range update (a,b): update(a,v); update(b+1,-v);
    // Point query: query(x);

    // Let's just consider only one update: Add v to [a, b] while the
    // rest of the elements of the array are 0.
    // Now, consider sum(0, x) for all possible x; three situations can arise:
    // 1. 0 <= x < a : which results in 0
    // 2. a <= x <= b : we get v * (x - (a-1))
    // 3. b < x < n : we get v * (b - (a-1))
    // This suggests that, if we can find v*x for any index x, then we
    // can get the sum(0, x) by subtracting T from it, where:
    // 1. 0 <= x < a : Sum should be 0, thus, T = 0
    // 2. a <= x <= b : Sum should be v*x - v*(a-1), thus, T = v*(a-1)
    // 3. b < x < n : Sum should be v*b - v*(a-1), thus, T = -v*b + v*(a-1)
    // As we can see, knowing T solves our problem; we can use
    // another BIT to store this additive amount, from which we can get:
    // 0 for x < a, v*(a-1) for x in [a..b], -v*b+v*(a-1) for x > b.
    // Now we have two BITs.
    // To add v in range [a, b]: Update(a, v), Update(b+1, -v) in the
    // first BIT and Update(a, v*(a-1)) and Update(b+1, -v*b) on the second BIT.
    // To get sum in range [0, x]: you simply do Query_BIT1(x)*x - Query_BIT2(x);
    // Now you know how to find range sum for [a, b]. Just find sum(b)
    // - sum(a-1) using the formula stated above.
    return 0;
}

2.7 Centroid Decomposition Sample

/* You are given a tree consisting of n vertices. A number is written on each
vertex; the number on vertex i is equal to a[i].
Let g(x,y) be the gcd of the numbers written on the vertices belonging to the
path from x to y, inclusive. For each i in 1 to 200000, count the number of
pairs (x,y) (1<=x<=y) such that g(x,y) equals i.
Note that 1<=x<=y does not really matter.
*/
vi graph[MAX];
int n, a[MAX], sub[MAX], total, cnt[MAX], cent, upto[MAX];
ll ans[MAX];
bool done[MAX];
set<int> take[MAX];

void dfs(int u, int p)
{
    sub[u] = 1;
    total++;

    for (auto v : graph[u])
    {
        if (v == p || done[v]) continue;
        dfs(v, u);
        sub[u] += sub[v];
    }
}

int getCentroid(int u, int p)
{
    // cout<<u<<" "<<sub[u]<<endl;
    for (auto v : graph[u])
    {
        if (!done[v] && v != p && sub[v] > total / 2)
            return getCentroid(v, u);
    }

    return u;
}

void go(int u, int p, int val)
{
    ans[val]++;
    take[cent].insert(val);
    cnt[val]++;

    for (auto v : graph[u])
    {
        if (!done[v] && v != p)
        {
            go(v, u, upto[v]);
        }
    }
}

void calc(int u, int p, int val)
{
    for (auto it : take[cent])
    {
        int g = gcd(val, it);
        ans[g] += cnt[it];
    }

    for (auto v : graph[u])
    {
        if (!done[v] && v != p)
        {
            calc(v, u, upto[v]);
        }
    }
}

void clean(int u, int p, int val)
{
    cnt[val] = 0;

    for (auto v : graph[u])
    {
        if (!done[v] && v != p)
        {
            clean(v, u, upto[v]);
        }
    }
}

void calcgcd(int u, int p, int val)
{
    upto[u] = val;

    for (auto v : graph[u])
    {
        if (!done[v] && v != p)
        {
            calcgcd(v, u, gcd(val, a[v]));
        }
    }
}

void solve(int u)
{
    total = 0;
    dfs(u, -1);

    cent = getCentroid(u, -1);
    calcgcd(cent, -1, a[cent]);

    // debug("cent",cent);
    done[cent] = true;

    for (auto v : graph[cent])
    {
        if (done[v]) continue;

        // cout<<"from centroid "<<cent<<" going to node: "<<v<<endl;
        calc(v, cent, upto[v]);
        go(v, cent, upto[v]);
    }

    for (auto v : graph[cent])
    {
        if (!done[v])
            clean(v, cent, upto[v]);
    }

    for (auto v : graph[cent])
    {
        if (!done[v])
        {
            solve(v);
        }
    }
}

int main()
{
    // ios_base::sync_with_stdio(0);
    // cin.tie(NULL); cout.tie(NULL);
    // freopen("in.txt","r",stdin);

    int test, cases = 1;

    scanf("%d", &n);
    FOR(i, 1, n + 1)
    {
        scanf("%d", &a[i]);
        ans[a[i]]++;
    }

    int u, v;

    FOR(i, 1, n)
    {
        scanf("%d%d", &u, &v);
        graph[u].pb(v);
        graph[v].pb(u);
    }

    solve(1);

    FOR(i, 1, MAX) if (ans[i]) printf("%d %lld\n", i, ans[i]);

    return 0;
}

2.8 Centroid Decomposition

int n, m, a, b, Table[MAX][20];
set<int> Graph[MAX];
int Level[MAX], nodeCnt, Subgraph[MAX], Parent[MAX], Ans[MAX];
void findLevel(int u)
{
    itrALL(Graph[u], it)
    {
        int v = *it;
        if (v != Table[u][0])
        {
            Table[v][0] = u;
            Level[v] = Level[u] + 1;
            findLevel(v);
        }
    }
}
void Process()
{
    Level[0] = 0;
    ms(Table, -1);
    Table[0][0] = 0;
    findLevel(0);
    // debug;
    for (int j = 1; 1 << j < n; j++)
    {
        for (int i = 0; i < n; i++)
        {
            if (Table[i][j - 1] != -1)
                Table[i][j] = Table[Table[i][j - 1]][j - 1];
        }
    }
    // debug;
}
int findLCA(int p, int q)
{
    if (Level[p] < Level[q]) swap(p, q);
    int x = 1;
    while (true)
    {
        if ((1 << (x + 1)) > Level[p]) break;
        x++;
    }
    FORr(i, x, 0)
    {
        if (Level[p] - (1 << i) >= Level[q])
            p = Table[p][i];
    }
    if (p == q) return p;
    FORr(i, x, 0)
    {
        if (Table[p][i] != -1 && Table[p][i] != Table[q][i])
        {
            p = Table[p][i];
            q = Table[q][i];
        }
    }
    return Table[p][0];
}
int Dist(int a, int b)
{
    return Level[a] + Level[b] - 2 * Level[findLCA(a, b)];
}
void findSubgraph(int u, int parent)
{
    Subgraph[u] = 1;
    nodeCnt++;
    itrALL(Graph[u], it)
    {
        int v = *it;
        if (v == parent) continue;
        findSubgraph(v, u);
        Subgraph[u] += Subgraph[v];
    }
}
int findCentroid(int u, int p)
{
    itrALL(Graph[u], it)
    {
        int v = *it;
        if (v == p) continue;
        if (Subgraph[v] > nodeCnt / 2) return findCentroid(v, u);
    }
    return u;
}
void Decompose(int u, int p)
{
    nodeCnt = 0;
    findSubgraph(u, u);
    int Cent = findCentroid(u, u);
    if (p == -1) p = Cent;
    Parent[Cent] = p;
    itrALL(Graph[Cent], it)
    {
        int v = *it;
        Graph[v].erase(Cent);
        Decompose(v, Cent);
    }
    Graph[Cent].clear();
}
void update(int u)
{
    int x = u;
    while (true)
    {
        Ans[x] = min(Ans[x], Dist(x, u));
        if (x == Parent[x]) break;
        x = Parent[x];
    }
}
int query(int u)
{
    int x = u;
    int ret = INF;
    while (true)
    {
        ret = min(ret, Dist(u, x) + Ans[x]);
        if (x == Parent[x]) break;
        x = Parent[x];
    }
    return ret;
}
int main()
{
    // ios_base::sync_with_stdio(0);
    // cin.tie(NULL); cout.tie(NULL);
    // freopen("in.txt","r",stdin);
    // All the nodes are initially blue
    // Then by updating, one node is colored red
    // Upon query, return the closest red node of the given node
    scanf("%d%d", &n, &m);
    FOR(i, 0, n - 1)
    {
        scanf("%d%d", &a, &b);
        a--, b--;
        Graph[a].insert(b);
        Graph[b].insert(a);
    }
    Process();
    // debug;
    Decompose(0, -1);
    FOR(i, 0, n) Ans[i] = INF;
    update(0);
    while (m--)
    {
        int t, x;
        scanf("%d%d", &t, &x);
        x--;
        if (t == 1) update(x);
        else printf("%d\n", query(x));
    }
    return 0;
}

2.9 Counting Inversions with BIT

ll tree[200005];
int n, a[200005], b[200005];

void update(int idx, ll x)
{
    while(idx<=n)
    {
        tree[idx]+=x;
        idx+=(idx&-idx);
    }
}

// Return type widened to ll: the inversion count can overflow an int
ll query(int idx)
{
    ll sum=0;
    while(idx>0)
    {
        sum+=tree[idx];
        idx-=(idx&-idx);
    }
    return sum;
}

int main()
{
    // ios_base::sync_with_stdio(0);
    // cin.tie(NULL); cout.tie(NULL); // No 'endl'
    // freopen("in.txt","r",stdin);
    int test;
    // cin>>test;
    scanf("%d", &test);
    while(test--)
    {
        ms(tree,0);
        scanf("%d", &n);
        FOR(i,1,n+1)
        {
            scanf("%d", &a[i]);
            b[i]=a[i];
        }

        sort(b+1,b+n+1);
        // Compressing the array
        FOR(i,1,n+1)
        {
            int rank=int(lower_bound(b+1,b+1+n,a[i])-b-1);
            a[i]=rank+1;
        }
        // FOR(i,1,n+1) cout<<a[i]<<" "; cout<<endl;
        ll ans=0;
        FORr(i,n,1)
        {
            ans+=query(a[i]-1);
            update(a[i],1);
        }
        // prnt(ans);
        printf("%lld\n",ans);
    }

    return 0;
}

2.10 DSU on Tree Sample

/*
You are given a rooted tree with root in vertex 1. Each vertex is coloured in
some colour. Let's call colour c dominating in the subtree of vertex v if there
are no other colours that appear in the subtree of vertex v more times than
colour c. So it's possible that two or more colours will be dominating in the
subtree of some vertex. The subtree of vertex v is the vertex v and all other
vertices that contain vertex v in each path to the root.
For each vertex v find the sum of all dominating colours in the subtree of
vertex v.
*/

int u, v, n, color[MAX], parent[MAX];
vi graph[MAX];
map<int, int> inNode[MAX];
int mxCnt[MAX]; ll sum[MAX], out[MAX];
void merge(int u, int v)
{
    // ***Important: swapping parents
    if (inNode[parent[u]].size() < inNode[parent[v]].size())
        swap(parent[u], parent[v]);

    for (auto it : inNode[parent[v]])
    {
        int f = it.first, s = it.second;

        inNode[parent[u]][f] += s;
        if (inNode[parent[u]][f] > mxCnt[parent[u]])
        {
            mxCnt[parent[u]] = inNode[parent[u]][f];
            sum[parent[u]] = f;
        }
        else if (inNode[parent[u]][f] == mxCnt[parent[u]])
        {
            sum[parent[u]] += f;
        }
    }

    inNode[parent[v]].clear();
}

void dfs(int u, int p)
{
    for (auto v : graph[u])
    {
        if (p == v) continue;
        dfs(v, u);
        merge(u, v);
    }

    out[u] = sum[parent[u]];
}

int main()
{
    ios_base::sync_with_stdio(0);
    // cin.tie(NULL); cout.tie(NULL);
    // freopen("in.txt","r",stdin);

    int test, cases = 1;

    n = getnum();

    FOR(i, 1, n + 1)
    {
        color[i] = getnum();
        parent[i] = i;
        inNode[i][color[i]] = 1;
        sum[i] = color[i];
        mxCnt[i] = 1;
    }

    FOR(i, 1, n)
    {
        u = getnum();
        v = getnum();
        graph[u].pb(v);
        graph[v].pb(u);
    }

    dfs(1, 0);

    FOR(i, 1, n + 1) printf("%lld ", out[i]); puts("");

    return 0;
}

2.11 Dynamic Segment Tree with Lazy Prop

// Solves SPOJ HORRIBLE. Range addition and range sum query.
struct node {
    int from, to;
    long long value, lazy;
    node *left, *right;
    node() {
        from = 1;
        to = 1e5;
        value = 0;
        lazy = 0;
        left = NULL;
        right = NULL;
    }
    void extend() {
        if (left == NULL) {
            left = new node();
            right = new node();
            left->from = from;
            left->to = (from + to) >> 1;
            right->from = ((from + to) >> 1) + 1;
            right->to = to;
        }
    }
};
node *root;

void update_tree(node *curr, int left, int right, long long value) {
    if (curr->lazy) {
        curr->value += (curr->to - curr->from + 1) * curr->lazy;
        if (curr->from != curr->to) {
            curr->extend();
            curr->left->lazy += curr->lazy;
            curr->right->lazy += curr->lazy;
        }
        curr->lazy = 0;
    }
    if ((curr->from) > (curr->to) || (curr->from) > right ||
        (curr->to) < left) return;
    if (curr->from >= left && curr->to <= right) {
        curr->value += (curr->to - curr->from + 1) * value;
        if (curr->from != curr->to) {
            curr->extend();
            curr->left->lazy += value;
            curr->right->lazy += value;
        }
        return;
    }
    curr->extend();
    update_tree(curr->left, left, right, value);
    update_tree(curr->right, left, right, value);
    curr->value = curr->left->value + curr->right->value;
}
long long query_tree(node *curr, int left, int right) {
    if ((curr->from) > (curr->to) || (curr->from) > right ||
        (curr->to) < left) return 0;
    if (curr->lazy) {
        curr->value += (curr->to - curr->from + 1) * curr->lazy;
        curr->extend();
        curr->left->lazy += curr->lazy;
        curr->right->lazy += curr->lazy;
        curr->lazy = 0;
    }
    if (curr->from >= left && curr->to <= right) return curr->value;
    long long q1, q2;
    curr->extend();
    q1 = query_tree(curr->left, left, right);
    q2 = query_tree(curr->right, left, right);
    return q1 + q2;
}
int main() {
    int tests, n, queries;
    int type, p, q;
    long long v;
    int i;
    scanf("%d", &tests);
    while (tests--) {
        root = new node();
        scanf("%d %d", &n, &queries);
        for (i = 1; i <= queries; i++) {
            scanf("%d", &type);
            if (type == 0) {
                scanf("%d %d %lld", &p, &q, &v);
                if (p > q) swap(p, q);
                update_tree(root, p, q, v);
            }
            else {
                scanf("%d %d", &p, &q);
                if (p > q) swap(p, q);
                printf("%lld\n", query_tree(root, p, q));
            }
        }
    }
    return 0;
}

2.12 Dynamic Segment Tree

/*************************************************************************************
Implicit segment tree with addition on the interval
and getting the value of some element.
Works on the intervals like [1..10^9].
O(logN) on query, O(NlogN) of memory.
Author: Bekzhan Kassenov.
Based on problem 3327 from informatics.mccme.ru
http://informatics.mccme.ru/moodle/mod/statements/view.php?chapterid=3327
*************************************************************************************/
#include <iostream>
#include <cstdio>
#include <cstdlib>

using namespace std;

typedef long long ll;
struct Node {
    ll sum;
    Node *l, *r;
    Node() : sum(0), l(NULL), r(NULL) { }
};
void add(Node *v, int l, int r, int q_l, int q_r, ll val) {
    if (l > r || q_r < l || q_l > r)
        return;
    if (q_l <= l && r <= q_r) {
        v -> sum += val;
        return;
    }
    int mid = (l + r) >> 1;
    if (v -> l == NULL)
        v -> l = new Node();
    if (v -> r == NULL)
        v -> r = new Node();
    add(v -> l, l, mid, q_l, q_r, val);
    add(v -> r, mid + 1, r, q_l, q_r, val);
}
ll get(Node *v, int l, int r, int pos) {
    if (!v || l > r || pos < l || pos > r)
        return 0;
    if (l == r)
        return v -> sum;
    int mid = (l + r) >> 1;
    return v -> sum + get(v -> l, l, mid, pos) + get(v -> r, mid + 1, r, pos);
}
int n, m, t, x, y, val;
char c;
int main() {
    Node *root = new Node();

    scanf("%d", &n);
    for (int i = 0; i < n; i++) {
        scanf("%d", &x);
        add(root, 0, n - 1, i, i, x);
    }
    scanf("%d", &m);
    for (int i = 0; i < m; i++) {
        scanf("\n%c", &c);
        if (c == 'a') {
            scanf("%d%d%d", &x, &y, &val);
            add(root, 0, n - 1, --x, --y, val);
        } else {
            scanf("%d", &x);
            printf("%I64d ", get(root, 0, n - 1, --x));
        }
    }
    return 0;
}

2.13 Fenwick Tree 3D

#define MAX 205
/// 3D Fenwick tree, Range updates and point queries
struct Fenwick3D{
    int n, m, r, tree[MAX][MAX][MAX];

    Fenwick3D(){
    }

    Fenwick3D(int a, int b, int c){
        clr(tree);
        n = a, m = b, r = c;
    }

    /// Add v to the cube from lower-right [i,j,k] to upper-left [1,1,1]
    void update(int i, int j, int k, int v){
        if ((i < 0) || (j < 0) || (i > n) || (j > m) || (k < 0) || (k > r)) return;
        while (i){
            int x = j;
            while (x){
                int y = k;
                while (y){
                    tree[i][x][y] += v;
                    y ^= (y & (-y));
                }
                x ^= (x & (-x));
            }
            i ^= (i & (-i));
        }
    }

    /// Add v to the cube from upper-left [x1,y1,z1] to lower-right [x2,y2,z2]
    void update(int x1, int y1, int z1, int x2, int y2, int z2){
        update(x2, y2, z2, 1), update(x1 - 1, y1 - 1, z2, 1);
        update(x1 - 1, y2, z1 - 1, 1), update(x2, y1 - 1, z1 - 1, 1);
        update(x1 - 1, y2, z2, -1), update(x2, y1 - 1, z2, -1);
        update(x2, y2, z1 - 1, -1), update(x1 - 1, y1 - 1, z1 - 1, -1);
    }

    /// Query for the value at index [i][j][k]
    int query(int i, int j, int k){
        int res = 0;
        while (i <= n){
            int x = j;
            while (x <= m){
                int y = k;
                while (y <= r){
                    res += tree[i][x][y];
                    y += (y & (-y));
                }
                x += (x & (-x));
            }
            i += (i & (-i));
        }
        return res;
    }
};

2.14 GP Hash Table

#include <ext/pb_ds/assoc_container.hpp>
using namespace __gnu_pbds;

// For integer
gp_hash_table<int, int> table;

// Custom hash function approach is better
const int RANDOM =
    chrono::high_resolution_clock::now().time_since_epoch().count();
struct chash {
    int operator()(int x) const { return x ^ RANDOM; }
};
gp_hash_table<int, int, chash> table;

const ll TIME =
    chrono::high_resolution_clock::now().time_since_epoch().count();
const ll SEED = (ll)(new ll);
const ll RANDOM = TIME ^ SEED;
const ll MOD = (int)1e9+7;
const ll MUL = (int)1e6+3;
struct chash{
    ll operator()(ll x) const { return std::hash<ll>{}((x ^ RANDOM) % MOD * MUL); }
};
gp_hash_table<ll, int, chash> table;

unsigned hash_f(unsigned x) {
    x = ((x >> 16) ^ x) * 0x45d9f3b;
    x = ((x >> 16) ^ x) * 0x45d9f3b;
    x = (x >> 16) ^ x;
    return x;
}
struct chash {
    int operator()(ll x) const { return hash_f(x); }
};
gp_hash_table<ll, int, chash> table[N][N];
// so table[i][j][k] is storing an integer for corresponding k as hash
unsigned hash_combine(unsigned a, unsigned b) { return a * 31 + b; }

// For pairs
// The better the hash function, the less collisions
// Note that hash function should not be costly
struct chash {
    int operator()(pii x) const { return x.first * 31 + x.second; }
};
gp_hash_table<pii, int, chash> table;

// Another recommended hash function by neal on CF
struct custom_hash {
    static uint64_t splitmix64(uint64_t x) {
        // http://xorshift.di.unimi.it/splitmix64.c
        x += 0x9e3779b97f4a7c15;
        x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9;
        x = (x ^ (x >> 27)) * 0x94d049bb133111eb;
        return x ^ (x >> 31);
    }

    size_t operator()(uint64_t x) const {
        static const uint64_t FIXED_RANDOM =
            chrono::steady_clock::now().time_since_epoch().count();
        return splitmix64(x + FIXED_RANDOM);
    }
};
gp_hash_table<ll,int,custom_hash> safe_gp_hash_table;
unordered_map<ll,int,custom_hash> safe_umap;

typedef gp_hash_table<int, int, hash<int>,
    equal_to<int>, direct_mod_range_hashing<int>, linear_probe_fn<>,
    hash_standard_resize_policy<hash_prime_size_policy,
    hash_load_check_resize_trigger<true>, true>>
    gp;
gp Tree;
// Now Tree can probably be used for fenwick, indices can be long long
// S is an offset to handle negative values
// If values can be >= -1e9, S=1e9+1
// maxfen is the MAXN in fenwick, in this case it was 2e9+2
// Note that it was okay to declare gp in integer as the values were
// still in the range of int.
void add(long long p, int v) {
    for (p += S; p < maxfen; p += p & -p)
        Tree[p] += v;
}
int sum(int p) {
    int ans = 0;
    for (p += S; p; p ^= p & -p)
        ans += Tree[p];
    return ans;
}

2.15 HLD Sample Problem

// Query 1: From u to v, print the indices of the minimum k numbers,
// and these numbers are removed
// Query 2: Add a value to the subtree of node v
int n, m, q;
int parent[MAX], depth[MAX], subsize[MAX], st[MAX], at[MAX];
int nxt[MAX], chain[MAX], pos, cnt;
ll in[MAX], aux[MAX];
int chainsz[MAX], head[MAX];
vi graph[MAX];
vector<ll> girls[MAX];

class SegmentTree
{
public:
    pair<ll,int> Tree[4*MAX];
    ll Lazy[4*MAX];
    void build(int node, int l, int r)
    {
        Lazy[node]=0;
        if(l==r)
        {
            Tree[node]={aux[l],at[l]};
            return;
        }
        int mid=(l+r)/2;
        build(lc,l,mid);
        build(rc,mid+1,r);
        Tree[node]=min(Tree[lc],Tree[rc]);
    }
    void pushdown(int node)
    {
        if(Lazy[node])
        {
            Lazy[lc]+=Lazy[node];
            Lazy[rc]+=Lazy[node];
            Tree[lc].first+=Lazy[node];
            Tree[rc].first+=Lazy[node];
            Lazy[node]=0;
        }
    }
    void update(int node, int l, int r, int x, int y, ll val)
    {
        if(x>r || y<l) return;
        if(x<=l && r<=y)
        {
            Tree[node].first+=val;
            Lazy[node]+=val;
            return;
        }

        if(l!=r) pushdown(node);

        int mid=(l+r)/2;
        update(lc,l,mid,x,y,val);
        update(rc,mid+1,r,x,y,val);
        Tree[node]=min(Tree[lc],Tree[rc]);
    }
    pair<ll,int> query(int node, int l, int r, int x, int y)
    {
        if(x>r || y<l) return {INF,inf};
        if(x<=l && r<=y) return Tree[node];
        if(l!=r) pushdown(node);

        int mid=(l+r)/2;
        return min(query(lc,l,mid,x,y),query(rc,mid+1,r,x,y));
    }
} segtree;

class HLD
{
public:
    void init(int n)
    {
        for(int i=0; i<=n; i++) nxt[i]=-1, chainsz[i]=0;
        cnt=pos=1;
    }
    void dfs(int u, int p=-1, int d=0)
    {
        parent[u]=p;
        subsize[u]=1;
        depth[u]=d;

        for(auto v: graph[u])
        {
            if(v==p) continue;
            dfs(v,u,d+1);
            subsize[u]+=subsize[v];

            if(nxt[u]==-1 || subsize[v]>subsize[nxt[u]])
                nxt[u]=v;
        }
    }
    void decompose(int u, int p=-1)
    {
        chain[u]=cnt-1;
        at[pos]=u;
        st[u]=pos++; // Flattening nodes in order of heavy edges!
        aux[st[u]]=in[u];
        if(!chainsz[cnt-1]) head[cnt-1]=u;
        chainsz[cnt-1]++;

        if(nxt[u]!=-1) decompose(nxt[u],u);
        for(auto v: graph[u])
        {
            if(v==p || v==nxt[u]) continue;
            ++cnt;
            decompose(v,u);
        }
    }
    pair<ll,int> query(int u, int v)
    {
        pair<ll,int> ret={INF,inf};

        while(chain[u]!=chain[v])
        {
            if(depth[head[chain[u]]]<depth[head[chain[v]]])
                swap(u,v);
            int start=head[chain[u]];
            ret=min(ret,segtree.query(1,1,n,st[start],st[u]));
            u=parent[start];
        }

        if(depth[u]>depth[v]) swap(u,v);
        ret=min(ret,segtree.query(1,1,n,st[u],st[v]));
        return ret;
    }
} hld;

void handle(int u, int v, int k)
{
    vi ans;
    while(k--)
    {
        auto out=hld.query(u,v);
        if(out.first>=INF) break;
        // The node which has current minimum weight of a girl
        int idx=out.second;
        ll last=girls[idx].back();
        ans.pb(last);
        girls[idx].pop_back();
        segtree.update(1,1,n,st[idx],st[idx],abs(girls[idx].back()-last));
    }
    printf("%d ", (int)ans.size());
    FOR(i,0,ans.size()) printf("%d ", ans[i]);
    puts("");
}

int main()
{
    int test, cases = 1;

    scanf("%d%d%d", &n, &m, &q);
    int u, v;
    FOR(i,0,n-1)
    {
        scanf("%d%d", &u, &v);
        graph[u].pb(v);
        graph[v].pb(u);
    }
    FOR(i,1,m+1)
    {
        scanf("%d", &pos);
        // Girl i is in node pos, same node can have multiple girls
        // Initial weight of each girl equals her index
        girls[pos].pb(i);
    }

    FOR(i,1,n+1) girls[i].pb(INF);
    FOR(i,1,n+1)
    {
        REVERSE(girls[i]);
        in[i]=girls[i].back();
    }
    hld.init(n);
    hld.dfs(1);
    hld.decompose(1);
    segtree.build(1,1,n);
    while(q--)
    {
        int t, k;
        scanf("%d", &t);
        if(t==1)
        {
            scanf("%d%d%d", &u, &v, &k);
            handle(u,v,k);
        }
        else
        {
            scanf("%d%d", &v, &k);
            segtree.update(1,1,n,st[v],st[v]+subsize[v]-1,k);
        }
    }
    return 0;
}
2.16 HashMap

#include <bits/stdtr1c++.h>

#define clr(ar) memset(ar, 0, sizeof(ar))
#define read() freopen("lol.txt", "r", stdin)
#define dbg(x) cout << #x << " = " << x << endl
#define ran(a, b) ((((rand() << 15) ^ rand()) % ((b) - (a) + 1)) + (a))
using namespace std;

struct hashmap{
    int t, sz, hmod;
    vector <int> id;
    vector <long long> key, val;
    inline int nextPrime(int n){
        for (int i = n; ;i++){
            for (int j = 2; ;j++){
                if ((j * j) > i) return i;
                if ((i % j) == 0) break;
            }
        }
        return -1;
    }

    inline int pos(unsigned long long x){
        int i = x % hmod;
        while (id[i] == t && key[i] != x) i++;
        return i;
    }

    inline void insert(long long x, long long v){
        int i = pos(x);
        if (id[i] != t) sz++;
        key[i] = x, val[i] = v, id[i] = t;
    }

    inline void erase(long long x){
        int i = pos(x);
        if (id[i] == t) key[i] = 0, val[i] = 0, id[i] = 0, sz--;
    }

    inline long long find(long long x){
        int i = pos(x);
        return (id[i] != t) ? -1 : val[i];
    }

    inline bool contains(long long x){
        int i = pos(x);
        return (id[i] == t);
    }

    inline void add(long long x, long long v){
        int i = pos(x);
        (id[i] == t) ? (val[i] += v) : (key[i] = x, val[i] = v, id[i] = t, sz++);
    }

    inline int size(){
        return sz;
    }

    hashmap(){}
    hashmap(int m){
        srand(time(0));
        m = (m << 1) - ran(1, m);
        hmod = nextPrime(max(100, m));
        sz = 0, t = 1;
        id.resize(hmod + 0x1FF, 0);
        key.resize(hmod + 0x1FF, 0), val.resize(hmod + 0x1FF, 0);
    }
    void clear(){t++;}
};

int main(){
}

2.17 Heavy Light Decomposition

int parent[MAX], depth[MAX], subsize[MAX];
int nxt[MAX], chain[MAX], st[MAX], pos, cnt;
int chainsz[MAX], head[MAX];
vi graph[MAX];

class HLD
{
public:
    void init(int n)
    {
        for(int i=0; i<=n; i++) nxt[i]=-1, chainsz[i]=0;
        cnt=pos=1;
    }
    void dfs(int u, int p=-1, int d=0)
    {
        parent[u]=p;
        subsize[u]=1;
        depth[u]=d;

        for(auto v: graph[u])
        {
            if(v==p) continue;
            dfs(v,u,d+1);
            subsize[u]+=subsize[v];

            if(nxt[u]==-1 || subsize[v]>subsize[nxt[u]])
                nxt[u]=v;
        }
    }
    void decompose(int u, int p=-1)
    {
        chain[u]=cnt-1;
        // May need to update in segment tree on pos with some val[u]
        st[u]=pos++;
        // Take the node value to corresponding position
        // val[st[u]]=(ll)c[u]*a+b;
        if(!chainsz[cnt-1]) head[cnt-1]=u;
        chainsz[cnt-1]++;

        if(nxt[u]!=-1) decompose(nxt[u],u);
        for(auto v: graph[u])
        {
            if(v==p || v==nxt[u]) continue;
            ++cnt;
            decompose(v,u);
        }
    }
    void update(int u, int v, ll add)
    {
        while(chain[u]!=chain[v])
        {
            if(depth[head[chain[u]]]<depth[head[chain[v]]])
                swap(u,v);
            int start=head[chain[u]];
            segtree.update(1,1,n,st[start],st[u],add);
            u=parent[start];
        }
        if(depth[u]>depth[v]) swap(u,v);
        segtree.update(1,1,n,st[u],st[v],add);
    }

    int query(int u, int v)
    {
        int ret=0;
        while(chain[u]!=chain[v])
        {
            if(depth[head[chain[u]]]<depth[head[chain[v]]])
                swap(u,v);
            int start=head[chain[u]];
            // query on respective ds
            ret+=bit.query(st[start],st[u]);
            u=parent[start];
        }

        if(depth[u]>depth[v]) swap(u,v);
        ret+=bit.query(st[u],st[v]);

        return ret;
    }
} hld;

2.18 How Many Values Less than a Given Value

// How many values in a range are less than or equal to the given value?
// The key idea is to sort the values under a node in the segment tree
// and use binary search to find the required count
// Complexity is O(nlog^2n) for building
// The actual problem needed the number of such values and the cumulative
// sum of them
// Tree[node].All has all the values and Tree[node].Pref has the prefix sums
// Remember: upper_bound gives the number of values less than or equal to ret.first += right.first; ret.second += right.second;
given value in a sorted range return ret;
struct info }
{
vector<ll> All, Pref;
} Tree[MAX * 4];
ll T[MAX], Prefix[MAX]; 2.19 Li Chao Tree Lines
void build(int node, int l, int r)
{
typedef int ftype;
if (l == r)
typedef complex<ftype> point;
{
#define x real
Tree[node].All.pb(T[l]);
#define y imag
Tree[node].Pref.pb(T[l]);
return;
ftype dot(point a, point b) {
}
return (conj(a) * b).x();
int mid = (l + r) / 2;
}
build(lc, l, mid);
build(rc, mid + 1, r);
ftype f(point a, ftype x) {
for (auto it : Tree[lc].All)
return dot(a, {x, 1});
Tree[node].All.pb(it);
}
for (auto it : Tree[rc].All)
Tree[node].All.pb(it);
const int maxn = 2e5;
SORT(Tree[node].All);
ll now = 0;
point line[4 * maxn];
for (auto it : Tree[node].All)
{
void add_line(point nw, int v = 1, int l = 0, int r = maxn) {
Tree[node].Pref.pb(now + it);
int m = (l + r) / 2;
now += it;
bool lef = f(nw, l) < f(line[v], l);
}
bool mid = f(nw, m) < f(line[v], m);
}
if(mid) {
pair<ll, ll> query(int node, int l, int r, int x, int y, int val)
swap(line[v], nw);
{
}
if (x > r || y < l) return MP(0LL, 0LL);
if(r - l == 1) {
if (x <= l && r <= y)
return;
{
} else if(lef != mid) {
int idx = upper_bound(Tree[node].All.begin(),
add_line(nw, 2 * v, l, m);
Tree[node].All.end(), val) - Tree[node].All.begin();
} else {
if (idx > 0) return MP(Tree[node].Pref[idx - 1], idx);
add_line(nw, 2 * v + 1, m, r);
return MP(0LL, 0LL);
}
}
}
int mid = (l + r) / 2;
pair<ll, ll> ret, left, right;
int get(int x, int v = 1, int l = 0, int r = maxn) {
left = query(lc, l, mid, x, y, val);
int m = (l + r) / 2;
right = query(rc, mid + 1, r, x, y, val);
if(r - l == 1) {
ret.first += left.first; ret.second += left.second;
return f(line[v], x);
33

} else if(x < m) {


return min(f(line[v], x), get(x, 2 * v, l, m)); int compare(ll x, ll y)
} else { {
return min(f(line[v], x), get(x, 2 * v + 1, m, r)); if(x<y) return -1;
} return x>y;
} }

void update(int node, int l, int r, int x, int y, fun fx)


{
2.20 Li Chao Tree Parabolic Sample if(x>r || y<l) return;
if(x<=l && r<=y)
{
/* Problem:
// cout<<"x-y: "<<x<<" "<<y<<endl;
Given n functions yi(x) = a0 + a1x + a2x^2 + a3x^3 and q queries. For
// cout<<"l-r: "<<l<<" "<<r<<endl;
each query, you are
given an integer t and you are required to find out yi (i i n) that
// fx - new function, Tree[node] - old function
minimizes the value of yi(t).
int mid=(l+r)/2;
Li Chao Tree works for functions that intersect only in one point.
int fl=compare(fx.eval(l),Tree[node].eval(l));
Constraints of the problem were
int fr=compare(fx.eval(r),Tree[node].eval(r));
such that there would always be at most on intersecting point between to
int fm1=compare(fx.eval(mid),Tree[node].eval(mid));
functions with x>=350. So
int fm2=compare(fx.eval(mid+1),Tree[node].eval(mid+1));
we bruteforced for x<350 and built Li Chao Tree for x>=350
*/
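For reference, the same idea in its standard form — lines y = m*x + c with point queries for the minimum over integer x. This is a self-contained sketch, not the contest code above; all names are illustrative.

```cpp
#include <algorithm>
#include <climits>
#include <utility>
#include <vector>

// Li Chao tree over integer x in [0, SZ): insert lines, query min at a point.
struct LiChao {
    static const int SZ = 1 << 17;
    struct Line {
        long long m, c;
        long long at(long long x) const { return m * x + c; }
    };
    std::vector<Line> seg;
    LiChao() : seg(2 * SZ, Line{0, LLONG_MAX / 2}) {}
    void add(Line nw, int v = 1, int l = 0, int r = SZ) {
        int mid = (l + r) / 2;
        bool lef = nw.at(l) < seg[v].at(l);
        bool mdd = nw.at(mid) < seg[v].at(mid);
        if (mdd) std::swap(seg[v], nw);      // keep the line winning at the midpoint
        if (r - l == 1) return;
        if (lef != mdd) add(nw, 2 * v, l, mid);      // loser may still win on the left
        else add(nw, 2 * v + 1, mid, r);             // ...or on the right
    }
    long long query(long long x, int v = 1, int l = 0, int r = SZ) {
        long long res = seg[v].at(x);
        if (r - l == 1) return res;
        int mid = (l + r) / 2;
        if (x < mid) return std::min(res, query(x, 2 * v, l, mid));
        return std::min(res, query(x, 2 * v + 1, mid, r));
    }
};
```

add() keeps the line that wins at the segment midpoint and pushes the loser into the half where it can still win, so each insertion costs O(log SZ).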
// New function is worse for l to r, no point of adding it.
if(fl>=0 && fr>=0) return;
const int N=1e5; // Max query points
const int offset=350; // Bruteforce for this limit
// New function is better for l to r, add it
struct fun
if(fl<=0 && fr<=0)
{
{
ll a, b, c, d;
Tree[node]=fx;
fun(){a=0, b=0, c=0, d=INF;}
return;
fun(ll a, ll b, ll c, ll d) :
}
a(a), b(b), c(c), d(d) { }
ll eval(ll x)
// New function is better for l to mid, add it. Old function
{
can still be
return a*x*x*x+b*x*x+c*x+d;
// better for right segment.
}
if(fl<=0 && fm1<=0)
} Tree[4*N+5];
{
// Sending the old function to right segment
ll aux[offset+5];
update(rc,mid+1,r,x,y,Tree[node]);
Tree[node]=fx;
void init()
return;
{
}
ms(aux,63);
FOR(i,1,4*N) Tree[i]=fun();
}
// New function is worse for l to mid, but this can be better return ret;
for right segment. }
if(fl>=0 && fm1>=0)
{ void calc(fun &fx)
update(rc,mid+1,r,x,y,fx); {
return; for(int i=0; i<offset; i++)
} {
aux[i]=min(aux[i],fx.eval(i));
// New function worse for mid+1 to r, but can be better for // prnt(fx.eval(i));
left segment }
if(fm2>=0 && fr>=0) }
{
update(lc,l,mid,x,y,fx); int main()
return; {
} // ios_base::sync_with_stdio(0);
// cin.tie(NULL); cout.tie(NULL);
// New function better for mid+1 to r, add it, old function // freopen("in.txt","r",stdin);
can still be better for left.
if(fm2<=0 && fr<=0) int test, cases = 1;
{
update(lc,l,mid,x,y,Tree[node]); scanf("%d", &test);
Tree[node]=fx;
return; int n;
}
} while(test--)
else if(l<r) {
{ scanf("%d", &n);
int mid=(l+r)/2;
init();
update(lc,l,mid,x,y,fx); int a, b, c, d;
update(rc,mid+1,r,x,y,fx);
} FOR(i,0,n)
} {
scanf("%d%d%d%d", &d, &c, &b, &a);
ll query(int node, int l, int r, int x) fun fx=fun(a,b,c,d);
{ calc(fx);
if(l==r) return Tree[node].eval(x); update(1,1,N,offset,N,fx);
}
int mid=(l+r)/2;
int q, x;
ll ret=Tree[node].eval(x); scanf("%d", &q);
if(x<=mid) ret=min(ret,query(lc,l,mid,x)); while(q--)
else ret=min(ret,query(rc,mid+1,r,x)); {
scanf("%d", &x);
inline bool comp(const info &a, const info &b)
if(x<offset) printf("%lld\n", aux[x]); {
else if(a.l/Block==b.l/Block) return a.r<b.r;
{ return a.l<b.l;
ll out=query(1,1,N,x); }
printf("%lld\n", out);
} inline void Add(int idx)
} {
} ans+=(2*cnt[a[idx]]+1)*a[idx];
cnt[a[idx]]++;
return 0;
} /* Actual meaning of the above code
ans-=cnt[a[idx]]*cnt[a[idx]]*a[idx];
cnt[a[idx]]++;
2.21 Mo Algorithm Example ans+=cnt[a[idx]]*cnt[a[idx]]*a[idx];
*/
struct info
}
{
int l, r, id;
inline void Remove(int idx)
info(){}
{
info(int l, int r, int id) : l(l), r(r), id(id){}
ans-=(2*cnt[a[idx]]-1)*a[idx];
};
cnt[a[idx]]--;
int n, t, a[2*MAX];
/* Actual meaning of the above code
info Q[2*MAX];
int Block, cnt[1000004];
ans-=cnt[a[idx]]*cnt[a[idx]]*a[idx];
ll ans=0;
cnt[a[idx]]--;
ll Ans[2*MAX];
ans+=cnt[a[idx]]*cnt[a[idx]]*a[idx];
// always constant
*/
// Further improvement: when dividing the elements into blocks, we may
}
sort the first block in the
// ascending order of right borders, the second in descending, the third
int main()
in ascending order again, and so on.
{
// ios_base::sync_with_stdio(0);
// inline bool comp(const info &a, const info &b)
// cin.tie(NULL); cout.tie(NULL);
// {
// freopen("in.txt","r",stdin);
// if(a.l/blockSZ!=b.l/blockSZ)
// return a.l<b.l;
// Problem: for each query [l, r], find the sum of cnt[v]*cnt[v]*v over the range
// if((a.l/blockSZ)&1)
// return a.r<b.r;
scanf("%d%d", &n, &t);
// return a.r>b.r;
// }
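The whole pipeline (block comparator, Add/Remove, two-pointer walk) can be condensed into one self-contained function. This is an illustrative sketch with invented names, answering sum of cnt[v]^2 * v per query via the (2c+1)*v incremental identity used above.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Query { int l, r, id; };   // inclusive 0-based range

std::vector<long long> mo(const std::vector<int>& a, std::vector<Query> qs) {
    int n = a.size(), block = std::max(1, (int)std::sqrt(n));
    std::sort(qs.begin(), qs.end(), [&](const Query& x, const Query& y) {
        if (x.l / block != y.l / block) return x.l < y.l;
        return x.r < y.r;
    });
    std::vector<long long> cnt(*std::max_element(a.begin(), a.end()) + 1, 0);
    std::vector<long long> ans(qs.size());
    long long cur = 0;
    int L = 0, R = -1;                                   // current window [L, R]
    auto add = [&](int i) { cur += (2 * cnt[a[i]]++ + 1) * a[i]; };
    auto rem = [&](int i) { cur -= (2 * --cnt[a[i]] + 1) * a[i]; };
    for (const Query& q : qs) {
        while (R < q.r) add(++R);
        while (L > q.l) add(--L);
        while (R > q.r) rem(R--);
        while (L < q.l) rem(L++);
        ans[q.id] = cur;
    }
    return ans;
}
```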
Block=sqrt(n);
2.22 Mo on Tree Path
FOR(i,1,n+1) a[i]=getnum();
int aux[MAX], b[MAX], n, m, weight[MAX], u, v;
FOR(i,0,t)
vi graph[MAX];
{
int parent[MAX][17], st[MAX], en[MAX], tag = 0, dist[MAX], blocSZ;
Q[i].l=getnum();
int go[100005], lca[100005], cnt[MAX], t[MAX];
Q[i].r=getnum();
bool seen[MAX];
Q[i].id=i;
struct info
}
{
int u, v, id;
sort(Q,Q+t,comp);
bool fl;
info() {}
int Left=0, Right=-1;
info(int u, int v, int id, bool fl) : u(u), v(v), id(id), fl(fl) {
}
FOR(i,0,t)
};
{
vector<info> Q;
while(Left<Q[i].l)
// "Unordered"
{
void compress(int n, int *in, int *out)
Remove(Left);
{
Left++;
unordered_map <int, int> mp;
}
for (int i = 1; i <= n; i++) out[i] = mp.emplace(in[i],
while(Left>Q[i].l)
mp.size()).first->second;
{
}
Left--;
void dfs(int u, int p, int d)
Add(Left);
{
}
parent[u][0] = p;
while(Right<Q[i].r)
st[u] = ++tag;
{
dist[u] = d;
Right++;
for (auto v : graph[u])
Add(Right);
{
}
if (v != p) dfs(v, u, d + 1);
while(Right>Q[i].r)
}
{
en[u] = ++tag;
Remove(Right);
aux[st[u]] = u;
Right--;
aux[en[u]] = u;
}
}
void sparse()
Ans[Q[i].id]=ans;
{
}
for (int j = 1; 1 << j < n; j++)
{
FOR(i,0,t) printf("%lld\n", Ans[i]);
for (int i = 1; i <= n; i++)
{
return 0;
if (parent[i][j - 1] != -1)
}
parent[i][j] = parent[parent[i][j - 1]][j -
1];
} ms(parent, -1);
} scanf("%d%d", &n, &m);
} blocSZ = sqrt(n);
int query(int p, int q) FOR(i, 1, n + 1)
{ {
if (dist[p] < dist[q]) swap(p, q); scanf("%d", &weight[i]);
int x = 1; }
while (true) FOR(i, 1, n)
{ {
if ((1 << (x + 1)) > dist[p]) break; scanf("%d%d", &u, &v);
x++; graph[u].pb(v);
} graph[v].pb(u);
FORr(i, x, 0) if (dist[p] - (1 << i) >= dist[q]) p = parent[p][i]; }
if (p == q) return p; dfs(1, 0, 0);
FORr(i, x, 0) sparse();
{ compress(n, weight, t);
if (parent[p][i] != -1 && parent[p][i] != parent[q][i]) (1, 1) << endl;
{ FOR(i, 1, 2 * n + 1) b[i] = t[aux[i]];
p = parent[p][i]; FOR(i, 0, m)
q = parent[q][i]; {
} scanf("%d%d", &u, &v);
} lca[i] = query(u, v);
return parent[p][0]; if (st[u] > st[v]) swap(u, v);
} if (lca[i] == u) Q.pb(info(st[u], st[v], i, 0));
int ans = 0; else Q.pb(info(en[u], st[v], i, 1));
void doit(int idx) }
{ sort(Q.begin(), Q.end(), [](const info & a, const info & b)->bool
if (!seen[aux[idx]]) {
{ if (a.u / blocSZ == b.u / blocSZ) return a.v < b.v;
cnt[b[idx]]++; return a.u < b.u;
if (cnt[b[idx]] == 1) ans++; });
} int L = 1, R = 0;
else FOR(i, 0, Q.size())
{ {
cnt[b[idx]]--; int l = Q[i].u, r = Q[i].v, anc = lca[Q[i].id];
if (cnt[b[idx]] == 0) ans--;
} while (R < r) { R++; doit(R); }
seen[aux[idx]] ^= 1; while (R > r) { doit(R); R--; }
} while (L > l) { L--; doit(L); }
int main() while (L < l) { doit(L); L++; }
{
// Each node has some weight associated with it if (Q[i].fl)
// u v : ask for how many different integers that represent the {
weight of if (!cnt[b[st[anc]]])
// nodes there are on the path from u to v. go[Q[i].id] = ans + 1;
else go[Q[i].id] = ans;
} struct Treap{ /// hash = 96814
else go[Q[i].id] = ans; int len;
} const int ADD = 1000010;
FOR(i, 0, m) printf("%d\n", go[i]); const int MAXVAL = 1000000010;
return 0; tr1::unordered_map <long long, int> mp; /// Change to int if only int
} in treap
tree<long long, null_type, less<long long>, rb_tree_tag,
tree_order_statistics_node_update> T;
2.23 Order Statistics Tree Treap(){
len = 0;
#include <ext/pb_ds/assoc_container.hpp> T.clear(), mp.clear();
#include <ext/pb_ds/tree_policy.hpp> }
#include <ext/pb_ds/detail/standard_policies.hpp>
inline void clear(){
using namespace __gnu_pbds; len = 0;
using namespace __gnu_cxx; T.clear(), mp.clear();
}
// Order Statistic Tree
/* Special functions: inline void insert(long long x){
len++, x += MAXVAL;
find_by_order(k) --> returns iterator to the kth largest int c = mp[x]++;
element counting from 0 T.insert((x * ADD) + c);
order_of_key(val) --> returns the number of items in a set }
that are strictly smaller than our item
*/ inline void erase(long long x){
x += MAXVAL;
typedef tree< int, null_type, less<int>, int c = mp[x];
rb_tree_tag, tree_order_statistics_node_update> ordered_set; if (c){
c--, mp[x]--, len--;
T.erase((x * ADD) + c);
}
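Minimal usage of the two special functions (a sketch; GCC-only pb_ds, as above):

```cpp
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
#include <functional>
using namespace __gnu_pbds;

typedef tree<int, null_type, std::less<int>,
             rb_tree_tag, tree_order_statistics_node_update>
    ordered_set;
```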
2.24 Ordered Multiset }
/// 1-based index, returns the K'th element in the treap, -1 if none
#include <bits/stdtr1c++.h>
exists
#include <ext/pb_ds/assoc_container.hpp>
inline long long kth(int k){
#include <ext/pb_ds/tree_policy.hpp>
if (k < 1 || k > len) return -1;
using namespace std;
auto it = T.find_by_order(--k);
using namespace __gnu_pbds;
return ((*it) / ADD) - MAXVAL;
}
/*** Needs C++11 or C++14 ***/
/// Count of value < x in treap
/// Treap supporting duplicate values in a set
inline int count(long long x){
/// Maximum value of treap * ADD must fit in long long
x += MAXVAL; Tree[node]=0;
int c = mp[--x]; }
return (T.order_of_key((x * ADD) + c));
} int update(int node, int l, int r, int pos, int val)
{
/// Number of elements in treap int x;
inline int size(){ x=++idx;
return len;
} if(l==r)
}; {
Tree[x]=val;
int main(){ return x;
} }
L[x]=L[node]; R[x]=R[node];
2.25 Persistent Segment Tree 1 int mid=(l+r)/2;
if(pos<=mid) L[x]=update(L[x],l,mid,pos,val);
// Calculate how many distinct values are there in a given range
else R[x]=update(R[x],mid+1,r,pos,val);
// Persistent Segment Tree implementation
// Actually used in Codeforces - The Bakery
Tree[x]=Tree[L[x]]+Tree[R[x]];
int n, k, a[MAX], last[MAX], nxt[MAX];
return x;
int idx=1;
}
int Tree[64*MAX], L[64*MAX], R[64*MAX], root[2*MAX], rt[MAX];
int pos[MAX];
int query(int node, int l, int r, int x, int y)
{
void build(int node, int l, int r)
if(x>r || y<l) return 0;
{
if(x<=l && r<=y) return Tree[node];
if(l==r)
{
int mid=(l+r)/2;
Tree[node]=0;
return;
int q1=query(L[node],l,mid,x,y);
}
int q2=query(R[node],mid+1,r,x,y);
L[node]=++idx;
return q1+q2;
R[node]=++idx;
}
// cout<<node<<" "<<L[node]<<" "<<R[node]<<endl;
int getCost(int l, int mid)
{
int mid=(l+r)/2;
return query(root[rt[mid]],1,n,l,mid);
}
build(L[node],l,mid);
build(R[node],mid+1,r);
int main()
{ node *l, *r;
int test, cases=1; node() { l = nullptr; r = nullptr; sum = 0; }
node(int x) { sum = x; l = nullptr; r = nullptr; }
scanf("%d%d", &n, &k); };
build(1,1,n); typedef node* pnode;
root[0]=1; pnode merge(pnode l, pnode r)
int t=1; {
pnode ret = new node(0);
FOR(i,1,n+1) ret->sum = l->sum + r->sum;
{ ret->l = l;
scanf("%d", &a[i]); ret->r = r;
return ret;
int k=pos[a[i]]; }
if(!k) pnode init(int l, int r)
{ {
root[t]=update(root[t-1],1,n,i,1); if(l == r) { return (new node(0));}
t++;
} int mid = (l + r) >> 1;
else return merge(init(l, mid), init(mid + 1, r));
{ }
root[t]=update(root[t-1],1,n,k,0);
t++; pnode update(int pos, int val, int l, int r, pnode nd)
root[t]=update(root[t-1],1,n,i,1); {
t++; if(pos < l || pos > r) return nd;
} if(l == r) { return (new node(val)); }
rt[i]=t-1; int mid = (l + r) >> 1;
pos[a[i]]=i; return merge(update(pos, val, l, mid, nd->l), update(pos, val, mid
} + 1, r, nd->r));
}
return 0;
} int query(int qL, int qR, int l, int r, pnode nd)
{
if(qL <= l && r <= qR) return nd->sum;
if(qL > r || qR < l) return 0;
2.26 Persistent Segment Tree 2
int mid = (l + r) >> 1;
return query(qL, qR, l, mid, nd->l) + query(qL, qR, mid + 1, r,
const int MAXN = (1 << 20);
nd->r);
}
struct node
{
int get_kth(int k, int l, int r, pnode nd)
int sum;
{
if(l == r) return l; pnode last;
pnode version[N];
int mid = (l + r) >> 1;
if(nd->l->sum < k) return get_kth(k - nd->l->sum, mid + 1, r, void insert(int a, int time) {
nd->r); pnode v = version[time] = last = last->clone();
else return get_kth(k, l, mid, nd->l); for (int i = K - 1; i >= 0; --i) {
} int bit = (a >> i) & 1;
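A trimmed-down pointer-based persistent sum tree in the same style, with illustrative names, handy for quick sanity checks: each update allocates O(log n) new nodes and shares everything else with the previous version.

```cpp
#include <cstddef>

struct PNode {
    int sum;
    PNode *l, *r;
    PNode(int s = 0, PNode* a = nullptr, PNode* b = nullptr) : sum(s), l(a), r(b) {}
};

PNode* build(int l, int r) {
    if (l == r) return new PNode();
    int m = (l + r) / 2;
    return new PNode(0, build(l, m), build(m + 1, r));
}

// Returns a new root; the old root still answers queries on the old array.
PNode* setVal(PNode* nd, int l, int r, int pos, int val) {
    if (l == r) return new PNode(val);
    int m = (l + r) / 2;
    if (pos <= m) {
        PNode* nl = setVal(nd->l, l, m, pos, val);
        return new PNode(nl->sum + nd->r->sum, nl, nd->r);
    }
    PNode* nr = setVal(nd->r, m + 1, r, pos, val);
    return new PNode(nd->l->sum + nr->sum, nd->l, nr);
}

int query(PNode* nd, int l, int r, int ql, int qr) {
    if (qr < l || r < ql) return 0;
    if (ql <= l && r <= qr) return nd->sum;
    int m = (l + r) / 2;
    return query(nd->l, l, m, ql, qr) + query(nd->r, m + 1, r, ql, qr);
}
```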
pnode &child = v->to[bit];
child = child->clone();
v = child;
2.27 Persistent Trie v->time = time;
}
}
#include <bits/stdc++.h>
int query(pnode v, int x, int l) {
using namespace std;
int ans = 0;
for (int i = K - 1; i >= 0; --i) {
// Problem: find maximum value (x^a[j]) in the range (l,r) where l<=j<=r
int bit = (x >> i) & 1;
if (v->to[bit]->go(l)) { // checking if this bit was inserted before
const int N = 1e5 + 100;
the range
const int K = 15;
ans |= 1 << i;
v = v->to[bit];
struct node_t;
} else {
typedef node_t * pnode;
v = v->to[bit ^ 1];
}
struct node_t {
}
int time;
return ans;
pnode to[2];
}
node_t() : time(0) {
to[0] = to[1] = 0;
void solve() {
}
int n, q;
bool go(int l) const {
scanf("%d %d", &n, &q);
if (!this) return false;
last = 0;
return time >= l;
for (int i = 0; i < n; ++i) {
}
int a;
pnode clone() {
scanf("%d", &a);
pnode cur = new node_t();
insert(a, i);
if (this) {
}
cur->time = time;
while (q--) {
cur->to[0] = to[0];
int x, l, r;
cur->to[1] = to[1];
scanf("%d %d %d", &x, &l, &r);
}
--l, --r;
return cur;
printf("%d\n", query(version[r], ~x, l));
}
// Trie version[r] contains the trie for [0...r] elements
};
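The non-persistent counterpart of the same walk — a plain binary trie maximizing x XOR a[i] over everything inserted so far. Useful as a brute-force cross-check of the versioned query; names are illustrative.

```cpp
#include <array>
#include <vector>

struct XorTrie {
    static const int K = 15;                   // value bit-width
    std::vector<std::array<int, 2>> nxt;
    XorTrie() : nxt(1, std::array<int, 2>{-1, -1}) {}
    void insert(int a) {
        int v = 0;
        for (int i = K - 1; i >= 0; --i) {
            int b = (a >> i) & 1;
            if (nxt[v][b] == -1) {
                nxt[v][b] = nxt.size();
                nxt.push_back(std::array<int, 2>{-1, -1});
            }
            v = nxt[v][b];
        }
    }
    int maxXor(int x) {                        // trie must be non-empty
        int v = 0, ans = 0;
        for (int i = K - 1; i >= 0; --i) {
            int want = ((x >> i) & 1) ^ 1;     // greedily take the opposite bit
            if (nxt[v][want] != -1) { ans |= 1 << i; v = nxt[v][want]; }
            else v = nxt[v][want ^ 1];
        }
        return ans;
    }
};
```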
} {
} tree[node] = a[l];
return;
}
if (l >= r) return;
2.28 RMQ Sparse Table int mid = (l + r) / 2;
build(node * 2, l, mid);
const int MAXN = (1 << 20); build(node * 2 + 1, mid + 1, r);
const int MAXLOG = 20; tree[node] = tree[node * 2] + tree[node * 2 + 1];
}
struct sparse_table void upd(int node, int l, int r, int v)
{ {
int dp[MAXN][MAXLOG]; lazy[node] += v;
int prec_lg2[MAXN], n; tree[node] += (r - l + 1) * v;
}
sparse_table() { memset(dp, 0, sizeof(dp)); memset(prec_lg2, 0, void pushDown(int node, int l, int r) //passing update information to the
sizeof(prec_lg2)); n = 0; } children
{
void init(vector<int> &a) int mid = (l + r) / 2;
{ upd(node * 2, l, mid, lazy[node]);
n = a.size(); upd(node * 2 + 1, mid + 1, r, lazy[node]);
for(int i = 2; i < 2 * n; i++) prec_lg2[i] = prec_lg2[i >> lazy[node] = 0;
1] + 1; }
for(int i = 0; i < n; i++) dp[i][0] = a[i]; void update(int node, int l, int r, int x, int y, int v)
for(int j = 1; (1 << j) <= n; j++) {
for(int i = 0; i < n; i++) if (x > r || y < l) return;
dp[i][j] = min(dp[i][j - 1], dp[i + (1 << (j if (x >= l && r <= y)
- 1))][j - 1]); {
} upd(node, l, r, v);
return;
int query(int l, int r) }
{ pushDown(node, l, r);
int k = prec_lg2[r - l + 1]; int mid = (l + r) / 2;
return min(dp[l][k], dp[r - (1 << k) + 1][k]); update(node * 2, l, mid, x, y, v);
} update(node * 2 + 1, mid + 1, r, x, y, v);
}; tree[node] = tree[node * 2] + tree[node * 2 + 1];
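An equivalent vector-based restatement, convenient when the MAXN-sized static arrays above are too large for the instance at hand; names are illustrative.

```cpp
#include <algorithm>
#include <vector>

// Immutable min sparse table: O(n log n) build, O(1) query on inclusive [l, r].
struct SparseMin {
    std::vector<std::vector<int>> dp;
    std::vector<int> lg;
    void init(const std::vector<int>& a) {
        int n = a.size();
        lg.assign(n + 1, 0);
        for (int i = 2; i <= n; i++) lg[i] = lg[i / 2] + 1;
        dp.assign(lg[n] + 1, a);               // dp[0] = a
        for (int j = 1; j <= lg[n]; j++)
            for (int i = 0; i + (1 << j) <= n; i++)
                dp[j][i] = std::min(dp[j - 1][i], dp[j - 1][i + (1 << (j - 1))]);
    }
    int query(int l, int r) {
        int k = lg[r - l + 1];
        return std::min(dp[k][l], dp[k][r - (1 << k) + 1]);
    }
};
```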
}
2.29 Range Sum Query by Lazy Propagation
2.30 Rope
int a[MAX + 7], tree[4 * MAX + 7], lazy[4 * MAX + 7];
void build(int node, int l, int r) #include <ext/rope>
{ #include <bits/stdtr1c++.h>
if (l == r)
#define MAX 50010 if (it == 'c') d++;
#define clr(ar) memset(ar, 0, sizeof(ar)) }
#define read() freopen("lol.txt", "r", stdin) out[ye++] = 10;
#define dbg(x) cout << #x << " = " << x << endl }
}
using namespace std; }
using namespace __gnu_cxx;
fwrite(out, 1, ye, stdout);
rope <char> R[MAX]; return 0;
int d = 0, ye = 0, vnow = 0; }
char str[105], out[10000010];
int main(){
int n, i, j, k, v, p, c, x, flag; 2.31 Segment Tree with Lazy Prop
while (scanf("%d", &n) != EOF){
// Maximum in a range with lazy propagation.
d = 0, vnow = 0;
class SegmentTree
while (n--){
{
scanf("%d", &flag);
public:
ll Tree[4*MAX], Lazy[4*MAX];
if (flag == 1){
void pushdown(int node)
scanf("%d %s", &p, str);
{
p -= d, vnow++;
if(Lazy[node])
R[vnow] = R[vnow - 1];
{
R[vnow].insert(p, str); /// Insert string str after
Lazy[lc]+=Lazy[node];
position p
Lazy[rc]+=Lazy[node];
}
Tree[lc]+=Lazy[node];
Tree[rc]+=Lazy[node];
if (flag == 2){
Lazy[node]=0;
scanf("%d %d", &p, &c);
}
p -= d, c -= d, vnow++;
}
R[vnow] = R[vnow - 1];
R[vnow].erase(p - 1, c); /// Remove c characters starting
void build(int node, int l, int r)
at position p
{
}
Lazy[node]=0;
if(l==r)
if (flag == 3){
{
scanf("%d %d %d", &v, &p, &c); /// Print c characters
Tree[node]=in[l]; // input values
starting at position p in version v
return;
}
v -= d, p -= d, c -= d;
int mid=(l+r)/2;
rope <char> sub = R[v].substr(p - 1, c); /// Get the
build(lc,l,mid);
substring of c characters from position p in version v
build(rc,mid+1,r);
for (auto it: sub){
Tree[node]=max(Tree[lc],Tree[rc]);
out[ye++] = it;
Lazy[node]=0;
} Node* newNode(int v,Node* f) :Returns Pointer of a node whose
// Range update parent is f,and value v
void update(int node, int l, int r, int x, int y, ll val) Node* build(int l,int r,Node* f) : building [L,R] which parent is f
{ void rotate(Node* t,int d) : Rotation of Splay Tree
// puts("range update"); void splay(Node* t,Node* f) : Splaying , t resides just below the f
if(x>r || y<l) return; void select(int k,Node *f) : Select k th element in the tree
if(x<=l && r<=y) ,splay it to the just below f
{ Node*&get(int l, int r) : Getting The node for segment [L,R]
Tree[node]+=val; void reverse(int l,int r) : Reverse a segment
Lazy[node]+=val; void del(int p) : deletes entry a[p]
return; void split(int l,int r,Node*&s1) : Split the array and s1 stores
} the [L,R] segment
void cut(int l,int r) : Cut the segment [L,R] and insert in at the
if(l!=r) pushdown(node); end
void insert(int p,int v): Insert after p,( 0 means before the
int mid=(l+r)/2; array) an element whose value is v
update(lc,l,mid,x,y,val); void insertRange(int pos,Node *s): Insert after pos, an segment
update(rc,mid+1,r,x,y,val); denoted by s
Tree[node]=max(Tree[lc],Tree[rc]); int query(int l,int r): Output desired result for [L,R]
} void addRange(int l,int r,int v): Add v to all the element in
// Range query segment [L,R]
ll query(int node, int l, int r, int x, int y) void output(int l,int r) : Output the segment [L,R]
{ **/
if(x>r || y<l) return -INF;
if(x<=l && r<=y) return Tree[node]; /*
if(l!=r) pushdown(node); The following code answers the following queries
1 L R Output Maximum value in range [L,R]
int mid=(l+r)/2; 2 L R Reverse the array [L,R]
return max(query(lc,l,mid,x,y),query(rc,mid+1,r,x,y)); 3 L R v add v in range [L,R]
} 4 pos removes entry from pos
} segtree; 5 pos v - insert an element after position v
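A compact range-add / range-max tree in the same style as the class above, restated self-contained (illustrative names) so the lazy bookkeeping can be checked in isolation:

```cpp
#include <algorithm>
#include <climits>
#include <vector>

struct LazyMax {
    int n;
    std::vector<long long> t, lz;
    LazyMax(int n) : n(n), t(4 * n, 0), lz(4 * n, 0) {}
    void push(int v) {                          // pass pending add to children
        for (int c : {2 * v, 2 * v + 1}) { t[c] += lz[v]; lz[c] += lz[v]; }
        lz[v] = 0;
    }
    void update(int v, int l, int r, int x, int y, long long d) {
        if (x > r || y < l) return;
        if (x <= l && r <= y) { t[v] += d; lz[v] += d; return; }
        push(v);
        int m = (l + r) / 2;
        update(2 * v, l, m, x, y, d);
        update(2 * v + 1, m + 1, r, x, y, d);
        t[v] = std::max(t[2 * v], t[2 * v + 1]);
    }
    long long query(int v, int l, int r, int x, int y) {
        if (x > r || y < l) return LLONG_MIN / 2;
        if (x <= l && r <= y) return t[v];
        push(v);
        int m = (l + r) / 2;
        return std::max(query(2 * v, l, m, x, y),
                        query(2 * v + 1, m + 1, r, x, y));
    }
};
```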
We assume the initial array is ar[] = {1, 2, 3, ..., n}
*/
2.32 Splay Tree typedef int T;
const int N = 2e5+50; // >= Node + Query
/**
T ar[N]; // Initial Array
Splay Tree :
struct Node{
Node:
Node *ch[2],*pre; // child and parent
void addIt(int ad) : adding an integer in a range
T val; // Value stored in each node
void revIt() : reversing flag
int size; //size of the subtree rooted at this node
void upd() : push_up( gather from child)
T mx; // additional info stored to solve problems, here maximum value
void pushdown() : pass values to the child( like lazy propagation)
T sum;
Splay:
T add;//lazy updates cur->ch[0]=cur->ch[1]=null;
bool rev;// reverse flag cur->size=1;
Node(){size=0;val=mx=-1e9;add=0;} cur->val=v;
void addIt(T ad){ cur->mx=v;cur->sum = 0;
add+=ad; cur->add=0;
mx+=ad; cur->rev=0;
sum += size*ad; cur->pre=f;
val+=ad; return cur++;
} }
void revIt(){
rev^=1; Node* build(int l,int r,Node* f){
} if(l>r) return null;
void upd(){ int m=(l+r)>>1;
size=ch[0]->size+ch[1]->size+1; Node* t=newNode(ar[m],f);
mx=max(val,max(ch[0]->mx,ch[1]->mx)); t->ch[0]=build(l,m-1,t);
sum= ch[0]->sum + ch[1]->sum + val; t->ch[1]=build(m+1,r,t);
} t->upd();
void pushdown(); return t;
}Tnull,*null=&Tnull; }
void Node::pushdown(){
if (add!=0){ void rotate(Node* x,int c){
for (int i=0;i<2;++i) Node* y=x->pre;
if (ch[i]!=null) ch[i]->addIt(add); y->pushdown();
add = 0; x->pushdown();
}
if (rev){ y->ch[!c]=x->ch[c];
swap(ch[0],ch[1]); if (x->ch[c]!=null) x->ch[c]->pre=y;
for (int i=0;i<2;i++) x->pre=y->pre;
if (ch[i]!=null) ch[i]->revIt(); if (y->pre!=null)
rev = 0; {
} if (y->pre->ch[0]==y) y->pre->ch[0]=x;
} else y->pre->ch[1]=x;
struct Splay{ }
Node nodePool[N],*cur; // Static Memory and cur pointer x->ch[c]=y;
Node* root; // root of the splay tree y->pre=x;
Splay(){ y->upd();
cur=nodePool; if (y==root) root=x;
root=null; }
}
void splay(Node* x,Node* f){
void clear(){ x->pushdown();
cur=nodePool; while (x->pre!=f){
root=null; if (x->pre->pre==f){
} if (x->pre->ch[0]==x) rotate(x,1);
Node* newNode(T v,Node* f){ else rotate(x,0);
}else{ select(p+1,root);
Node *y=x->pre,*z=y->pre; root->ch[1]->ch[0] = null;
if (z->ch[0]==y){ splay(root->ch[1],null);
if (y->ch[0]==x) rotate(y,1),rotate(x,1); }
else rotate(x,0),rotate(x,1); void split(int l,int r,Node*&s1)
}else{ {
if (y->ch[1]==x) rotate(y,0),rotate(x,0); Node* tmp=get(l,r);
else rotate(x,1),rotate(x,0); root->ch[1]->ch[0]=null;
} root->ch[1]->upd();
} root->upd();
} s1=tmp;
x->upd(); }
} void cut(int l,int r)
void select(int k,Node* f){ {
int tmp; Node* tmp;
Node* x=root; split(l,r,tmp);
x->pushdown(); select(root->size-2,null);
k++; root->ch[1]->ch[0]=tmp;
for(;;){ tmp->pre=root->ch[1];
x->pushdown(); root->ch[1]->upd();
tmp=x->ch[0]->size; root->upd();
if (k==tmp+1) break; }
if (k<=tmp) x=x->ch[0];
else{ void init(int n){
k-=tmp+1; clear();
x=x->ch[1]; root=newNode(0,null);
} root->ch[1]=newNode(n+1,root);
} root->ch[1]->ch[0]=build(1,n,root->ch[1]);
splay(x,f); splay(root->ch[1]->ch[0],null);
} }
Node*&get(int l, int r){
select(l-1,null); void insertPos(int pos,T v)
select(r+1,root); {
return root->ch[1]->ch[0]; select(pos,null);
} select(pos+1,root);
root->ch[1]->ch[0] = newNode(v,root->ch[1]);
void reverse(int l,int r){ splay(root->ch[1]->ch[0],null);
Node* o=get(l,r); }
o->rev^=1; void insertRange(int pos,Node *s)
splay(o,null); {
} select(pos,null);
void del(int p) select(pos+1,root);
{ root->ch[1]->ch[0] = s;
select(p-1,null); s->pre = root->ch[1];
root->ch[1]->upd();
root->upd(); return 0;
} }
T query(int l,int r)
{
Node *o = get(l,r);
return o->mx; 2.33 Venice Technique
}
/*
void addRange(int l,int r,T v)
We want a data structure capable of doing three main update-operations
{
and some
Node *o = get(l,r);
sort of query. The three modify operations are: add: Add an element to
o->add += v;
the set.
o->val += v;
remove: Remove an element from the set. updateAll: This one normally
o->sum += o->size * v;
changes in
splay(o,null);
this case subtract X from ALL the elements. For this technique it is
completely
}
required that the update is done to ALL the values in the set equally.
void output(int l,int r){
And also for this problem in particular we may need one query:
for (int i=l;i<=r;i++){
getMin: Give me the smallest number in the set.
select(i,null);
*/
cout<<root->val<<endl;
// Interface of the Data Structure
};
struct VeniceSet {
}
void add(int);
}St;
void remove(int);
void updateAll(int);
int getMin(); // custom for this problem
int size();
int main()
};
{
int n,m,a,b,c;
/*
Imagine you have an empty land and the government can make queries of the
scanf("%d%d", &n, &m);
following
type: * Make a building with A floors. * Remove a building with B floors.
for(int i= 1;i <= n;i ++ ) ar[i] = i;
* Remove
St.init(n);
C floors from all the buildings. (A lot of buildings can be vanished) *
Which is the
FOR(i,1,m+1)
smallest standing building. (Obviously buildings which are already
{
banished don't count)
scanf("%d%d", &a, &b);
Operations 1, 2 and 4 seem very easy with a set, but operation 3 is very
St.cut(a,b);
cost effective
}
probably O(N) so you might need a lot of workers. But what if instead of
removing C
St.output(1,n);
floors we just fill the streets with enough water (as in venice) to cover
up the
first C floors of all the buildings :O. Well that seems like cheating but // get the negative number which we really did not
at least subtracted T[i]
those floor are now vanished :). So in order to do that we apart from the int toLow = mySet.getMin();
SET
we can maintain a global variable which is the water level. so in fact if // remove from total the amount we over counted
we total -= abs(toLow);
have an element and want to know the number of floors it has we can just
do // remove it from the set since I will never be able to
height - water_level and in fact after water level is for example 80, if subtract from it again
we mySet.remove(toLow);
want to make a building of 3 floors we must make it of 83 floors so that }
it cout << total << endl;
can touch the land. }
*/ cout << endl;
struct VeniceSet {
multiset<int> S;
int water_level = 0;
void add(int v) {
S.insert(v + water_level); 3 Game
}
void remove(int v) { 3.1 Green Hacenbush
S.erase(S.find(v + water_level));
}
// Green Hackenbush
void updateAll(int v) {
vi graph[505];
water_level += v;
int go(int u, int p)
}
{
int getMin() {
int ret = 0;
return *S.begin() - water_level;
for (auto &v : graph[u])
}
{
int size() {
if (v == p) continue;
return S.size();
ret ^= (go(v, u) + 1);
}
}
};
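The same structure restated self-contained (with long long values; names illustrative) so the water-level bookkeeping can be checked directly: values are stored shifted by the current water level, which makes updateAll O(1).

```cpp
#include <set>

struct Venice {
    std::multiset<long long> s;
    long long water = 0;
    void add(long long v)       { s.insert(v + water); }
    void remove(long long v)    { s.erase(s.find(v + water)); }
    void updateAll(long long d) { water += d; }             // subtract d from everything
    long long getMin() const    { return *s.begin() - water; }
    int size() const            { return (int)s.size(); }
};
```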
return ret;
VeniceSet mySet;
}
for (int i = 0; i < N; ++i) {
int u, v, n;
mySet.add(V[i]);
int main()
mySet.updateAll(T[i]); // decrease all by T[i]
{
int total = T[i] * mySet.size(); // we subtracted T[i] from all
// ios_base::sync_with_stdio(0);
elements
// cin.tie(NULL); cout.tie(NULL);
// freopen("in.txt","r",stdin);
// in fact some elements were already less than T[i]. So we
int test, cases = 1;
probably are counting
cin >> test;
// more than what we really subtracted. So we look for all those
while (test--)
elements
{
while (mySet.getMin() < 0) {
cin >> n;
FOR(i, 0, n - 1) // compute the Grundy number.
{ //
cin >> u >> v; // Complexity:
graph[u].pb(v); // O(m + n).
graph[v].pb(u); //
} // Verified:
if (go(1, 0)) puts("Alice"); // SPOJ 1477: Play with a Tree
else puts("Bob"); // IPSC 2003 G: Got Root?
FOR(i, 1, n + 1) graph[i].clear(); //
} //
return 0; #include <iostream>
} #include <vector>
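The colon principle used by go() above — grundy(u) = XOR over children c of (grundy(c) + 1) — can be checked on tiny trees; a self-contained sketch:

```cpp
#include <vector>

// First player wins iff the root's Grundy number is non-zero.
int grundy(int u, int p, const std::vector<std::vector<int>>& g) {
    int ret = 0;
    for (int v : g[u])
        if (v != p) ret ^= grundy(v, u, g) + 1;
    return ret;
}
```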
#include <cstdio>
#include <algorithm>
#include <functional>
3.2 Green Hackenbush 2
using namespace std;
//
#define fst first
// Green Hackenbush
#define snd second
//
#define all(c) ((c).begin()), ((c).end())
// Description:
#define TEST(s) if (!(s)) { cout << __LINE__ << " " << #s << endl;
// Consider a two player game on a graph with a specified vertex (root).
exit(-1); }
// In each turn, a player eliminates one edge.
// Then, any subgraph that is disconnected from the root is
struct hackenbush {
removed.
int n;
// If a player cannot select an edge (i.e., the graph is singleton),
vector<vector<int>> adj;
// he will lose.
//
hackenbush(int n) : n(n), adj(n) { }
// Compute the Grundy number of the given graph.
void add_edge(int u, int v) {
//
adj[u].push_back(v);
// Algorithm:
if (u != v) adj[v].push_back(u);
// We use two principles:
}
// 1. Colon Principle: Grundy number of a tree is the xor of
// Grundy number of child subtrees.
// r is the only root connecting to the ground
// (Proof: easy).
int grundy(int r) {
//
vector<int> num(n), low(n);
// 2. Fusion Principle: Consider a pair of adjacent vertices u, v
int t = 0;
// that has another path (i.e., they are in a cycle). Then,
function<int(int, int)> dfs = [&](int p, int u) {
// we can contract u and v without changing Grundy number.
num[u] = low[u] = ++t;
// (Proof: difficult)
int ans = 0;
//
for (int v : adj[u]) {
// We first decompose graph into two-edge connected components.
if (v == p) { p += 2 * n; continue; }
// Then, by contracting each components by using Fusion Principle,
if (num[v] == 0) {
// we obtain a tree (and many self loops) that has the same Grundy
int res = dfs(u, v);
// number to the original graph. By using Colon Principle, we can
low[u] = min(low[u], low[v]); 4 Geometry
if (low[v] > num[u]) ans ^= (1 + res)
^ 1; // bridge
else ans ^= res; 4.1 Convex Hull
// non bridge
} else low[u] = min(low[u], num[v]);
struct PT
}
{
if (p > n) p -= 2 * n;
int x, y;
for (int v : adj[u])
PT(){}
if (v != p && num[u] <= num[v]) ans ^= 1;
PT(int x, int y) : x(x), y(y) {}
return ans;
bool operator < (const PT &P) const
};
{
return dfs(-1, r);
return x<P.x || (x==P.x && y<P.y);
}
}
};
};
int main() {
int cases; scanf("%d", &cases);
for (int icase = 0; icase < cases; ++icase) {
ll cross(const PT p, const PT q, const PT r)
int n; scanf("%d", &n);
{
vector<int> ground(n);
return (ll)(q.x-p.x)*(ll)(r.y-p.y)-(ll)(q.y-p.y)*(ll)(r.x-p.x);
int r;
}
for (int i = 0; i < n; ++i) {
scanf("%d", &ground[i]);
vector<PT> Points, Hull;
if (ground[i] == 1) r = i;
}
void findConvexHull()
int ans = 0;
{
hackenbush g(n);
int n=Points.size(), k=0;
for (int i = 0; i < n - 1; ++i) {
int u, v;
SORT(Points);
scanf("%d %d", &u, &v);
--u; --v;
// Build lower hull
if (ground[u]) u = r;
if (ground[v]) v = r;
FOR(i,0,n)
if (u == v) ans ^= 1;
{
else g.add_edge(u, v);
while(Hull.size()>=2 &&
}
cross(Hull[Hull.size()-2],Hull.back(),Points[i])<=0)
int res = ans ^ g.grundy(r);
{
printf("%d\n", res != 0);
Hull.pop_back();
}
k--;
}
}
Hull.pb(Points[i]);
k++;
}
// Build upper hull typedef set<Points, bool(*)(const Points&, const Points&)> setType;
typedef setType::iterator setIT;
for(int i=n-2, t=k+1; i>=0; i--) setType s(&comp2);
{ double euclideanDistance(const Points &a, const Points &b)
while(Hull.size()>=t && {
cross(Hull[Hull.size()-2],Hull.back(),Points[i])<=0) // prnt((double)(a.x-b.x)*(a.x-b.x)+(a.y-b.y)*(a.y-b.y));
{ return (a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y);
Hull.pop_back(); }
k--; map<double, map<double, int> > CNT;
} int main()
Hull.pb(Points[i]); {
k++; // ios_base::sync_with_stdio(0);
} // cin.tie(NULL); cout.tie(NULL);
// freopen("in.txt","r",stdin);
Hull.resize(k); while ((cin >> n) && n)
} {
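The same monotone-chain construction restated self-contained (counter-clockwise output, collinear points dropped; names are illustrative):

```cpp
#include <algorithm>
#include <vector>

struct P2 {
    long long x, y;
    bool operator<(const P2& o) const { return x < o.x || (x == o.x && y < o.y); }
};

long long cr(const P2& a, const P2& b, const P2& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

std::vector<P2> hull(std::vector<P2> pts) {
    std::sort(pts.begin(), pts.end());
    int n = pts.size(), k = 0;
    std::vector<P2> h(2 * n);
    for (int i = 0; i < n; i++) {                       // lower hull
        while (k >= 2 && cr(h[k - 2], h[k - 1], pts[i]) <= 0) k--;
        h[k++] = pts[i];
    }
    for (int i = n - 2, t = k + 1; i >= 0; i--) {       // upper hull
        while (k >= t && cr(h[k - 2], h[k - 1], pts[i]) <= 0) k--;
        h[k++] = pts[i];
    }
    h.resize(k - 1);                                    // last point repeats the first
    return h;
}
```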
FOR(i, 0, n) cin >> P[i].x >> P[i].y;
sort(P, P + n, comp1);
FOR(i, 0, n)
4.2 Counting Closest Pair of Points {
// printPoint(P[i]);
s.insert(P[i]);
int n;
CNT[P[i].x][P[i].y]++;
struct Points
}
{
// To check repeated points :/
double x, y;
// for(auto it: s) printPoint(it);
Points() {}
double ans = 10000;
Points(double x, double y) : x(x), y(y) { }
int idx = 0;
bool operator<(const Points &a) const
FOR(j, 0, n)
{
{
return x < a.x;
// cout<<"Point now: "; printPoint(P[j]);
}
if (CNT[P[j].x][P[j].y] > 1) ans = 0;
};
Points it = P[j];
bool comp1(const Points &a, const Points &b)
while (it.x - P[idx].x > ans)
{
{
return a.x < b.x;
s.erase(P[idx]);
}
idx++;
bool comp2(const Points &a, const Points &b)
}
{
Points low = Points(it.x, it.y - ans);
return a.y < b.y;
Points high = Points(it.x, it.y + ans);
}
setIT lowest = s.lower_bound(low);
void printPoint(Points a)
if (lowest != s.end())
{
{
cout << a.x << " " << a.y << endl;
setIT highest = s.upper_bound(high);
}
Points P[10005];
for (setIT now = lowest; now != highest; PT operator + (const PT &p) const { return PT(x+p.x, y+p.y); }
now++) PT operator - (const PT &p) const { return PT(x-p.x, y-p.y); }
{ PT operator * (double c) const { return PT(x*c, y*c ); }
double cur = sqrt(euclideanDistance PT operator / (double c) const { return PT(x/c, y/c ); }
(*now, it)); };
// prnt(cur);
if (cur == 0) continue; PT p[505];
// cout<<"Here:"<<endl; double dist[505][505];
// printPoint(*now); printPoint(it); prnt int n, m;
(cur);
if (cur < ans) void calcDist()
{ {
ans = cur; FOR(i,0,n)
} {
} FOR(j,i+1,n)
} dist[i][j]=dist[j][i]=sqrt((p[i].x-p[j].x)*(p[i].x-p[j].x)
s.insert(it); +(p[i].y-p[j].y)*(p[i].y-p[j].y));
} }
// cout<<"Set now:"<<endl; }
// for(auto I: s) printPoint(I);
if (ans < 10000) cout << setprecision(4) << fixed << ans // Returns maximum number of points enclosed by a circle of radius
<< endl; ’radius’
else prnt("INFINITY"); // where the circle is pivoted on point ’point’
s.clear(); // ’point’ is on the circumfurence of the circle
CNT.clear();
} int intelInside(int point, double radius)
return 0; {
} vector<pdb> ranges;
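The sweep above, condensed into one self-contained function returning the closest-pair distance (squared distances internally; illustrative names). Points are processed by x; a y-ordered window holds only candidates within the current best distance.

```cpp
#include <algorithm>
#include <cmath>
#include <set>
#include <utility>
#include <vector>

double closestPair(std::vector<std::pair<double, double>> p) {
    std::sort(p.begin(), p.end());                       // by x, then y
    std::multiset<std::pair<double, double>> win;        // (y, x) window
    double best = 1e18;                                  // squared distance
    size_t left = 0;
    for (size_t i = 0; i < p.size(); i++) {
        double d = std::sqrt(best);
        while (left < i && p[i].first - p[left].first > d) {
            win.erase(win.find({p[left].second, p[left].first}));
            left++;
        }
        auto lo = win.lower_bound({p[i].second - d, -1e18});
        auto hi = win.upper_bound({p[i].second + d, 1e18});
        for (auto it = lo; it != hi; ++it) {
            double dx = p[i].first - it->second, dy = p[i].second - it->first;
            best = std::min(best, dx * dx + dy * dy);
        }
        win.insert({p[i].second, p[i].first});
    }
    return std::sqrt(best);
}
```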

{
4.3 Maximum Points to Enclose in a Circle of Given Radius if(j==point || dist[j][point]>2*radius) continue;
with Angular Sweep
double a1=atan2(p[point].y-p[j].y,p[point].x-p[j].x);
double a2=acos(dist[point][j]/(2*radius));
typedef pair<double,bool> pdb;
ranges.pb({a1-a2,START});
#define START 0
ranges.pb({a1+a2,END});
#define END 1
}
struct PT
sort(ALL(ranges));
{
double x, y;
int cnt=1, ret=cnt;
PT() {}
PT(double x, double y) : x(x), y(y) {}
for(auto it: ranges)
PT(const PT &p) : x(p.x), y(p.y) {}
{ The polygon must be such that every point on the circumference is visible
if(it.second) cnt--; from the first point in the vector.
else cnt++; It returns 0 for points outside, 1 for points on the circumference, and 2
ret=max(ret,cnt); for points inside.
} */
return ret; int insideHull2(const vector<PT> &H, int L, int R, const PT &p) {
} int len = R - L;
if (len == 2) {
// returns maximum amount of points enclosed by the circle of radius r int sa = sideOf(H[0], H[L], p);
// Complexity: O(n^2*log(n)) int sb = sideOf(H[L], H[L+1], p);
int sc = sideOf(H[L+1], H[0], p);
int go(double r) if (sa < 0 || sb < 0 || sc < 0) return 0;
{ if (sb==0 || (sa==0 && L == 1) || (sc == 0 && R ==
int cnt=0; (int)H.size()))
return 1;
FOR(i,0,n) return 2;
{ }
cnt=max(cnt,intelInside(i,r)); int mid = L + len / 2;
} if (sideOf(H[0], H[mid], p) >= 0)
return insideHull2(H, mid, R, p);
return cnt; return insideHull2(H, L, mid+1, p);
} }
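The angular sweep in this section can be exercised in isolation. Below is a minimal self-contained sketch (function and variable names are mine, not the notebook's): for each pivot point, every neighbour within 2r contributes an angular interval of half-width acos(d/(2r)) around the direction to the pivot; sorting the interval endpoints and sweeping counts the maximum number of points a radius-r circle through the pivot can cover.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Angular sweep: with pivot i fixed on the circumference, a point j with
// dist(i,j) <= 2r is covered while the circle's center direction lies in a
// window of half-width acos(d/(2r)) centered on atan2(i - j).
int maxEnclosed(const vector<pair<double,double>>& p, double r) {
    int n = p.size(), best = 1;
    for (int i = 0; i < n; i++) {
        vector<pair<double,int>> ev; // {angle, +1 open / -1 close}
        for (int j = 0; j < n; j++) {
            if (j == i) continue;
            double dx = p[i].first - p[j].first, dy = p[i].second - p[j].second;
            double d = sqrt(dx*dx + dy*dy);
            if (d > 2*r) continue;
            double a = atan2(dy, dx), b = acos(d / (2*r));
            ev.push_back({a - b, +1});
            ev.push_back({a + b, -1});
        }
        // open events before close events at equal angles
        sort(ev.begin(), ev.end(), [](const pair<double,int>& x, const pair<double,int>& y){
            return x.first < y.first || (x.first == y.first && x.second > y.second);
        });
        int cnt = 1; // the pivot itself lies on the circle
        for (auto& e : ev) { cnt += e.second; best = max(best, cnt); }
    }
    return best;
}
```

Complexity is O(n^2 log n), the same as the go()/intelInside pair.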

int insideHull(const vector<PT> &hull, const PT &p) {


if ((int)hull.size() < 3) return onSegment(hull[0], hull.back(),
p);
4.4 Point in Polygon Binary Search else return insideHull2(hull, 1, (int)hull.size(), p);
}
int sideOf(const PT &s, const PT &e, const PT &p)
{
ll a = cross(e-s,p-s);
return (a > 0) - (a < 0);
4.5 Rectangle Union
}
struct info
bool onSegment(const PT &s, const PT &e, const PT &p) {
{ int x, ymin, ymax, type;
PT ds = p-s, de = p-e; info(){}
return cross(ds,de) == 0 && dot(ds,de) <= 0; info(int x, int ymin, int ymax, int type) :
} x(x), ymin(ymin), ymax(ymax), type(type) { }

/* bool operator < (const info &p) const


Main routine {
Description: Determine whether a point t lies inside a given polygon return x<p.x;
(counter-clockwise order). }
}; m=take.size()-1;

vector<info> in; // VecPrnt(take);


int n, x, y, p, q, m;
vi take; update(1,1,m,in[0].ymin,in[0].ymax,in[0].type);
int Lazy[4*MAX], Tree[4*MAX];
int prv=in[0].x; ll ret=0;
void update(int node, int l, int r, int ymin, int ymax, int val)
{ FOR(i,1,in.size())
if(take[l]>ymax || take[r]<ymin) return; {
ret+=(ll)(in[i].x-prv)*Tree[1];
if(ymin<=take[l] && take[r]<=ymax) prv=in[i].x;
{ update(1,1,m,in[i].ymin,in[i].ymax,in[i].type);
Lazy[node]+=val; }

if(Lazy[node]) Tree[node]=take[r]-take[l]; return ret;


else Tree[node]=Tree[lc]+Tree[rc]; }

return; int main()


} {
// ios_base::sync_with_stdio(0);
if(l+1>=r) return; // cin.tie(NULL); cout.tie(NULL);
// freopen("in.txt","r",stdin);
int mid=(l+r)/2;
int test, cases=1;
update(lc,l,mid,ymin,ymax,val);
update(rc,mid,r,ymin,ymax,val); scanf("%d", &test);

if(Lazy[node]) Tree[node]=take[r]-take[l]; while(test--)


else Tree[node]=Tree[lc]+Tree[rc]; {
} scanf("%d", &n);

ll solve() in.clear();
{
take.clear(); ms(Tree,0); ms(Lazy,0); FOR(i,0,n)
take.pb(-1); {
scanf("%d%d%d%d", &x, &y, &p, &q);
FOR(i,0,in.size())
{ in.pb(info(x,y,q,1));
take.pb(in[i].ymin); in.pb(info(p,y,q,-1));
take.pb(in[i].ymax); }
}
SORT(in);
SORT(take);
take.erase(unique(ALL(take)),take.end()); ll ans=solve();
adj[a].push_back(b);
printf("Case %d: %lld\n", cases++, ans); rev[b].push_back(a);
} }

return 0; inline void add_or(int a, int b){


} add_implication(-a, b);
add_implication(-b, a);
}
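The rectangle-union sweep computes the covered length with a lazy segment tree over compressed y-coordinates. For testing that routine, a brute-force cross-check that compresses both axes and sums fully covered cells is handy; this is my own sketch, not part of the notebook's solution:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Area of a union of axis-aligned rectangles {x1,y1,x2,y2}, by compressing
// both coordinates and testing each elementary cell against every rectangle.
// Slow (roughly O(n^3)), but a reliable oracle for the segment-tree sweep.
long long unionArea(const vector<array<long long,4>>& rects) {
    vector<long long> xs, ys;
    for (auto& r : rects) {
        xs.push_back(r[0]); xs.push_back(r[2]);
        ys.push_back(r[1]); ys.push_back(r[3]);
    }
    sort(xs.begin(), xs.end()); xs.erase(unique(xs.begin(), xs.end()), xs.end());
    sort(ys.begin(), ys.end()); ys.erase(unique(ys.begin(), ys.end()), ys.end());
    long long area = 0;
    for (size_t i = 0; i + 1 < xs.size(); i++)
        for (size_t j = 0; j + 1 < ys.size(); j++)
            for (auto& r : rects)
                if (r[0] <= xs[i] && xs[i+1] <= r[2] &&
                    r[1] <= ys[j] && ys[j+1] <= r[3]) {
                    area += (xs[i+1] - xs[i]) * (ys[j+1] - ys[j]);
                    break; // cell covered once is enough
                }
    return area;
}
```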

5 Graph inline void add_xor(int a, int b){


add_or(a, b);
add_or(-a, -b);
5.1 0-1 BFS }

// Useful when the graph only has weights 0 or 1. inline void add_and(int a, int b){
// Complexity becomes O(V+E) add_or(a, b);
add_or(a, -b);
for all v in vertices: add_or(-a, b);
dist[v] = inf }
dist[source] = 0;
deque d /// force variable x to be true (if x is negative, force !x to be
d.push_front(source) true)
while d.empty() == false: inline void force_true(int x){
vertex = get front element and pop_front if (x < 0) x = n - x;
// Go to all edges add_implication(neg(x), x);
for all edges e of form (vertex , u): }
// consider relaxing with 0 or 1 weight edges
if travelling e relaxes distance to u: /// force variable x to be false (if x is negative, force !x to be
relax dist[u] false)
if e.weight = 1: inline void force_false(int x){
d.push_back(u) if (x < 0) x = n - x;
else: add_implication(x, neg(x));
d.push_front(u) }
*/
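The 0-1 BFS pseudocode above can be sketched concretely as follows (my own names; the graph is an adjacency list of {neighbour, weight} pairs with weights 0 or 1):

```cpp
#include <bits/stdc++.h>
using namespace std;

// 0-1 BFS: relaxations over weight-0 edges go to the front of the deque,
// weight-1 edges to the back, so nodes are popped in nondecreasing distance
// order and the whole run is O(V + E).
vector<int> zeroOneBfs(int n, const vector<vector<pair<int,int>>>& g, int src) {
    const int INF = INT_MAX;
    vector<int> dist(n, INF);
    deque<int> dq;
    dist[src] = 0;
    dq.push_front(src);
    while (!dq.empty()) {
        int u = dq.front(); dq.pop_front();
        for (auto [v, w] : g[u]) {
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                if (w == 1) dq.push_back(v);
                else dq.push_front(v);
            }
        }
    }
    return dist;
}
```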

struct tSAT{
5.2 2-SAT 2 int n, id[MAX][2];
vi G[MAX];
int ord, dis[MAX], low[MAX], sid[MAX], scc;
// for x or y add !x -> y, !y -> x
stack <int> s;
// x and y = (!x or y) and (x or !y) and (!x or !
/*
tSAT(int n): n(n){
inline void add_implication(int a, int b){
int now=0;
if (a < 0) a = n - a;
f(i,1,n+1){
if (b < 0) b = n - b;
f(j,0,2){
id[i][j]=++now; ++scc;
} while(1){
} int t=s.top();
} s.pop();
sid[t]=scc;
tSAT() {} if (t==u) break;
}
void add_edge(int u,int tu,int v,int tv){ }
G[id[u][tu]].pb(id[v][tv]); }
} }

bool feasible(){
scc=0; ord=0;
mem(dis,0); mem(sid,0); 5.3 2-SAT
f(i,1,2*n+1){
if(!dis[i]) tarjan(i);
const int N=20004;
}
int n, m, root=-1, leader[N], truths[N];
f(i,1,n+1){
vi graph[N], rev[N], order;
if(sid[id[i][0]]==sid[id[i][1]]) return false;
bool visited[N];
}
// clear() if necessary
return true;
void dfs_reverse(int u)
}
{
visited[u]=true;
vi solution(){
FOR(j,0,rev[u].size())
vi ans;
{
f(i,1,n+1){
int v=rev[u][j];
if(sid[id[i][0]]>sid[id[i][1]]) ans.pb(i);
if(!visited[v])
}
dfs_reverse(v);
return ans;
}
}
order.pb(u);
}
void tarjan(int u){
void dfs(int u)
s.push(u);
{
dis[u]=low[u]=++ord;
visited[u]=true;
f(i,0,G[u].size()){
leader[u]=root;
int v=G[u][i];
if (!dis[v]){
FOR(j,0,graph[u].size())
tarjan(v);
{
low[u]=min(low[v],low[u]);
int v=graph[u][j];
}
else if (!sid[v]){
if(!visited[v])
low[u]=min(dis[v],low[u]);
dfs(v);
}
}
}
}
if (low[u]==dis[u]){
void solve()
{ if(truths[leader[u]]==-1)
for(int i=2*m; i>=1; i--) {
{ truths[leader[u]]=true;
if(!visited[i]) truths[leader[u+m]]=false;
{ }
dfs_reverse(i); }
} }
} return true;
// important }
REVERSE(order); int main()
ms(visited,false); {
int test, cases = 1;
FOR(i,0,order.size()) scanf("%d", &test);
{ while(test--)
if(!visited[order[i]]) {
{ scanf("%d%d", &n, &m);
root=order[i];
dfs(order[i]); int u, v;
} ms(truths,-1);
}
} FOR(i,1,n+1)
{
bool sameSCC(int u, int v) scanf("%d%d", &u, &v);
{ // For each clause (u or v), we add to edges - (~u to v), (~v
return leader[u]==leader[v]; to u)
} if(u > 0)
bool assign() {
{ if(v > 0)
FOR(i,0,order.size()) {
{ graph[m+u].push_back(v); graph[m+v].push_back(u);
int u=order[i]; rev[v].push_back(m+u); rev[u].push_back(m+v);
} else {
if(u>m) graph[m+u].push_back(m-v); graph[-v].push_back(u);
{ rev[m-v].push_back(m+u); rev[u].push_back(-v);
if(sameSCC(u,u-m)) return false; }
if(truths[leader[u]]==-1) } else {
{ if(v > 0) {
truths[leader[u]]=true; graph[-u].push_back(v); graph[m+v].push_back(m-u);
truths[leader[u-m]]=false; rev[v].push_back(-u); rev[m-u].push_back(m+v);
} } else {
} graph[-u].push_back(m-v); graph[-v].push_back(m-u);
else rev[m-v].push_back(-u); rev[m-u].push_back(-v);
{ }
if(sameSCC(u,m+u)) return false; }
}
58

if (u==dfsroot)
solve(); rootchild++;
bool okay=assign(); articulate(v);
if (dfs_low[v]>=dfs_num[u])
if(okay) art_v[u]=true;
{ if (dfs_low[v]>dfs_num[u])
printf("Case %d: Yes\n", cases++); cout<<"Edge "<<u<<" & "<<v<<" is a
bridge."<<endl;
vi allow; dfs_low[u]=min(dfs_low[u],dfs_low[v]);
}
FOR(i,1,m+1) else if (v!=parent[u])
{ dfs_low[u]=min(dfs_low[u],dfs_num[v]);
if(truths[leader[i]]) }
{ }
allow.pb(i);
} int main()
} {
int n, m, u, v;
printf("%d", (int)allow.size()); cin>>n>>m;
FOR(i,0,allow.size()) cout<<" "<<allow[i]; for (int i=0; i<m; i++)
cout<<endl; {
} cin>>u>>v;
else printf("Case %d: No\n", cases++); graph[u].pb(v);
} graph[v].pb(u);
return 0; }
} cnt=0;
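The clause-to-implication mapping used by both 2-SAT versions here ((u or v) adds !u -> v and !v -> u) can be packaged into a minimal self-contained solver. This sketch uses Kosaraju SCC and my own literal encoding (variable v is node 2v when true, 2v+1 when false), rather than the notebook's exact indexing:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimal 2-SAT via Kosaraju. A formula is satisfiable iff no variable has
// both of its literals in the same SCC; a literal is set true when its SCC
// comes later in topological order than its negation's SCC.
struct TwoSat {
    int n; vector<vector<int>> g, gr;
    vector<int> comp, order; vector<char> vis;
    TwoSat(int vars) : n(2 * vars), g(n), gr(n) {}
    void addOr(int a, bool va, int b, bool vb) {
        int A = 2*a + !va, B = 2*b + !vb;           // literal nodes; L^1 = !L
        g[A^1].push_back(B); gr[B].push_back(A^1);  // !A -> B
        g[B^1].push_back(A); gr[A].push_back(B^1);  // !B -> A
    }
    void dfs1(int u){ vis[u]=1; for(int v:g[u]) if(!vis[v]) dfs1(v); order.push_back(u); }
    void dfs2(int u,int c){ comp[u]=c; for(int v:gr[u]) if(comp[v]<0) dfs2(v,c); }
    bool solve(vector<bool>& val) {
        vis.assign(n,0); comp.assign(n,-1); order.clear();
        for (int i = 0; i < n; i++) if (!vis[i]) dfs1(i);
        int c = 0;
        for (int i = n-1; i >= 0; i--) if (comp[order[i]] < 0) dfs2(order[i], c++);
        val.assign(n/2, false);
        for (int v = 0; v < n/2; v++) {
            if (comp[2*v] == comp[2*v+1]) return false;   // x and !x forced equal
            val[v] = comp[2*v] > comp[2*v+1];
        }
        return true;
    }
};
```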
ms(dfs_num,-1);
for (int i=0; i<n; i++)
{
5.4 Articulation Points and Bridges if (dfs_num[i]==-1)
{
dfsroot=i;
vi graph[100];
rootchild=0;
int dfs_num[100], dfs_low[100], parent[100], cnt;
articulate(i);
int dfsroot, rootchild;
art_v[dfsroot]=(rootchild>1);
int art_v[100];
}
}
void articulate(int u)
prnt("Articulation points:");
{
for (int i=0; i<n; i++)
dfs_low[u]=dfs_num[u]=cnt++;
{
for (ul j=0; j<graph[u].size(); j++)
if (art_v[i])
{
cout<<"Vertex: "<<i<<endl;
int v=graph[u][j];
}
if (dfs_num[v]==-1)
return 0;
{
}
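The low-link recurrence in this section also yields a compact standalone bridge counter: an edge (u,v) is a bridge exactly when dfs_low[v] > dfs_num[u]. A sketch with my own names (note it shares the routine's caveat that parallel edges to the parent are not handled):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Counts bridges in an undirected graph given as an edge list.
int countBridges(int n, const vector<pair<int,int>>& edges) {
    vector<vector<int>> g(n);
    for (auto& [u, v] : edges) { g[u].push_back(v); g[v].push_back(u); }
    vector<int> num(n, -1), low(n, 0);
    int timer = 0, bridges = 0;
    function<void(int,int)> dfs = [&](int u, int parent) {
        num[u] = low[u] = timer++;
        for (int v : g[u]) {
            if (num[v] == -1) {
                dfs(v, u);
                low[u] = min(low[u], low[v]);
                if (low[v] > num[u]) bridges++;   // (u,v) is a bridge
            } else if (v != parent) low[u] = min(low[u], num[v]);
        }
    };
    for (int i = 0; i < n; i++) if (num[i] == -1) dfs(i, -1);
    return bridges;
}
```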
parent[v]=u;
5.5 BCC
void dfs(const ll& node, const ll& par) {
dfs_num[node] = low[node] = num++;
struct MagicComponents {
vis[node] = 1;
ll n_child = 0;
struct edge {
for (edge& e : adj[node]) {
ll u, v, id;
if (e.v == par) continue;
};
if (vis[e.v] == 0) {
++n_child;
ll num, n, edges;
e_stack.push_back(e);
dfs(e.v, node);
vector<ll> dfs_num, low, vis;
vector<ll> cuts; // art-vertices
low[node] = min(low[node], low[e.v]);
vector<edge> bridges; // bridge-edges
if (low[e.v] >= dfs_num[node]) {
vector<vector<edge>> adj; // graph
if (dfs_num[node] > 0 || n_child > 1)
vector<vector<edge>> bccs; // all the bccs where bcc[i] has all
cuts.push_back(node);
the edges inside it
if (low[e.v] > dfs_num[node]) {
deque<edge> e_stack;
bridges.push_back(e);
// Nodes are numberd from 0
pop(node);
} else pop(node);
MagicComponents(const ll& _n) : n(_n) {
}
adj.assign(n, vector<edge>());
} else if (vis[e.v] == 1) {
edges = 0;
low[node] = min(low[node], dfs_num[e.v]);
}
e_stack.push_back(e);
}
void add_edge(const ll& u, const ll& v) {
}
adj[u].push_back({u,v,edges});
vis[node] = 2;
adj[v].push_back({v,u,edges++});
}
}
void pop(const ll& u) {
void run(void) {
vector<edge> list;
vis.assign(n, 0);
for (;;) {
dfs_num.assign(n, 0);
edge e = e_stack.back();
low.assign(n, 0);
e_stack.pop_back();
bridges.clear();
list.push_back(e);
cuts.clear();
if (e.u == u) break;
bccs.clear();
}
e_stack = deque<edge>();
bccs.push_back(list);
num = 0;
}
for (ll i = 0; i < n; ++i) {
//# Make sure to call run before calling this function.
if (vis[i]) continue;
// Function returns a new graph such that all two connected
dfs(i, -1);
// components are compressed into one node and all bridges
}
// in the previous graph are the only edges connecting the
}
// components in the new tree. }


// map is an integer array that will store the mapping return g;
// for each node in the old graph into the new graph. //$ }
MagicComponents component_tree(vector<ll>& map) {
vector<char> vis(edges); //# Make sure to call run before calling this function.
for (const edge& e : bridges) // Function returns a new graph such that all biconnected
vis[e.id] = true; // components are compressed into one node. Cut nodes will
// be in multiple components, so these nodes will also have
ll num_comp = 0; // their own component by themselves. Edges in the graph
map.assign(map.size(), -1); // represent components to articulation points
for (ll i = 0; i < n; ++i) { // map is an integer array that will store the mapping
if (map[i] == -1) { // for each node in the old graph into the new graph.
deque<ll> q; // Cut points to their special component, and every other node
q.push_back(i); // to their specific component. //$
map[i] = num_comp; MagicComponents bcc_tree(vector<ll>& map) {
while (!q.empty()) { vector<ll> cut(n, -1);
ll node = q.front(); ll size = bccs.size();
q.pop_front(); for (const auto& i : cuts)
for (const edge& e : adj[node]) { map[i] = cut[i] = size++;
if (!vis[e.id] && map[e.v] ==
-1) { MagicComponents g(size);
vis[e.id] = true; vector<ll> used(n);
map[e.v] = num_comp; for (ll i = 0; i < bccs.size(); ++i) {
q.push_back(e.v); for (const edge& e : bccs[i]) {
} vector<ll> tmp = {e.u,e.v};
} for (const ll& node : tmp) {
} if (used[node] != i+1) {
} used[node] = i+1;
++num_comp; if (cut[node] != -1)
} g.add_edge(i,
cut[node]);
MagicComponents g(num_comp); else map[node] = i;
vis.assign(vis.size(), false); }
for (ll i = 0; i < n; ++i) { }
for (const edge& e : adj[i]) { }
if (!vis[e.id] && map[e.v] < map[e.u]) { }
vis[e.id] = true; return g;
// This is an edge in the bridge tree }
// we can add this edge to a new };
graph[] and this will
// be our new tree. We can now do
operations on this tree
g.add_edge(map[e.v], map[e.u]); 5.6 Bellman Ford
}
}
// Is there a negative cycle in the graph?
bool bellman(int src) st.push(u);


{ for(auto v: graph[u])
// Nodes are indexed from 1 {
for (int i = 1; i <= n; i++) if(!visited[v] && findCycle(v)) return true;
dist[i] = INF; else if(instack[v])
dist[src] = 0; {
for(int i = 2; i <= n; i++) cycle.pb({u,v});
{ st.pop();
for (int j = 0; j < edges.size(); j++) int t=u;
{ while(v!=t)
int u = edges[j].first; {
int v = edges[j].second; cycle.pb({st.top(),t});
ll weight = adj[u][v]; t=st.top();
if (dist[u]!=INF && dist[u] + weight < dist[v]) st.pop();
dist[v] = dist[u] + weight; }
} return true;
} }
for (int i = 0; i < edges.size(); i++) }
{ }
int u = edges[i].first; instack[u]=false;
int v = edges[i].second; st.pop();
ll weight = adj[u][v]; return false;
// True if neg-cycle exists }
if (dist[u]!=INF && dist[u] + weight < dist[v]) void find()
return true; {
} FOR(i,1,n+1)
return false; {
} ms(visited,false);
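The same Bellman-Ford routine in self-contained form: n-1 rounds of edge relaxation, then one extra pass where any further improvement certifies a negative cycle reachable from the source (names are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

struct Edge { int u, v; long long w; };

// Returns {negative-cycle flag, distances from src}. Unreachable nodes
// keep dist == INF (LLONG_MAX/4 to leave headroom for additions).
pair<bool, vector<long long>> bellmanFord(int n, const vector<Edge>& es, int src) {
    const long long INF = LLONG_MAX / 4;
    vector<long long> dist(n, INF);
    dist[src] = 0;
    for (int i = 0; i + 1 < n; i++)
        for (auto& e : es)
            if (dist[e.u] != INF && dist[e.u] + e.w < dist[e.v])
                dist[e.v] = dist[e.u] + e.w;
    bool negCycle = false;
    for (auto& e : es)
        if (dist[e.u] != INF && dist[e.u] + e.w < dist[e.v]) negCycle = true;
    return {negCycle, dist};
}
```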
ms(instack,false);
if(findCycle(i))
{
5.7 Cycle in a Directed Graph // A cycle found starting from i
}
}
// Finds a cycle starting from a node u
}
const int N = 1005;
bool visited[N], instack[N];
stack<int> st;
vi graph[N]; int n;
vpii cycle; // contains the edges of the cycle
5.8 Dijkstra!
bool findCycle(int u)
{ struct road
if(!visited[u]) {
{ int u, w;
visited[u]=true; road (int a, int b)
instack[u]=true; {
u=a; w=b; for (int j=1; j<=m; j++)


} {
bool operator < (const road & p) const cin>>road_out>>road_cost;
{ g[i].pb(road_out);
return w>p.w; cost[i].pb(road_cost);
} }
}; }
scanf("%d%d", &start, &end);
int d[100], parent[100], start, end; dijkstra(n);
mvii g, cost; //cout<<d[end]<<endl;
g.clear(); cost.clear();
void dijkstra (int n) int current=end;
{ vi path;
ms(d,INF); while (current!=start)
ms(parent,-1); {
priority_queue <road> Q; path.pb(parent[current]);
Q.push(road(start,0)); current=parent[current];
d[start]=0; }
while (!Q.empty()) printf("Case %d: Path = ", cases++);
{ for (int j=(int)path.size()-1; j>-1; j--)
road t=Q.top(); cout<<path[j]<<" ";
Q.pop(); printf("%d; %d second delay\n", end, d[end]);
int u=t.u; }
for (ul i=0; i<g[u].size(); i++) return 0;
{ }
int v=g[u][i];
if (d[u]+cost[u][i]<d[v])
{
d[v]=d[u]+cost[u][i];
parent[v]=u; 5.9 Dominator Tree
Q.push(road(v,d[v]));
}
// Problem: LightOJ Sabotaging Contest
}
// n - number of cities, m - number of edges, (u,v,t) - edge and cost
}
// Each of the q lines gives a query of k cities n[1],n[2],...,n[k];
return;
// We have to find the number of nodes where if any one of them is
}
removed, the
// shortest path to 0 from n[1]...n[k] will be increased. We also have to
int main()
print
{
// the number of nodes which will be affected by such removal.
int n, m, road_out, road_cost, cases=1;
while (scanf("%d", &n) && n)
/* Solution
{
Run Dijkstra, build shortest path dag, take topsort order and
for (int i=1; i<=n; i++)
reverse it,
{
according to the reversed order add one edge at a time to build
cin>>m;
dominator tree
Finally, run dfs to find the level of each node and subtree size. int x=1;
Answer is the
(level of the lca of the nodes n[1]...n[k] + 1) and subtree size while(true)
of this ancestor {
*/ if((1<<(x+1))>L[p])
break;
vi graph[MAX], cost[MAX], dag[MAX], parent[MAX], Tree[MAX]; x++;
int u, v, t, n, m; }
int dist[MAX];
vector<int> all; FORr(i,x,0)
int L[MAX], table[MAX][18], sub[MAX]; {
bool visited[MAX]; if(L[p]-(1<<i) >= L[q])
p=table[p][i];
void clear() }
{
FOR(i,0,n) if(p==q) return p;
{
graph[i].clear(); FORr(i,x,0)
cost[i].clear(); {
dag[i].clear(); if(table[p][i]!=-1 && table[p][i]!=table[q][i])
parent[i].clear(); {
Tree[i].clear(); p=table[p][i];
sub[i]=0; q=table[q][i];
} }
all.clear(); }
ms(table,-1);
ms(visited,false); return table[p][0];
} }

void dfs(int u) void build(int curr)


{ {
sub[u]++; for(int j=1; (1<<j) < n; j++)
{
FOR(j,0,Tree[u].size()) if(table[curr][j-1]!=-1)
{ table[curr][j]=table[table[curr][j-1]][j-1];
int v=Tree[u][j]; }
dfs(v); }
sub[u]+=sub[v];
} void dijkstra()
} {
priority_queue<pii,vpii,greater<pii> > PQ;
int query(int p, int q) PQ.push(pii(0,0));
{ FOR(i,0,n) dist[i]=INF;
if(L[p]<L[q]) swap(p,q); dist[0]=0;
while(!PQ.empty()) }
{
pii t=PQ.top(); all.pb(u);
PQ.pop(); }

int u=t.second; void buildTree()


{
FOR(j,0,graph[u].size()) L[0]=0;
{ REVERSE(all);
int v=graph[u][j];
FOR(i,0,all.size())
if(dist[u]+cost[u][j]<dist[v]) {
{ int now=all[i];
dist[v]=dist[u]+cost[u][j];
PQ.push(pii(dist[v],v)); if(parent[now].size())
} {
} int anc=parent[now][0];
}
} FOR(j,1,parent[now].size())
{
void buildDag() anc=query(anc,parent[now][j]);
{ }
FOR(i,0,n)
{ L[now]=L[anc]+1;
FOR(j,0,graph[i].size()) table[now][0]=anc;
{ Tree[anc].pb(now);
int v=graph[i][j];
build(now);
if(dist[i]!=INF && dist[v]!=INF && }
dist[v]==dist[i]+cost[i][j]) }
{ }
dag[i].pb(v);
parent[v].pb(i); int main()
} {
} int test, cases=1;
}
} scanf("%d", &test);

void topsort(int u) while(test--)


{ {
visited[u]=true; scanf("%d%d", &n, &m);

FOR(j,0,dag[u].size()) FOR(i,0,m)
{ {
if(!visited[dag[u][j]]) topsort(dag[u][j]); scanf("%d%d%d", &u, &v, &t);
5.10 Edge Coloring


graph[u].pb(v);
graph[v].pb(u);
/* Problem: Given a bipartite graph, find out minimum number of colors
cost[u].pb(t);
to color all the edges such that no two adjacent edges have a same color
cost[v].pb(t);
assigned. Number of minimum colors equals the max degree of a vertex in
}
the bipartite graph. We also need to assign colors to each edge.
*/
dijkstra();
buildDag();
/* Comment by 300iq:
topsort(0);
Minimum answer is max degree.
buildTree();
If max degree 1, we can just color all edges in one color.
dfs(0);
Else, let’s split edges of the graph into two sets, such that max degree
will be (max degree + 1) / 2 in each of these two sets. You can
int q; scanf("%d", &q);
do it
with euler circuit, add some dummy vertex to the left and right part, and
printf("Case %d:\n", cases++);
connect with them vertices with odd degree (and maybe you need to connect
them if they have odd degree). And then color the edges into two colors by
while(q--)
order of euler circuit. Then separate all inital edges into two groups by
{
the
int x, u;
color. Then let’s solve recursively for these two sets, and then just
merge the answers.
scanf("%d", &x);
*/
int anc=-1;
struct edge
{
FOR(i,0,x)
int a, b;
{
};
scanf("%d", &u);
const int N = 1e6 + 7;
if(dist[u]==INF) continue;
vector <int> g[N];
int col[N];
if(anc==-1) anc=u;
vector <edge> glob_edges;
else anc=query(anc,u);
bool vis[N];
}
bool us[N];
int pp;
if(anc==-1) printf("0\n");
else printf("%d %d\n", L[anc]+1, sub[anc]);
void dfs(int v)
}
{
us[v] = true;
clear();
while (g[v].size() > 0)
}
{
return 0;
int ind = g[v].back();
}
g[v].pop_back();
if (vis[ind])
{
continue; mx = max(mx, (int) g[i].size());


} if (g[i].size() % 2)
vis[ind] = true; {
col[ind] = (pp ^= 1); ids.push_back(i);
dfs(glob_edges[ind].a == v ? glob_edges[ind].b : }
glob_edges[ind].a); }
} bool bad = false;
} for (int i = 0; i < x + y; i++)
{
vector <int> solve(vector <edge> e) if (g[i].size() > 1)
{ {
if (e.empty()) bad = true;
{ }
return {}; }
} if (!bad)
vector <int> l, r; {
for (auto c : e) vector <int> res(ind);
{ return res;
l.push_back(c.a); }
r.push_back(c.b); else
} {
sort(l.begin(), l.end()); vector <int> deg(x + y);
sort(r.begin(), r.end()); for (int v : ids)
l.resize(unique(l.begin(), l.end()) - l.begin()); {
r.resize(unique(r.begin(), r.end()) - r.begin()); if (v < x)
glob_edges.clear(); {
int x = (int) l.size() + 1, y = (int) r.size() + 1; glob_edges.push_back({v, x + y - 1});
for (int i = 0; i < x + y; i++) g[i].clear(); g[v].push_back(ind);
int ind = 0; g[x + y - 1].push_back(ind);
for (auto &c : e) ind++;
{ }
c.a = lower_bound(l.begin(), l.end(), c.a) - l.begin(); else
c.b = lower_bound(r.begin(), r.end(), c.b) - r.begin(); {
auto ret = c; glob_edges.push_back({v, x - 1});
ret.b += x; g[v].push_back(ind);
glob_edges.push_back(ret); g[x - 1].push_back(ind);
g[ret.a].push_back(ind); ind++;
g[ret.b].push_back(ind); }
ind++; }
} if (g[x - 1].size() % 2)
vector <int> ids; {
int mx = 0; glob_edges.push_back({x - 1, x + y - 1});
for (int i = 0; i < x + y; i++) g[x - 1].push_back(ind);
{ g[x + y - 1].push_back(ind);
us[i] = 0; ind++;
} }
for (int i = 0; i < ind; i++) return ans;
{ }
col[i] = -1; }
} int main()
for (int i = 0; i < ind; i++) {
{ vector <edge> e;
vis[i] = 0; int l, r, m;
} cin >> l >> r >> m;
for (int i = 0; i < x + y; i++) for (int i = 0; i < m; i++)
{ {
if (!g[i].empty()) int a, b;
{ cin >> a >> b;
dfs(i); a--, b--;
} e.push_back({a, b});
} }
vector <edge> to_l, to_r; auto ret = solve(e);
vector <int> cols; cout << *max_element(ret.begin(), ret.end()) + 1 << ’\n’;
for (int i = 0; i < (int) e.size(); i++) for (int c : ret)
{ {
cols.push_back(col[i]); cout << c + 1 << ’\n’;
if (col[i] == 0) }
{ }
to_l.push_back(e[i]);
}
else
{ 5.11 Edmonds Matching
to_r.push_back(e[i]);
}
/*
}
* Algorithm: Edmonds Blossom Maximum Matching in General Graph
auto x = solve(to_l);
* Order : O( N^4 )
auto y = solve(to_r);
* Note : vertices must be 1-indexed
int mx = *max_element(x.begin(), x.end()) + 1;
*/
int p_x = 0, p_y = 0;
vector <int> ans;
#include<stdio.h>
for (int i = 0; i < (int) e.size(); i++)
#include<string.h>
{
using namespace std;
if (cols[i] == 0)
#define MAX_V 103
{
#define MAX_E MAX_V*MAX_V
ans.push_back(x[p_x++]);
}
long nV,nE,Match[MAX_V];
else
long Last[MAX_V], Next[MAX_E], To[MAX_E];
{
long eI;
ans.push_back(mx + y[p_y++]);
long q[MAX_V], Pre[MAX_V], Base[MAX_V];
}
bool Hash[MAX_V], Blossom[MAX_V], Path[MAX_V];
void Insert(long u, long v) { long Bfs( long p ){


To[eI] = v, Next[eI] = Last[u], Last[u] = eI++; memset( Pre,-1,sizeof(Pre));
To[eI] = u, Next[eI] = Last[v], Last[v] = eI++; memset( Hash,0,sizeof(Hash));
} long i;
for( i=1;i<=nV;i++ ) Base[i] = i;
long Find_Base(long u, long v) { q[1] = p, Hash[p] = 1;
memset( Path,0,sizeof(Path)); for (long head=1, rear=1; head<=rear; head++) {
for (;;) { long u = q[head];
Path[u] = 1; for (long e=Last[u]; e!=-1; e=Next[e]) {
if (Match[u] == -1) break; long v = To[e];
u = Base[Pre[Match[u]]]; if (Base[u]!=Base[v] and v!=Match[u]) {
} if (v==p or (Match[v]!=-1 and Pre[Match[v]]!=-1)) {
while (Path[v] == 0) v = Base[Pre[Match[v]]]; long b = Contract(u, v);
return v; for( i=1;i<=nV;i++ ) if (Blossom[Base[i]]==1) {
} Base[i] = b;
if (!Hash[i]) {
void Change_Blossom(long b, long u) { Hash[i] = 1;
while (Base[u] != b) { q[++rear] = i;
long v = Match[u]; }
Blossom[Base[u]] = Blossom[Base[v]] = 1; }
u = Pre[v]; } else if (Pre[v]==-1) {
if (Base[u] != b) Pre[u] = v; Pre[v] = u;
} if (Match[v]==-1) {
} Augment(v);
return 1;
long Contract(long u, long v) { }
memset( Blossom,0,sizeof(Blossom)); else {
long b = Find_Base(Base[u], Base[v]); q[++rear] = Match[v];
Change_Blossom(b, u); Hash[Match[v]] = 1;
Change_Blossom(b, v); }
if (Base[u] != b) Pre[u] = v; }
if (Base[v] != b) Pre[v] = u; }
return b; }
} }
return 0;
void Augment(long u) { }
while (u != -1) {
long v = Pre[u]; long Edmonds_Blossom( void ){
long k = Match[v]; long i,Ans = 0;
Match[u] = v; memset( Match,-1,sizeof(Match));
Match[v] = u; for( i=1;i<=nV;i++ ) if (Match[i] == -1) Ans += Bfs(i);
u = k; return Ans;
} }
}
}
int main( void ){ }
eI = 0; if (n > m) m = n;
memset( Last,-1,sizeof(Last));
ll i, j, a, b, c, d, r, w;
} for (i = 1; i <= n; i++){
P[0] = i, b = 0;
for (j = 0; j <= m; j++) minv[j] = inf, visited[j] = false;

5.12 Faster Weighted Matching do{


visited[b] = true;
a = P[b], d = 0, w = inf;
#include <bits/stdtr1c++.h>
for (j = 1; j <= m; j++){
#define MAX 1002
if (!visited[j]){
#define MAXIMIZE +1
r = ar[a][j] - U[a] - V[j];
#define MINIMIZE -1
if (r < minv[j]) minv[j] = r, way[j] = b;
if (minv[j] < w) w = minv[j], d = j;
using ll = long long;
}
}
#define inf (~0U >> 1)
#define clr(ar) memset(ar, 0, sizeof(ar))
for (j = 0; j <= m; j++){
#define read() freopen("lol.txt", "r", stdin)
if (visited[j]) U[P[j]] += w, V[j] -= w;
#define dbg(x) cout << #x << " = " << x << endl
else minv[j] -= w;
#define ran(a, b) ((((rand() << 15) ^ rand()) % ((b) - (a) + 1)) + (a))
}
b = d;
using namespace std;
} while (P[b] != 0);
/* call:
wm::hungarian(number_of_nodes_on_left,on_right,matrix_of_weights,flag)
do{
match[i] contains matched right node with i-th left node
d = way[b];
*/
P[b] = P[d], b = d;
namespace wm{ /// hash = 581023
} while (b != 0);
bool visited[MAX];
}
ll U[MAX], V[MAX], P[MAX], way[MAX], minv[MAX], match[MAX],
for (j = 1; j <= m; j++) match[P[j]] = j;
ar[MAX][MAX];
return (flag == MINIMIZE) ? -V[0] : V[0];
/// n = number of row and m = number of columns in 1 based, flag =
}
MAXIMIZE or MINIMIZE
}
/// match[i] contains the column to which row i is matched
ll hungarian(ll n, ll m, ll mat[MAX][MAX], ll flag){
clr(U), clr(V), clr(P), clr(ar), clr(way);

for (ll i = 1; i <= n; i++){


5.13 Global Minimum Cut
for (ll j = 1; j <= m; j++){
ar[i][j] = mat[i][j]; /*Given an undirected graph G = (V, E), we define a cut of G to be a
if (flag == MAXIMIZE) ar[i][j] = -ar[i][j]; partition
of V into two non-empty sets A and B. Earlier, when we looked at network used[last] = true;
flows, we worked with the closely related definition of an s-t cut: cut.push_back(last);
there, given if (best_weight == -1 || w[last] < best_weight) {
a directed graph G = (V, E) with distinguished source and sink nodes s best_cut = cut;
and t, best_weight = w[last];
an s-t cut was defined to be a partition of V into sets A and B such that }
s ∈ A } else {
and t ∈ B. Our definition now is slightly different, since the for (ll j = 0; j < N; j++)
underlying graph w[j] += weights[last][j];
is now undirected and there is no source or sink. added[last] = true;
This problem can be solved by max-flow. First we remove undirected edges }
and replace }
them by two opposite directed edge. Now we fix a node s. Then we consider }
each of return make_pair(best_weight, best_cut);
the n nodes as t and run max-flow. The minimum of those values is the }
answer. };
This is O(n^3).
*/ int main() {
ll T;
struct Stoer_Wagner{ sl(T);
vector <vl> weights; f(t,1,T+1){
Stoer_Wagner(ll N){ ll N,M;
weights.resize(N,vl(N,0)); sll(N,M);
} Stoer_Wagner SW(N);
void AddEdge(ll from, ll to, ll cap){ f(i,0,M){
weights[from][to]+=cap; ll a,b,c;
weights[to][from]+=cap; slll(a,b,c);
} SW.AddEdge(a-1,b-1,c);
pair<ll, vl> GetMinCut() { }
ll N = weights.size(); pf("Case #%lld: ",t); pfl(SW.GetMinCut().x);
vl used(N), cut, best_cut; }
ll best_weight = -1; }

for (ll phase = N-1; phase >= 0; phase--) {


vl w = weights[0];
vl added = used; 5.14 Hopcroft Karp
ll prev, last = 0;
for (ll i = 0; i < phase; i++) {
vector< int > graph[MAX];
prev = last;
int n, m, match[MAX], dist[MAX];
last = -1;
int NIL=0;
for (ll j = 1; j < N; j++)
if (!added[j] && (last == -1 || w[j] > w[last])) last = j;
bool bfs()
if (i == phase-1) {
{
for (ll j = 0; j < N; j++) weights[prev][j] += weights[last][j];
int i, u, v, len;
for (ll j = 0; j < N; j++) weights[j][prev] = weights[prev][j];
queue< int > Q;
for(i=1; i<=n; i++) return true;


{ }
if(match[i]==NIL) }
{ }
dist[i] = 0; dist[u] = INF;
Q.push(i); return false;
} }
else dist[i] = INF; return true;
} }
dist[NIL] = INF;
while(!Q.empty()) int hopcroft_karp()
{ {
u = Q.front(); Q.pop(); int matching = 0, i;
if(u!=NIL) // match[] is assumed NIL for all vertex in graph
{ // All nodes on left and right should be distinct
len = graph[u].size(); while(bfs())
for(i=0; i<len; i++) for(i=1; i<=n; i++)
{ if(match[i]==NIL && dfs(i))
v = graph[u][i]; matching++;
if(dist[match[v]]==INF) return matching;
{ }
dist[match[v]] = dist[u] + 1;
Q.push(match[v]); void clear()
} {
} FOR(j,0,MAX) graph[j].clear();
} ms(match,NIL);
} }
return (dist[NIL]!=INF);
} int main()
{
bool dfs(int u) // ios_base::sync_with_stdio(0);
{ // cin.tie(NULL); cout.tie(NULL);
int i, v, len; // freopen("in.txt","r",stdin);
if(u!=NIL)
{ // SPOJ - Fast Maximum Matching
len = graph[u].size();
for(i=0; i<len; i++) int p, x, y;
{
v = graph[u][i]; scanf("%d%d%d", &n, &m, &p);
if(dist[match[v]]==dist[u]+1)
{ FOR(i,0,p)
if(dfs(match[v])) {
{ scanf("%d%d", &x, &y);
match[v] = u; graph[x].pb(n+y);
match[u] = v; graph[n+y].pb(x);
}
T matching() { // maximum weight matching
printf("%d\n", hopcroft_karp()); fill(lx + 1, lx + 1 + n, numeric_limits<T>::lowest());
ms(ly,0);
return 0; ms(match,0);
} for (int i = 1; i <= n; ++i) {
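For cross-checking Hopcroft-Karp's answer on small inputs, the simpler O(V*E) Kuhn augmenting-path algorithm suffices. This is a different, slower algorithm than the O(E*sqrt(V)) one above, sketched with my own 0-indexed interface:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Kuhn's algorithm: for each left vertex, try to find an augmenting path
// in the alternating-path forest; adj[u] lists the right vertices of u.
int kuhnMatching(int nLeft, int nRight, const vector<vector<int>>& adj) {
    vector<int> matchR(nRight, -1);
    vector<char> used;
    function<bool(int)> tryAugment = [&](int u) {
        for (int v : adj[u]) {
            if (used[v]) continue;
            used[v] = 1;
            if (matchR[v] == -1 || tryAugment(matchR[v])) {
                matchR[v] = u;       // (re)match v to u along the path
                return true;
            }
        }
        return false;
    };
    int matching = 0;
    for (int u = 0; u < nLeft; u++) {
        used.assign(nRight, 0);
        if (tryAugment(u)) matching++;
    }
    return matching;
}
```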
for (int j = 1; j <= m; ++j) lx[i] = max(lx[i], g[i][j]);
}
for (int k = 1; k <= n; ++k) {
5.15 Hungarian Weighted Matching fill(slack + 1, slack + 1 + m, numeric_limits<T>::max());
while (true) {
ms(vx,0);
// hungarian weighted matching algo
ms(vy,0);
// finds the max cost of max matching, to find mincost, add edges as
if (find(k)) break;
negatives
else {
// Nodes are indexed from 1 on both sides
T delta = numeric_limits<T>::max();
template<typename T>
for (int i = 1; i <= m; ++i) {
struct KuhnMunkras { // n for left, m for right
if (!vy[i]) delta = min(delta, slack[i]);
int n, m, match[maxM];
}
T g[maxN][maxM], lx[maxN], ly[maxM], slack[maxM];
for (int i = 1; i <= n; ++i) {
bool vx[maxN], vy[maxM];
if (vx[i]) lx[i] -= delta;
}
void init(int n_, int m_) {
for (int i = 1; i <= m; ++i) {
ms(g,0); n = n_, m = m_;
if (vy[i]) ly[i] += delta;
}
if (!vy[i]) slack[i] -= delta;
}
void add(int u, int v, T w) {
}
g[u][v] = w;
}
}
}
T result = 0;
bool find(int x) {
for (int i = 1; i <= n; ++i) result += lx[i];
vx[x] = true;
for (int i = 1; i <= m; ++i) result += ly[i];
for (int y = 1; y <= m; ++y) {
return result;
if (!vy[y]) {
}
T delta = lx[x] + ly[y] - g[x][y];
};
if (delta==0) {
vy[y] = true;
if (match[y] == 0 || find(match[y])) {
match[y] = x;
return true;
5.16 Johnson’s Algorithm
}
} else slack[y] = min(slack[y], delta); /// Johnson’s algorithm for all pair shortest paths in sparse graphs
} /// Complexity: O(N * M) + O(N * M * log(N))
}
return false; const long long INF = (1LL << 60) - 666;
}
struct edge{ for (int i = 0; i < adj[u].size(); i++){


int u, v; int v = adj[u][i].first;
long long w; long long w = adj[u][i].second;
edge(){}
edge(int u, int v, long long w) : u(u), v(v), w(w){} if ((temp[u] + w) < temp[v]){
S.erase(make_pair(temp[v], v));
void print(){ temp[v] = temp[u] + w;
cout << "edge " << u << " " << v << " " << w << endl; dis[v] = dis[u] + w;
} S.insert(make_pair(temp[v], v));
}; }
}
bool bellman_ford(int n, int src, vector <struct edge> E, vector <long }
long>& dis){ return dis;
dis[src] = 0; }
for (int i = 0; i <= n; i++){
int flag = 0; void johnson(int n, long long ar[MAX][MAX], vector <struct edge> E){
for (auto e: E){ vector <long long> potential(n + 1, INF);
if ((dis[e.u] + e.w) < dis[e.v]){ for (int i = 1; i <= n; i++) E.push_back(edge(0, i, 0));
flag = 1;
dis[e.v] = dis[e.u] + e.w; assert(bellman_ford(n, 0, E, potential));
} for (int i = 1; i <= n; i++) E.pop_back();
}
if (flag == 0) return true; for (int i = 1; i <= n; i++){
} vector <long long> dis = dijkstra(n, i, E, potential);
return false; for (int j = 1; j <= n; j++){
} ar[i][j] = dis[j];
}
vector <long long> dijkstra(int n, int src, vector <struct edge> E, }
vector <long long> potential){ }
set<pair<long long, int> > S;
vector <long long> dis(n + 1, INF); long long ar[MAX][MAX];
vector <long long> temp(n + 1, INF);
vector <pair<int, long long> > adj[n + 1]; int main(){
vector <struct edge> E;
dis[src] = temp[src] = 0; E.push_back(edge(1, 2, 2));
S.insert(make_pair(temp[src], src)); E.push_back(edge(2, 3, -15));
for (auto e: E){ E.push_back(edge(1, 3, -10));
adj[e.u].push_back(make_pair(e.v, e.w));
} int n = 3;
johnson(n, ar, E);
while (!S.empty()){ for (int i = 1; i <= n; i++){
pair<long long, int> cur = *(S.begin()); for (int j = 1; j <= n; j++){
S.erase(cur); printf("%d %d = %lld\n", i, j, ar[i][j]);
}
int u = cur.second; }
return 0; 5.18 LCA 2
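Johnson's key invariant is that with Bellman-Ford potentials h, every reweighted cost w'(u,v) = w(u,v) + h[u] - h[v] is non-negative, and true distances are recovered as dist(s,t) = dist'(s,t) - h[s] + h[t]. A minimal sketch of just that invariant (it substitutes repeated relaxation for the Dijkstra stage, and the names are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

struct E { int u, v; long long w; };

// Shortest dist(0, n-1) via Johnson reweighting (assumes no negative cycle).
long long johnsonDist(int n, vector<E> edges) {
    // Potentials: Bellman-Ford from a virtual source with 0-weight edges to
    // every node is equivalent to starting h at all-zero and relaxing.
    vector<long long> h(n, 0);
    for (int it = 0; it + 1 < n; it++)
        for (auto& e : edges)
            h[e.v] = min(h[e.v], h[e.u] + e.w);
    const long long INF = LLONG_MAX / 4;
    vector<long long> d(n, INF); d[0] = 0;
    for (int it = 0; it + 1 < n; it++)
        for (auto& e : edges) {
            long long w2 = e.w + h[e.u] - h[e.v];
            assert(w2 >= 0);                  // Johnson's invariant
            if (d[e.u] != INF) d[e.v] = min(d[e.v], d[e.u] + w2);
        }
    return d[n-1] - h[0] + h[n-1];            // undo the reweighting
}
```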


}

int n, lef[MAX], rig[MAX], dist[MAX], table[2 * MAX][18];


vi graph[MAX], stk;
5.17 Kruskal
void dfs(int u, int p, int d)
struct edge {
{ dist[u] = d;
int u, v, w; lef[u] = rig[u] = stk.size();
bool operator < (const edge & p) const stk.pb(u);
{ for (auto v : graph[u])
return w < p.w; {
} if (v == p) continue;
}; dfs(v, u, d + 1);
edge get; rig[u] = stk.size();
int parent[100]; stk.pb(u);
vector <edge> e; }
int find(int r) }
{
if (parent[r] == r) int lca(int u, int v)
return r; {
return parent[r] = find(parent[r]); int l = min(lef[u], lef[v]);
} int r = max(rig[u], rig[v]);
int mst(int n) int g = __builtin_clz(r - l + 1) ^ 31;
{ return dist[table[l][g]] < dist[table[r - (1 << g) + 1][g]] ?
sort(e.begin(), e.end()); table[l][g] : table[r - (1 << g) + 1][g];
for (int i = 1; i <= n; i++) }
parent[i] = i;
int cnt = 0, s = 0; void build()
for (int i = 0; i < (int)e.size(); i++) {
{ dfs(1, -1, 0);
int u = find(e[i].u);
int v = find(e[i].v); for (int i = 0; i < stk.size(); i++) table[i][0] = stk[i];
if (u != v) for (int j = 1; (1 << j) <= stk.size(); j++)
{ {
parent[u] = v; for (int i = 0; i + (1 << j) <= stk.size(); i++)
cnt++; {
s += e[i].w; table[i][j] = (dist[table[i][j - 1]] < dist[table[i
if (cnt == n - 1) + (1 << (j - 1))][j - 1]] ?
break; table[i][j - 1] : table[i + (1 << (j -
} 1))][j - 1]);
} }
} }
}

5.19 LCA }
}
vi graph[100];
return P[p];
int P[100], L[100], table[100][20];
}
void dfs(int from, int to, int depth)
{
void build(int n)
P[to]=from;
{
L[to]=depth;
ms(table,-1);
FOR(i,0,(int)graph[to].size())
{
FOR(i,0,n)
int v=graph[to][i];
table[i][0]=P[i];
if(v==from)
continue;
for(int j=1; 1<<j < n; j++)
dfs(to,v,depth+1);
{
}
for(int i=0; i<n; i++)
}
{
if(table[i][j-1]!=-1)
int query(int n, int p, int q)
table[i][j]=table[table[i][j-1]][j-1];
{
}
if(L[p]<L[q]) swap(p,q);
}
}
int x=1;

while(true)
{
if((1<<(x+1))>L[p])
5.20 Manhattan MST
break;
x++; int n;
} vi graph[MAX], cost[MAX];

FORr(i,x,0) struct point {


{ int x, y, index;
if(L[p]-(1<<i) >= L[q]) bool operator<(const point &p) const { return x == p.x ? y < p.y :
p=table[p][i]; x < p.x; }
} } p[MAX];

if(p==q) return p; struct node {


int value, p;
FORr(i,x,0) } T[MAX];
{
if(table[p][i]!=-1 && table[p][i]!=table[q][i]) struct UnionFind {
{ int p[MAX];
p=table[p][i]; void init(int n) { for (int i = 1; i <= n; i++) p[i] = i; }
q=table[q][i]; int find(int u) { return p[u] == u ? u : p[u] = find(p[u]); }

void Union(int u, int v) { p[find(u)] = find(v); } }


} dsu;
int manhattan() {
struct edge { for (int i = 1; i <= n; ++i) p[i].index = i;
int u, v, c; for (int dir = 1; dir <= 4; ++dir) {
bool operator < (const edge &p) const { if (dir == 2 || dir == 4) {
return c < p.c; for (int i = 1; i <= n; ++i) swap(p[i].x, p[i].y);
} } else if (dir == 3) {
}; for (int i = 1; i <= n; ++i) p[i].x = -p[i].x;
vector<edge> edges; }
sort(p + 1, p + 1 + n);
int query(int x) { vector<int> v; static int a[MAX];
int r = inf, p = -1; for (int i = 1; i <= n; ++i) a[i] = p[i].y - p[i].x,
for (; x <= n; x += (x & -x)) if (T[x].value < r) r = T[x].value, v.push_back(a[i]);
p = T[x].p; sort(v.begin(), v.end());
return p; v.erase(unique(v.begin(), v.end()), v.end());
} for (int i = 1; i <= n; ++i) a[i] = lower_bound(v.begin(),
v.end(), a[i]) - v.begin() + 1;
void modify(int x, int w, int p) { for (int i = 1; i <= n; ++i) T[i].value = inf, T[i].p = -1;
for (; x > 0; x -= (x & -x)) if (T[x].value > w) T[x].value = w, for (int i = n; i >= 1; --i) {
T[x].p = p; int pos = query(a[i]);
} if (pos != -1) add(p[i].index, p[pos].index,
dist(p[i], p[pos]));
int dist(point &a, point &b) { modify(a[i], p[i].x + p[i].y, i);
return abs(a.x - b.x) + abs(a.y - b.y); }
} }
}
void add(int u, int v, int c) {
edges.pb({u, v, c}); int main()
} {
int test, cases = 1;
void kruskal() {
dsu.init(n); scanf("%d", &n);
SORT(edges);
for (edge e : edges) { // points
int u = e.u, v = e.v, c = e.c; FOR(i,1,n+1)
// cout<<u<<" "<<v<<" "<<c<<endl; {
if (dsu.find(u) != dsu.find(v)) { scanf("%d%d", &p[i].x, &p[i].y);
graph[u].push_back(v); }
graph[v].push_back(u);
cost[u].push_back(c); manhattan();
cost[v].push_back(c); kruskal();
dsu.Union(u, v);
} // graph = manhattan mst adjacency list
} // cost = corresponding cost of edges

return 0; #define fst first


} #define snd second
#define all(c) ((c).begin()), ((c).end())

const long long INF = (1ll << 50);


5.21 Max Flow Dinic 2 struct graph {
typedef long long flow_type;
struct edge {
//
int src, dst;
// Dinic’s maximum flow
flow_type capacity, flow;
//
size_t rev;
// Description:
};
// Given a directed network G = (V, E) with edge capacity c: E->R.
int n;
// The algorithm finds a maximum flow.
vector<vector<edge>> adj;
//
graph(int n) : n(n), adj(n) { }
// Algorithm:
void add_edge(int src, int dst, flow_type capacity) {
// Dinic’s blocking flow algorithm.
adj[src].push_back({src, dst, capacity, 0,
//
adj[dst].size()});
// Complexity:
adj[dst].push_back({dst, src, 0, 0, adj[src].size() - 1});
// O(n^2 m), but very fast in practice.
}
// In particular, for a unit capacity graph,
flow_type max_flow(int s, int t) {
// it runs in O(m min{m^{1/2}, n^{2/3}}).
vector<int> level(n), iter(n);
//
function<int(void)> levelize = [&]() { // forward levelize
// Verified:
level.assign(n, -1); level[s] = 0;
// SPOJ FASTFLOW
queue<int> Q; Q.push(s);
//
while (!Q.empty()) {
// Reference:
int u = Q.front(); Q.pop();
// E. A. Dinic (1970):
if (u == t) break;
// Algorithm for solution of a problem of maximum flow in networks with
for (auto &e : adj[u]) {
power estimation.
if (e.capacity > e.flow &&
// Soviet Mathematics Doklady, vol. 11, pp. 1277-1280.
level[e.dst] < 0) {
//
Q.push(e.dst);
// B. H. Korte and J. Vygen (2008):
level[e.dst] = level[u] + 1;
// Combinatorial Optimization: Theory and Algorithms.
}
// Springer Berlin Heidelberg.
}
//
}
return level[t];
#include <iostream>
};
#include <vector>
function<flow_type(int, flow_type)> augment = [&](int u,
#include <cstdio>
flow_type cur) {
#include <queue>
if (u == t) return cur;
#include <algorithm>
for (int &i = iter[u]; i < adj[u].size(); ++i) {
#include <functional>
edge &e = adj[u][i], &r = adj[e.dst][e.rev];
using namespace std;

if (e.capacity > e.flow && level[u] < bound R, replace the edge with capacity R-L.
level[e.dst]) { Let, sum[u] = (sum of lowerbounds of ingoing edges to u) - (sum of
flow_type f = augment(e.dst, min(cur, lowerbounds of
e.capacity - e.flow)); outgoing edges from u),
if (f > 0) { here u can be all nodes of the graph, including s and t. For all such u,
e.flow += f; if sum[u]>0
r.flow -= f; add edge (s’,u,sum[u]), add sum[u] to a value ’total’, otherwise add edge
return f; (u,t’,-sum[u]).
} Lastly add (t,s,INF). Then run max-flow from s’ to t’.
} A feasible flow won’t exist if flow from s’ to t’ < total, otherwise
} if we run a maxflow from s to t (not s’ to t’), we get the max-flow
return flow_type(0); satisfying the bounds.
}; ***
for (int u = 0; u < n; ++u) // initialize To find the minimal flow satisfying the bounds, we do a binary search on
for (auto &e : adj[u]) e.flow = 0; the
capacity of the edge (t,s,INF). Each time during binary search, we check
flow_type flow = 0; if a feasible
while (levelize() >= 0) { flow exists or not with current capacity of (t,s,INF) edge.
fill(all(iter), 0); */
for (flow_type f; (f = augment(s, INF)) > 0; )
flow += f; struct Edge
} {
return flow; int to, rev, f, cap;
} };
};

int main() { class Dinic


for (int n, m; scanf("%d %d", &n, &m) == 2; ) { {
graph g(n); public:
for (int i = 0; i < m; ++i) {
int u, v, w; int dist[MAX], q[MAX], work[MAX], src, dest;
scanf("%d %d %d", &u, &v, &w); vector<Edge> graph[MAX];
//g.add_edge(u, v, w); // MAX equals to node_number
g.add_edge(u - 1, v - 1, w);
} void init(int sz)
printf("%lld\n", g.max_flow(0, n - 1)); {
} FOR(i,0,sz+1) graph[i].clear();
} }

void clearFlow(int sz)


{
5.22 Max Flow Dinic FOR(i,0,sz+1)
{
FOR(j,0,graph[i].size())
/*
graph[i][j].f=0;
Add s’, t’, s, t to the graph. For edges with lower bound L and upper

}
} for(int &i=work[u]; i<(int)graph[u].size(); i++)
{
void addEdge(int s, int t, int cap) Edge &e=graph[u][i];
{
Edge a={t,(int)graph[t].size(),0,cap}; if(e.cap<=e.f) continue;
Edge b={s,(int)graph[s].size(),0,0};
int v=e.to;
// If our graph has bidirectional edges
// Capacity for the Edge b will equal to cap if(dist[v]==dist[u]+1)
// For directed, it is 0 {
int df=dfs(v,min(f,e.cap-e.f));
graph[s].emplace_back(a);
graph[t].emplace_back(b); if(df>0)
} {
e.f+=df;
bool bfs() graph[v][e.rev].f-=df;
{
ms(dist,-1); return df;
dist[src]=0; }
int qt=0; }
q[qt++]=src; }

for(int qh=0; qh<qt; qh++) return 0;


{ }
int u=q[qh];
int maxFlow(int _src, int _dest)
for(auto &e: graph[u]) {
{ src=_src;
int v=e.to; dest=_dest;

if(dist[v]<0 && e.f<e.cap) int result=0;


{
dist[v]=dist[u]+1; while(bfs())
q[qt++]=v; {
} // debug;
} fill(work,work+MAX,0);
} while(int delta=dfs(src,INF))
result+=delta;
return dist[dest]>=0; }
}
return result;
int dfs(int u, int f) }
{ };
if(u==dest) return f;

5.23 Max Flow Edmond Karp edge &rev(edge e) { return adj[e.dst][e.rev]; };

vector<vector<edge>> adj;
//
graph(int n) : n(n), adj(n) { }
// Maximum Flow (Edmonds-Karp)
void add_edge(int src, int dst, int capacity) {
//
adj[src].push_back({src, dst, capacity, 0,
// Description:
adj[dst].size()});
// Given a directed network G = (V, E) with edge capacity c: E->R.
adj[dst].push_back({dst, src, 0, 0, adj[src].size() - 1});
// The algorithm finds a maximum flow.
}
//
int max_flow(int s, int t) {
// Algorithm:
for (int u = 0; u < n; ++u)
// Edmonds-Karp shortest augmenting path algorithm.
for (auto &e : adj[u]) e.residue = e.capacity;
//
int total = 0;
// Complexity:
while (1) {
// O(n m^2)
vector<int> prev(n, -1); prev[s] = -2;
//
queue<int> que; que.push(s);
// Verified:
while (!que.empty() && prev[t] == -1) {
// AOJ GRL_6_A: Maximum Flow
int u = que.front(); que.pop();
//
for (edge &e : adj[u]) {
// Reference:
if (prev[e.dst] == -1 && e.residue >
// B. H. Korte and J. Vygen (2008):
0) {
// Combinatorial Optimization: Theory and Algorithms.
prev[e.dst] = e.rev;
// Springer Berlin Heidelberg.
que.push(e.dst);
//
}
}
#include <iostream>
}
#include <vector>
if (prev[t] == -1) break;
#include <queue>
int inc = INF;
#include <cstdio>
for (int u = t; u != s; u = adj[u][prev[u]].dst)
#include <algorithm>
inc = min(inc, rev(adj[u][prev[u]]).residue);
#include <functional>
for (int u = t; u != s; u = adj[u][prev[u]].dst) {
adj[u][prev[u]].residue += inc;
using namespace std;
rev(adj[u][prev[u]]).residue -= inc;
}
#define fst first
total += inc;
#define snd second
} // { u : visited[u] == true } is s-side
#define all(c) ((c).begin()), ((c).end())
return total;
}
const int INF = 1 << 30;
};
struct graph {
int n;
int main() {
struct edge {
for (int n, m; scanf("%d %d", &n, &m) == 2; ) {
int src, dst;
graph g(n);
int capacity, residue;
for (int i = 0; i < m; ++i) {
size_t rev;
int u, v, w;
};

scanf("%d %d %d", &u, &v, &w); const int INF = 1 << 30;
g.add_edge(u, v, w); struct graph {
} typedef long long flow_type;
printf("%d\n", g.max_flow(0, n - 1)); struct edge {
} int src, dst;
} flow_type capacity, flow;
size_t rev;
};
int n;
5.24 Max Flow Ford Fulkerson vector<vector<edge>> adj;
graph(int n) : n(n), adj(n) { }
void add_edge(int src, int dst, flow_type capacity) {
//
adj[src].push_back({src, dst, capacity, 0,
// Ford-Fulkerson’s maximum flow
adj[dst].size()});
//
adj[dst].push_back({dst, src, 0, 0, adj[src].size() - 1});
// Description:
}
// Given a directed network G = (V, E) with edge capacity c: E->R.
int max_flow(int s, int t) {
// The algorithm finds a maximum flow.
vector<bool> visited(n);
//
function<flow_type(int, flow_type)> augment = [&](int u,
// Algorithm:
flow_type cur) {
// Ford-Fulkerson’s augmenting path algorithm
if (u == t) return cur;
//
visited[u] = true;
// Complexity:
for (auto &e : adj[u]) {
// O(m F), where F is the maximum flow value.
if (!visited[e.dst] && e.capacity > e.flow) {
//
flow_type f = augment(e.dst,
// Verified:
min(e.capacity - e.flow, cur));
// AOJ GRL_6_A: Maximum Flow
if (f > 0) {
//
e.flow += f;
// Reference:
adj[e.dst][e.rev].flow -= f;
// B. H. Korte and J. Vygen (2008):
return f;
// Combinatorial Optimization: Theory and Algorithms.
}
// Springer Berlin Heidelberg.
}
//
}
return flow_type(0);
#include <iostream>
};
#include <vector>
for (int u = 0; u < n; ++u)
#include <cstdio>
for (auto &e : adj[u]) e.flow = 0;
#include <algorithm>
#include <functional>
flow_type flow = 0;
while (1) {
using namespace std;
fill(all(visited), false);
flow_type f = augment(s, INF);
#define fst first
if (f == 0) break;
#define snd second
flow += f;
#define all(c) ((c).begin()), ((c).end())
}

return flow; #include <queue>


} #include <algorithm>
}; #include <functional>

int main() { using namespace std;


for (int n, m; scanf("%d %d", &n, &m) == 2; ) {
graph g(n); #define fst first
for (int i = 0; i < m; ++i) { #define snd second
int u, v, w; #define all(c) ((c).begin()), ((c).end())
scanf("%d %d %d", &u, &v, &w);
g.add_edge(u, v, w); const long long INF = (1ll << 50);
} struct graph {
printf("%d\n", g.max_flow(0, n - 1)); typedef long long flow_type;
} struct edge {
} int src, dst;
flow_type capacity, flow;
size_t rev;
};
5.25 Max Flow Goldberg Tarjan int n;
vector<vector<edge>> adj;
graph(int n) : n(n), adj(n) { }
//
// Maximum Flow (Goldberg-Tarjan, aka. Push-Relabel, Preflow-Push)
void add_edge(int src, int dst, int capacity) {
//
adj[src].push_back({src, dst, capacity, 0,
// Description:
adj[dst].size()});
// Given a directed network G = (V, E) with edge capacity c: E->R.
adj[dst].push_back({dst, src, 0, 0, adj[src].size() - 1});
// The algorithm finds a maximum flow.
}
//
// Algorithm:
flow_type max_flow(int s, int t) {
// Goldberg-Tarjan’s push-relabel algorithm with gap-heuristics.
vector<flow_type> excess(n);
//
vector<int> dist(n), active(n), count(2 * n);
// Complexity:
queue<int> Q;
// O(n^3)
auto enqueue = [&](int v) {
//
if (!active[v] && excess[v] > 0) { active[v] =
// Verified:
true; Q.push(v); }
// SPOJ FASTFLOW
};
//
auto push = [&](edge & e) {
// Reference:
flow_type f = min(excess[e.src], e.capacity -
// B. H. Korte and Jens Vygen (2008):
e.flow);
// Combinatorial Optimization: Theory and Algorithms.
if (dist[e.src] <= dist[e.dst] || f == 0) return;
// Springer Berlin Heidelberg.
e.flow += f;
//
adj[e.dst][e.rev].flow -= f;
excess[e.dst] += f;
#include <iostream>
excess[e.src] -= f;
#include <vector>
enqueue(e.dst);
#include <cstdio>

}; for (int n, m; scanf("%d %d", &n, &m) == 2; ) {


graph g(n);
dist[s] = n; active[s] = active[t] = true; for (int i = 0; i < m; ++i) {
count[0] = n - 1; count[n] = 1; int u, v, w;
for (int u = 0; u < n; ++u) scanf("%d %d %d", &u, &v, &w);
for (auto &e : adj[u]) e.flow = 0; g.add_edge(u, v, w);
for (auto &e : adj[s]) { }
excess[s] += e.capacity; printf("%d\n", g.max_flow(0, n - 1));
push(e); }
} }
while (!Q.empty()) {
int u = Q.front(); Q.pop();
active[u] = false;
5.26 Maximum Bipartite Matching and Min Vertex Cover
for (auto &e : adj[u]) push(e);
if (excess[u] > 0) {
int n, m, p; // n = # of nodes on left, m = # of nodes on right
if (count[dist[u]] == 1) {
vi bp[N]; // bipartite graph
int k = dist[u]; // Gap Heuristics
int matched[N], revmatch[N];
for (int v = 0; v < n; v++) {
bool seen[N], visited[2][N];
if (dist[v] < k) continue;
count[dist[v]]--;
bool trymatch(int u)
dist[v] = max(dist[v], n + 1);
{
count[dist[v]]++;
FOR(j,0,bp[u].size())
enqueue(v);
{
}
int v=bp[u][j];
} else {
if(seen[v]) continue;
count[dist[u]]--; // Relabel
dist[u] = 2 * n;
seen[v]=true;
for (auto &e : adj[u])
if (e.capacity > e.flow)
// v is on right, u on left
dist[u] = min(dist[u],
if(matched[v]<0 || trymatch(matched[v]))
dist[e.dst] + 1);
{
count[dist[u]]++;
matched[v]=u;
enqueue(u);
revmatch[u]=v;
}
return true;
}
}
}
}
flow_type flow = 0;
return false;
for (auto e : adj[s]) flow += e.flow;
}
return flow;
}
// 0 based
};
int maxbpm(int sz)
{
int main() {
ms(matched,-1);

ms(revmatch,-1); // for min-vertex-cover }


// The following probably optimizes for large graphs
int ret=0; bool trymatch(int u)
{
FOR(i,0,sz) // tag is used so that we don’t clear seen each time
{ if(seen[u]==tag) return false;
ms(seen,false); seen[u]=tag;
if(trymatch(i)) ret++; FOR(j,0,bp[u].size())
} {
int v=bp[u][j];
return ret; // first we only consider any matched[v]==-1 case
} if(matched[v]<0)
{
void dfsLast(int u, bool side) matched[v]=u;
{ return true;
if(visited[side][u]) return; }
visited[side][u]=true; }
FOR(j,0,bp[u].size())
if(!side) {
{ int v=bp[u][j];
for(int i=0; i<n; i++) // Now we go deeper and call trymatch
{ if(trymatch(matched[v]))
if(graph[u][i] && matched[u]!=i) {
dfsLast(i,1-side); matched[v]=u;
} return true;
} }
else dfsLast(matched[u],1-side); }
} return false;
}
void findMinVertexCover()
{
FOR(i,0,n)
{ 5.27 Maximum Matching in General Graphs (Randomized
if(revmatch[i]==-1) Algorithm)
{
dfsLast(i,0);
#include <time.h>
}
#define MAX 1010
}
bool adj[MAX][MAX];
// Assuming both sides have n nodes
int n, ar[MAX][MAX];
vi mvc, mis; // min vertex cover, max independent set
const int MOD = 1073750017;
FOR(i,0,n)
int expo(long long x, int n){
{
long long res = 1;
if(!visited[0][i] || visited[1][i]) mvc.pb(i);
if(!(!visited[0][i] || visited[1][i])) mis.pb(i);
while (n){
}
if (n & 1) res = (res * x) % MOD;
85

x = (x * x) % MOD; unsigned int x = (rand() << 15) ^ rand();


n >>= 1; x = (x % (MOD - 1)) + 1;
} ar[i][j] = x, ar[j][i] = MOD - x;
}
return (res % MOD); }
} }
int rank(int n){ /// hash = 646599
long long inv; return (rank(n) >> 1);
int i, j, k, u, v, x, r = 0, T[MAX]; }
int main(){
for (j = 0; j < n; j++){ int T = 0, t, m, i, j, a, b;
for (k = r; k < n && !ar[k][j]; k++){}
if (k == n) continue; scanf("%d", &t);
while (t--){
inv = expo(ar[k][j], MOD - 2); clr(adj);
for (i = 0; i < n; i++){ scanf("%d %d", &n, &m);
x = ar[k][i]; while (m--){
ar[k][i] = ar[r][i]; scanf("%d %d", &a, &b);
ar[r][i] = (inv * x) % MOD; a--, b--;
} adj[a][b] = adj[b][a] = true;
}
for (u = r + 1; u < n; u++){
if (ar[u][j]){ printf("Case %d: %d\n", ++T, tutte(n));
for (v = j + 1; v < n; v++){ }
if (ar[r][v]){ return 0;
ar[u][v] = ar[u][v] - (((long long)ar[r][v] * }
ar[u][j]) % MOD);
if (ar[u][v] < 0) ar[u][v] += MOD;
}
} 5.28 Min Cost Arborescence
}
}
// Min Cost Arborescence class in C++
r++;
// Directed MST
}
// dir_mst returns the cost O(EV)?
return r;
struct Edge {
}
int u, v;
int tutte(int n){
ll dist;
int i, j;
int kbps;
srand(time(0));
};
clr(ar);
struct MinCostArborescence{
for (i = 0; i < n; i++){
int n, m;
for (j = i + 1; j < n; j++){
Edge allEdges[MAX];
if (adj[i][j]){
int done[62], prev[62], id[62];

ll in[62]; while (done[v] != i && id[v] == -1 && v != root) {


done[v] = i;
void init(int n) v = prev[v];
{ }
this->n = n; if (v != root && id[v] == -1) {
m = 0; for (int u = prev[v]; u != v; u = prev[u])
} id[u] = cnt;
id[v] = cnt++;
void add_Edge(int u, int v, ll dist) }
{ }
allEdges[m++] = {u,v,dist,0}; if (cnt == 0) break;
} for (int i = 0; i < n; i++)
if (id[i] == -1) id[i] = cnt++;
void add_Edge(Edge e) for (int i = 0; i < m; i++) {
{ int v = allEdges[i].v;
allEdges[m++] = e; allEdges[i].u = id[allEdges[i].u];
} allEdges[i].v = id[allEdges[i].v];
if (allEdges[i].u != allEdges[i].v)
ll dir_mst(int root) { allEdges[i].dist -= in[v];
ll ans = 0; }
while (true) { n = cnt;
for (int i = 0; i < n; i++) in[i] = INF; root = id[root];
for (int i = 0; i < m; i++) { }
int u = allEdges[i].u; return ans;
int v = allEdges[i].v; }
if (allEdges[i].dist < in[v] && u != v) { } Arboroscense;
in[v] = allEdges[i].dist;
prev[v] = u;
}
} 5.29 Min Cost Max Flow 1
for (int i = 0; i < n; i++) {
if (i == root) continue;
//
if (in[i] == INF) return -1;
// Minimum Cost Maximum Flow (Tomizawa, Edmonds-Karp’s successive
}
shortest path)
//
int cnt = 0;
// Description:
memset(id, -1, sizeof(id));
// Given a directed graph G = (V,E) with nonnegative capacity c and
memset(done, -1, sizeof(done));
cost w.
in[root] = 0;
// The algorithm find a maximum s-t flow of G with minimum cost.
//
for (int i = 0; i < n; i++)
// Algorithm:
{
// Tomizawa (1971), and Edmonds and Karp (1972)’s
ans += in[i];
// successive shortest path algorithm,
int v = i;
// which is also known as the primal-dual method.

// for (int iter = 0; ; ++iter) {


// Complexity: vector<int> prev(n, -1); prev[s] = 0;
// O(F m log n), where F is the amount of maximum flow. vector<cost_type> dist(n, INF); dist[s] = 0;
if (iter == 0) { // use Bellman-Ford to remove negative cost edges
vector<int> count(n); count[s] = 1;
// Caution: Probably does not support Negative Costs queue<int> que;
// Negative cost is supported in an implementation named: for (que.push(s); !que.empty(); ) {
mincostmaxflow2.cpp int u = que.front(); que.pop();
count[u] = -count[u];
for (auto &e: adj[u]) {
#define fst first if (e.capacity > e.flow && dist[e.dst] > dist[e.src] +
#define snd second rcost(e)) {
#define all(c) ((c).begin()), ((c).end()) dist[e.dst] = dist[e.src] + rcost(e);
#define TEST(s) if (!(s)) { cout << __LINE__ << " " << #s << endl; prev[e.dst] = e.rev;
exit(-1); } if (count[e.dst] <= 0) {
count[e.dst] = -count[e.dst] + 1;
const long long INF = 1e9; que.push(e.dst);
struct graph { }
typedef int flow_type; }
typedef int cost_type; }
struct edge { }
int src, dst; } else { // use Dijkstra
flow_type capacity, flow; typedef pair<cost_type, int> node;
cost_type cost; priority_queue<node, vector<node>, greater<node>> que;
size_t rev; que.push({0, s});
}; while (!que.empty()) {
vector<edge> edges; node a = que.top(); que.pop();
void add_edge(int src, int dst, flow_type cap, cost_type cost) { if (a.snd == t) break;
adj[src].push_back({src, dst, cap, 0, cost, adj[dst].size()}); if (dist[a.snd] > a.fst) continue;
adj[dst].push_back({dst, src, 0, 0, -cost, adj[src].size()-1}); for (auto e: adj[a.snd]) {
} if (e.capacity > e.flow && dist[e.dst] > a.fst + rcost(e)) {
int n; dist[e.dst] = dist[e.src] + rcost(e);
vector<vector<edge>> adj; prev[e.dst] = e.rev;
graph(int n) : n(n), adj(n) { } que.push({dist[e.dst], e.dst});
}
pair<flow_type, cost_type> min_cost_max_flow(int s, int t) { }
flow_type flow = 0; }
cost_type cost = 0; }
if (prev[t] == -1) break;
for (int u = 0; u < n; ++u) // initialize
for (auto &e: adj[u]) e.flow = 0; for (int u = 0; u < n; ++u)
if (dist[u] < dist[t]) p[u] += dist[u] - dist[t];
vector<cost_type> p(n, 0);
function<flow_type(int,flow_type)> augment = [&](int u, flow_type
auto rcost = [&](edge e) { return e.cost + p[e.src] - p[e.dst]; }; cur) {

if (u == s) return cur; graph.assign(n, vector<int> ());


edge &r = adj[u][prev[u]], &e = adj[r.dst][r.rev]; }
flow_type f = augment(e.src, min(e.capacity - e.flow, cur));
e.flow += f; r.flow -= f; void addEdge(int u, int v, long long cap, long long cost, bool
return f; directed = true){
}; graph[u].push_back(e.size());
flow_type f = augment(t, INF); e.push_back(Edge(u, v, cap, cost));
flow += f;
cost += f * (p[t] - p[s]); graph[v].push_back(e.size());
} e.push_back(Edge(v, u, 0, -cost));
return {flow, cost};
} if(!directed)
}; addEdge(v, u, cap, cost, true);
}

pair<long long, long long> getMinCostFlow(int _s, int _t){


5.30 Min Cost Max Flow 2 s = _s; t = _t;
flow = 0, cost = 0;
// By zscoder
while(SPFA()){
// From problem: CF Anti Palindromize - 884F
flow += sendFlow(t, 1LL<<62);
// Thank you ZS.
}
// Works as max-cost-max-flow if the costs are considered negative
// Slower due to SPFA in some cases?
return make_pair(flow, cost);
}
struct Edge{
int u, v;
// not sure about negative cycle
long long cap, cost;
bool SPFA(){
parent.assign(n, -1);
Edge(int _u, int _v, long long _cap, long long _cost){
dist.assign(n, 1LL<<62); dist[s] = 0;
u = _u; v = _v; cap = _cap; cost = _cost;
vector<int> queuetime(n, 0); queuetime[s] = 1;
}
vector<bool> inqueue(n, 0); inqueue[s] = true;
};
queue<int> q; q.push(s);
bool negativecycle = false;
struct MinCostFlow{
int n, s, t;
long long flow, cost;
while(!q.empty() && !negativecycle){
vector<vector<int> > graph;
int u = q.front(); q.pop(); inqueue[u] = false;
vector<Edge> e;
// if cost is double, dist should be double
for(int i = 0; i < graph[u].size(); i++){
vector<long long> dist;
int eIdx = graph[u][i];
vector<int> parent;
int v = e[eIdx].v; ll w = e[eIdx].cost, cap = e[eIdx].cap;
MinCostFlow(int _n){
if(dist[u] + w < dist[v] && cap > 0){
// 0-based indexing
dist[v] = dist[u] + w;
n = _n;

parent[v] = eIdx;
int cap[MAX], flow[MAX], cost[MAX], dis[MAX];
if(!inqueue[v]){ int n, m, s, t, Q[10000010], adj[MAX], link[MAX], last[MAX],
q.push(v); from[MAX], visited[MAX];
queuetime[v]++;
inqueue[v] = true; void init(int nodes, int source, int sink){
m = 0, n = nodes, s = source, t = sink;
if(queuetime[v] == n+2){ for (int i = 0; i <= n; i++) last[i] = -1;
negativecycle = true; }
break;
} void addEdge(int u, int v, int c, int w){
} adj[m] = v, cap[m] = c, flow[m] = 0, cost[m] = +w, link[m] =
} last[u], last[u] = m++;
} adj[m] = u, cap[m] = 0, flow[m] = 0, cost[m] = -w, link[m] =
} last[v], last[v] = m++;
}
return dist[t] != (1LL<<62);
} bool spfa(){
int i, j, x, f = 0, l = 0;
long long sendFlow(int v, long long curFlow){ for (i = 0; i <= n; i++) visited[i] = 0, dis[i] = INF;
if(parent[v] == -1)
return curFlow; dis[s] = 0, Q[l++] = s;
int eIdx = parent[v]; while (f < l){
int u = e[eIdx].u; ll w = e[eIdx].cost; i = Q[f++];
for (j = last[i]; j != -1; j = link[j]){
long long f = sendFlow(u, min(curFlow, e[eIdx].cap)); if (flow[j] < cap[j]){
x = adj[j];
cost += f*w; if (dis[x] > dis[i] + cost[j]){
e[eIdx].cap -= f; dis[x] = dis[i] + cost[j], from[x] = j;
e[eIdx^1].cap += f; if (!visited[x]){
visited[x] = 1;
return f; if (f && rand() & 7) Q[--f] = x;
} else Q[l++] = x;
}; }
}
}
}
5.31 Min Cost Max Flow 3 visited[i] = 0;
}
return (dis[t] != INF);
// This gave AC for CF 813D Two Melodies but the other one was TLE
}
// By sgtlaugh
// we can return all the flow values for each edge from this function
// flow[i] contains the amount of flow in i-th edge
// vi solve()
namespace mcmf{
pair <int, int> solve(){
const int MAX = 1000010;
int i, j;
const int INF = 1 << 25;

int mincost = 0, maxflow = 0; void addB(Index i, Index j, Flow capacity = InfCapacity, Cost cost
= Cost()) {
while (spfa()){ add(i, j, capacity, cost);
int aug = INF; add(j, i, capacity, cost);
for (i = t, j = from[i]; i != s; i = adj[j ^ 1], j = from[i]){ }
aug = min(aug, cap[j] - flow[j]); pair<Cost, Flow> minimumCostMaximumFlow(Index s, Index t, Flow f =
} InfCapacity,
for (i = t, j = from[i]; i != s; i = adj[j ^ 1], j = from[i]){ bool bellmanFord = false) {
flow[j] += aug, flow[j ^ 1] -= aug; int n = g.size();
} vector<Cost> dist(n); vector<Index> prev(n); vector<Index>
maxflow += aug, mincost += aug * dis[t]; prevEdge(n);
} pair<Cost, Flow> total = make_pair(0, 0);
// edges are indexed from 0 to m vector<Cost> potential(n);
// vi ret(flow,flow+m) while (f > 0) {
// to find flow of a specific edge, we just noticed that flow[2*i] fill(dist.begin(), dist.end(), InfCost);
contains if (bellmanFord || total.second == 0) {
// the flow amount in i-th edge dist[s] = 0;
return make_pair(mincost, maxflow); for (int k = 0; k < n; k++) {
} bool update = false;
} for (int i = 0; i < n; i++)
if (dist[i] != InfCost)
for (Index ei = 0; ei
<
(Index)g[i].size();
5.32 Min Cost Max Flow with Bellman Ford ei ++) {
const Edge &e =
g[i][ei];
const int InfCost = 1e9;
if (e.capacity
<= 0)
struct MinimumCostMaximumFlow {
continue;
typedef int Index; typedef int Flow; typedef int Cost;
Index j = e.to;
static const Flow InfCapacity = 1e9;
Cost d =
struct Edge {
dist[i] +
Index to; Index rev;
e.cost;
Flow capacity; Cost cost;
if (dist[j] > d
};
) {
vector<vector<Edge> > g;
dist[j]
void init(Index n) { g.assign(n, vector<Edge>()); }
= d;
void add(Index i, Index j, Flow capacity = InfCapacity, Cost cost
prev[j]
= Cost()) {
= i;
Edge e, f; e.to = j, f.to = i; e.capacity = capacity,
prevEdge[j]
f.capacity = 0; e.cost = cost, f.cost = -cost;
= ei;
g[i].push_back(e); g[j].push_back(f);
update =
g[i].back().rev = (Index)g[j].size() - 1; g[j].back().rev
true;
= (Index)g[i].size() - 1;
}
}

} }
if (!update) break; }
} return total;
} else { }
vector<bool> vis(n); } network;
priority_queue<pair<Cost, Index> > q;
q.push(make_pair(-0, s)); dist[s] = 0;
while (!q.empty()) {
Index i = q.top().second; q.pop(); 5.33 Minimum Path Cover in DAG
if (vis[i]) continue;
vis[i] = true;
#include <bits/stdtr1c++.h>
for (Index ei = 0; ei <
(Index)g[i].size(); ei ++) {
#define MAX 505
const Edge &e = g[i][ei];
#define clr(ar) memset(ar, 0, sizeof(ar))
if (e.capacity <= 0) continue;
#define read() freopen("lol.txt", "r", stdin)
Index j = e.to; Cost d =
#define dbg(x) cout << #x << " = " << x << endl
dist[i] + e.cost +
#define ran(a, b) ((((rand() << 15) ^ rand()) % ((b) - (a) + 1)) + (a))
potential[i] -
potential[j];
using namespace std;
if (d < dist[i]) d = dist[i];
/// Minimum path cover/Maximum independent set in DAG
if (dist[j] > d) {
namespace dag{
dist[j] = d; prev[j] =
/// For transitive closure and minimum path cover with not
i; prevEdge[j] =
necessarily disjoint vertex
ei;
bool ar[MAX][MAX];
q.push(make_pair(-d,
j));
vector <int> adj[MAX];
}
bool visited[MAX], first_set[MAX], second_set[MAX];
}
int n, L[MAX], R[MAX], D[MAX], Q[MAX], dis[MAX], parent[MAX];
}
}
inline void init(int nodes){ /// Number of vertices in DAG
if (dist[t] == InfCost) break;
n = nodes;
if (!bellmanFord) for (Index i = 0; i < n; i ++)
for (int i = 0; i < MAX; i++) adj[i].clear();
potential[i] += dist[i];
}
Flow d = f; Cost distt = 0;
for (Index v = t; v != s; ) {
inline void add_edge(int u, int v){ /// 0 based index, directed edge
Index u = prev[v]; const Edge &e =
of DAG
g[u][prevEdge[v]];
adj[u].push_back(v);
d = min(d, e.capacity); distt += e.cost; v =
}
u;
}
bool dfs(int i){
f -= d; total.first += d * distt; total.second += d;
int len = adj[i].size();
for (Index v = t; v != s; v = prev[v]) {
for (int j = 0; j < len; j++){
Edge &e = g[prev[v]][prevEdge[v]];
int x = adj[i][j];
e.capacity -= d; g[e.to][e.rev].capacity +=
if (L[x] == -1 || (parent[L[x]] == i)){
d;
if (L[x] == -1 || dfs(L[x])){

L[x] = i, R[i] = x; }
return true; }
}
} void transitive_closure(){ /// Transitive closure in O(n * m)
} clr(ar);
return false; int i, j, k, l;
} for (i = 0; i < n; i++){
l = adj[i].size();
bool bfs(){ for (j = 0; j < l; j++){
clr(visited); ar[i][adj[i][j]] = true;
int i, j, x, d, f = 0, l = 0; }
adj[i].clear();
for (i = 0; i < n; i++){ }
if (R[i] == -1){
visited[i] = true; for (k = 0; k < n; k++){
Q[l++] = i, dis[i] = 0; for (i = 0; i < n; i++){
} if (ar[i][k]){
} for (j = 0; j < n; j++){
if (ar[k][j]) ar[i][j] = true;
while (f < l){ }
i = Q[f++]; }
int len = adj[i].size(); }
for (j = 0; j < len; j++){ }
x = adj[i][j], d = L[x];
if (d == -1) return true; for (i = 0; i < n; i++){
for (j = 0; j < n; j++){
else if (!visited[d]){ if (i != j && ar[i][j]){
Q[l++] = d; adj[i].push_back(j);
parent[d] = i, visited[d] = true, dis[d] = dis[i] + 1; }
} }
} }
} }
return false; /// Minimum vertex disjoint path cover in DAG. Handle isolated
} vertices appropriately
int minimum_disjoint_path_cover() {
void get_path(int i){ int i, res = 0;
first_set[i] = true; memset(L, -1, sizeof(L));
int j, x, len = adj[i].size(); memset(R, -1, sizeof(R));

for (j = 0; j < len; j++){ while (bfs()){


x = adj[i][j]; for (i = 0; i < n; i++){
if (!second_set[x] && L[x] != -1){ if (R[i] == -1 && dfs(i)) res++;
second_set[x] = true; }
get_path(L[x]); }
}

return n - res; }
} }

int minimum_path_cover(){ /// Minimum path cover in DAG. Handle


isolated vertices appropriately
transitive_closure(); 5.34 Prim MST
return minimum_disjoint_path_cover();
}
vector <ll> graph[10003], cost[10003];
/// Minimum vertex cover of DAG, equal to maximum bipartite matching
bool visited[10003];
vector <int> minimum_vertex_cover(){
ll d[10003];
int i, res = 0;
int n, m;
memset(L, -1, sizeof(L));
memset(R, -1, sizeof(R));
int minKey()
{
while (bfs()){
ll mini=INF;
for (i = 0; i < n; i++){
int minidx;
if (R[i] == -1 && dfs(i)) res++;
for (int i=1; i<=n; i++)
}
{
}
if (!visited[i] && d[i]<mini)
mini=d[i], minidx=i;
vector <int> v;
}
clr(first_set), clr(second_set);
return minidx;
for (i = 0; i < n; i++){
}
if (R[i] == -1) get_path(i);
}
ll Prim()
{
for (i = 0; i < n; i++){
FOR(i,0,10003)
if (!first_set[i] || second_set[i]) v.push_back(i);
{
}
d[i]=INF;
visited[i]=false;
return v;
}
}
d[1]=0;
/// Maximum independent set of DAG, all vertices not in minimum
for (int i=1; i<=n-1; i++)
vertex cover
{
vector <int> maximum_independent_set() {
int u=minKey();
vector <int> v = minimum_vertex_cover();
visited[u]=true;
clr(visited);
FOR(j,0,graph[u].size())
int i, len = v.size();
{
for (i = 0; i < len; i++) visited[v[i]] = true;
int v=graph[u][j];
if(!visited[v] && cost[u][j]<d[v])
vector <int> res;
d[v]=cost[u][j];
for (i = 0; i < n; i++){
}
if (!visited[i]) res.push_back(i);
}
}
return res;
ll ret=0;

FOR(j,1,n+1) Output:
{ - maximum flow value
// cout<<d[j]<<endl;
if(d[j]!=INF) Todo:
ret+=d[j]; - implement Phase II (flow network from preflow network)
} - implement GetMinCut()
return ret; */
}
// To obtain the actual flow values, look at all edges with capacity > 0
int main() // Zero capacity edges are residual edges
{
int a, b, c; template <class T> struct Edge {
scanf("%d%d", &n, &m); int from, to, index;
FOR(i,0,m) T cap, flow;
{
scanf("%d%d%d", &a, &b, &c); Edge(int from, int to, T cap, T flow, int index): from(from), to(to),
graph[a].pb(b); cap(cap), flow(flow), index(index) {}
graph[b].pb(a); };
cost[a].pb(c);
cost[b].pb(c); template <class T> struct PushRelabel {
} int n;
cout<<Prim()<<endl; vector <vector <Edge <T>>> adj;
vector <T> excess;
return 0; vector <int> dist, count;
} vector <bool> active;
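The minKey() scan makes the Prim above O(V^2 + E), which is fine for dense graphs. For sparse inputs a min-heap version runs in O(E log V); the sketch below is a hedged stand-in with an illustrative name (`prim_pq`) and a 0-indexed adjacency list, not the notebook's 1-indexed globals.

```cpp
#include <cassert>
#include <queue>
#include <vector>
using namespace std;
typedef long long ll;

// Lazy-deletion Prim: push every candidate edge, skip vertices already taken.
// g[u] holds (neighbor, weight) pairs; the graph is assumed connected.
ll prim_pq(int n, const vector<vector<pair<int, ll>>> &g) {
    vector<bool> done(n, false);
    priority_queue<pair<ll, int>, vector<pair<ll, int>>, greater<>> pq;
    pq.push({0, 0});           // (weight to reach, vertex), start at vertex 0
    ll total = 0;
    int taken = 0;
    while (!pq.empty() && taken < n) {
        auto [w, u] = pq.top(); pq.pop();
        if (done[u]) continue; // stale heap entry, already in the tree
        done[u] = true; total += w; taken++;
        for (auto [v, c] : g[u])
            if (!done[v]) pq.push({c, v});
    }
    return total;              // MST weight
}
```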
vector <vector <int>> B;
int b;
queue <int> Q;
5.35 Push Relabel 2
PushRelabel (int n): n(n), adj(n) {}
/*
void AddEdge (int from, int to, int cap) {
Implementation of highest-label push-relabel maximum flow
adj[from].push_back(Edge <T>(from, to, cap, 0, adj[to].size()));
with gap relabeling heuristic.
if (from == to) {
adj[from].back().index++;
Running time:
}
O(|V|^2|E|^{1/2})
adj[to].push_back(Edge <T>(to, from, 0, 0, adj[from].size() - 1));
Usage:
}
- add edges by AddEdge()
- GetMaxFlow(s, t) returns the maximum flow from s to t
void Enqueue (int v) {
if (!active[v] && excess[v] > 0 && dist[v] < n) {
Input:
active[v] = true;
- graph, constructed using AddEdge()
B[dist[v]].push_back(v);
- (s, t), (source, sink)
b = max(b, dist[v]);
} } else {
} Relabel(v);
}
void Push (Edge <T> &e) { }
T amt = min(excess[e.from], e.cap - e.flow); }
if (dist[e.from] == dist[e.to] + 1 && amt > T(0)) {
e.flow += amt; T GetMaxFlow (int s, int t) {
adj[e.to][e.index].flow -= amt; dist = vector <int>(n, 0), excess = vector<T>(n, 0), count =
excess[e.to] += amt; vector <int>(n + 1, 0), active = vector <bool>(n, false), B =
excess[e.from] -= amt; vector <vector <int>>(n), b = 0;
Enqueue(e.to);
} for (auto &e: adj[s]) {
} excess[s] += e.cap;
}
void Gap (int k) {
for (int v = 0; v < n; v++) if (dist[v] >= k) { count[0] = n;
count[dist[v]]--; Enqueue(s);
dist[v] = max(dist[v], n); active[t] = true;
count[dist[v]]++;
Enqueue(v); while (b >= 0) {
} if (!B[b].empty()) {
} int v = B[b].back();
B[b].pop_back();
void Relabel (int v) { active[v] = false;
count[dist[v]]--; Discharge(v);
dist[v] = n; } else {
for (auto e: adj[v]) if (e.cap - e.flow > 0) { b--;
dist[v] = min(dist[v], dist[e.to] + 1); }
} }
count[dist[v]]++; return excess[t];
Enqueue(v); }
}
T GetMinCut (int s, int t, vector <int> &cut);
void Discharge(int v) { };
for (auto &e: adj[v]) {
if (excess[v] > 0) {
Push(e);
} else { 5.36 Push Relabel
break;
}
#define sz(x) (int)(x).size()
}
struct Edge {
if (excess[v] > 0) {
int v;
if (count[dist[v]] == 1) {
ll flow, C;
Gap(dist[v]);
int rev;
}; count[dist[v]] ++;
enqueue(v);
template <int SZ> struct PushRelabel { }
vector<Edge> adj[SZ];
ll excess[SZ]; void discharge(int v) {
int dist[SZ], count[SZ+1], b = 0; for (auto &e: adj[v]) {
bool active[SZ]; if (excess[v] > 0) push(v,e);
vi B[SZ]; else break;
}
void addEdge(int u, int v, ll C) { if (excess[v] > 0) {
Edge a{v, 0, C, sz(adj[v])}; if (count[dist[v]] == 1) gap(dist[v]);
Edge b{u, 0, 0, sz(adj[u])}; else relabel(v);
adj[u].pb(a), adj[v].pb(b); }
} }
void enqueue (int v) { ll maxFlow (int s, int t) {
if (!active[v] && excess[v] > 0 && dist[v] < SZ) { for (auto &e: adj[s]) excess[s] += e.C;
active[v] = 1;
B[dist[v]].pb(v); count[0] = SZ;
b = max(b, dist[v]); enqueue(s); active[t] = 1;
}
} while (b >= 0) {
if (sz(B[b])) {
void push (int v, Edge &e) { int v = B[b].back(); B[b].pop_back();
ll amt = min(excess[v], e.C-e.flow); active[v] = 0; discharge(v);
if (dist[v] == dist[e.v]+1 && amt > 0) { } else b--;
e.flow += amt, adj[e.v][e.rev].flow -= amt; }
excess[e.v] += amt, excess[v] -= amt; return excess[t];
enqueue(e.v); }
} };
}
PushRelabel<50000> network;
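Both push-relabel versions expose only AddEdge/addEdge plus GetMaxFlow/maxFlow. When debugging small cases it helps to cross-check against an independent algorithm; the sketch below is a plain Edmonds-Karp on an adjacency matrix, a different and much slower method, written here only as a checker.

```cpp
#include <cassert>
#include <climits>
#include <queue>
#include <vector>
using namespace std;

// Edmonds-Karp: repeatedly augment along a BFS-shortest path. O(V * E^2),
// adequate only for tiny graphs used to validate a faster implementation.
int max_flow_ek(vector<vector<int>> cap, int s, int t) {
    int n = cap.size(), flow = 0;
    while (true) {
        vector<int> par(n, -1);
        par[s] = s;
        queue<int> q; q.push(s);
        while (!q.empty() && par[t] == -1) {
            int u = q.front(); q.pop();
            for (int v = 0; v < n; v++)
                if (par[v] == -1 && cap[u][v] > 0) { par[v] = u; q.push(v); }
        }
        if (par[t] == -1) return flow;     // no augmenting path remains
        int aug = INT_MAX;
        for (int v = t; v != s; v = par[v]) aug = min(aug, cap[par[v]][v]);
        for (int v = t; v != s; v = par[v]) {
            cap[par[v]][v] -= aug;         // consume forward capacity
            cap[v][par[v]] += aug;         // add backward residual
        }
        flow += aug;
    }
}
```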
void gap (int k) {
FOR(v,1,SZ+1) if (dist[v] >= k) {
count[dist[v]] --;
dist[v] = SZ; 5.37 SCC Kosaraju
count[dist[v]] ++;
enqueue(v);
}
// Kosaraju’s strongly connected component
}
//
// Description:
void relabel (int v) {
// For a graph G = (V, E), u and v are strongly connected if
count[dist[v]] --; dist[v] = SZ;
// there are paths u -> v and v -> u. This defines an equivalent
for (auto e: adj[v]) if (e.C > e.flow) dist[v] = min(dist[v],
// relation, and its equivalent class is called a strongly
dist[e.v] + 1);
// connected component.
//
// Algorithm: vector<vector<int>> strongly_connected_components() { // kosaraju
// Kosaraju’s algorithm performs DFS on G and rev(G). vector<int> ord, visited(n);
// First DFS finds topological ordering of SCCs, and vector<vector<int>> scc;
// the second DFS extracts components. function<void(int, vector<vector<int>>&, vector<int>&)> dfs
// = [&](int u, vector<vector<int>> &adj, vector<int> &out) {
// Complexity: visited[u] = true;
// O(n + m) for (int v : adj[u])
// if (!visited[v]) dfs(v, adj, out);
// Verified: out.push_back(u);
// SPOJ 6818 };
// for (int u = 0; u < n; ++u)
// References: if (!visited[u]) dfs(u, adj, ord);
// A. V. Aho, J. E. Hopcroft, and J. D. Ullman (1983): fill(all(visited), false);
// Data Structures and Algorithms, for (int i = n - 1; i >= 0; --i)
// Addison-Wesley. if (!visited[ord[i]])
// scc.push_back({}), dfs(ord[i], rdj, scc.back());
#include <iostream> return scc;
#include <vector> }
#include <cstdio> };
#include <cstdlib>
#include <map> int main() {
#include <set> int n, m;
#include <cmath> scanf("%d %d", &n, &m);
#include <cstring> graph g(n);
#include <functional> for (int k = 0; k < m; ++k) {
#include <algorithm> int i, j;
#include <unordered_map> scanf("%d %d", &i, &j);
#include <unordered_set> g.add_edge(i - 1, j - 1);
}
using namespace std;
vector<vector<int>> scc = g.strongly_connected_components();
#define fst first vector<int> outdeg(scc.size());
#define snd second vector<int> id(n);
#define all(c) ((c).begin()), ((c).end()) for (int i = 0; i < scc.size(); ++i)
for (int u : scc[i]) id[u] = i;
for (int u = 0; u < n; ++u)
struct graph { for (int v : g.adj[u])
int n; if (id[u] != id[v]) ++outdeg[id[u]];
vector<vector<int>> adj, rdj;
graph(int n) : n(n), adj(n), rdj(n) { } if (count(all(outdeg), 0) != 1) {
void add_edge(int src, int dst) { printf("0\n");
adj[src].push_back(dst); } else {
rdj[dst].push_back(src); int i = find(all(outdeg), 0) - outdeg.begin();
} sort(all(scc[i]));
printf("%d\n%d", scc[i].size(), scc[i][0] + 1); scc.push_back(vector<int>());
for (int j = 1; j < scc[i].size(); ++j) while(st.top() != u)
printf(" %d", scc[i][j] + 1); {
printf("\n"); scc[scc.size() - 1].push_back(st.top());
} in_stack[st.top()] = false;
} st.pop();
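The two-pass idea documented above (first DFS records finish order, second DFS peels components off the reversed graph) can be exercised on a toy graph. This standalone `kosaraju()` mirrors the graph struct's logic but takes an edge list; the function name and signature are illustrative.

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <vector>
using namespace std;

// Kosaraju on a tiny graph: order vertices by DFS finish time on adj,
// then extract components from rdj in reverse finish order.
vector<vector<int>> kosaraju(int n, const vector<pair<int, int>> &edges) {
    vector<vector<int>> adj(n), rdj(n), scc;
    for (auto [u, v] : edges) adj[u].push_back(v), rdj[v].push_back(u);
    vector<int> ord;
    vector<bool> vis(n, false);
    function<void(int, vector<vector<int>>&, vector<int>&)> dfs =
        [&](int u, vector<vector<int>> &g, vector<int> &out) {
            vis[u] = true;
            for (int v : g[u]) if (!vis[v]) dfs(v, g, out);
            out.push_back(u);   // post-order: u finishes here
        };
    for (int u = 0; u < n; u++) if (!vis[u]) dfs(u, adj, ord);
    fill(vis.begin(), vis.end(), false);
    for (int i = n - 1; i >= 0; i--)
        if (!vis[ord[i]]) scc.push_back({}), dfs(ord[i], rdj, scc.back());
    return scc;
}
```

The cycle 0->1->2->0 with a tail edge 2->3 yields two components, {0,1,2} and {3}.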
}
scc[scc.size() - 1].push_back(u);
5.38 SCC Tarjan in_stack[u] = false;
st.pop();
}
stack<int> st;
}
vector<vector<int> > scc;
int low[MAX], disc[MAX], comp[MAX];
int tarjan()
int dfs_time;
{
bool in_stack[MAX];
memset(comp, -1, sizeof(comp));
memset(disc, -1, sizeof(disc));
vi graph[MAX];
memset(low, -1, sizeof(low));
int n; // node count indexed from 1
memset(in_stack, 0, sizeof(in_stack));
dfs_time = 0;
void dfs(int u)
{
while(!st.empty())
low[u] = dfs_time;
st.pop();
disc[u] = dfs_time;
dfs_time++;
for(int i = 1; i <= n; i++)
if(disc[i] == -1)
in_stack[u] = true;
dfs(i);
st.push(u);
int sz = scc.size();
int sz = graph[u].size(), v;
for(int i = 0; i < sz; i++)
for(int i = 0; i < sz; i++)
for(int j = 0; j < (int)scc[i].size(); j++)
{
comp[scc[i][j]] = i;
v = graph[u][i];
return sz;
if(disc[v] == -1)
}
{
dfs(v);
low[u] = min(low[u], low[v]);
}
else if(in_stack[v] == true)
5.39 SPFA
low[u] = min(low[u], disc[v]);
} int dist[MAX], inq[MAX];
void spfa(int source)
if(low[u] == disc[u]) {
{ FOR(i,1,n+1) inq[i]=false, dist[i]=inf; // or INF
dist[source]=0; }
queue<int> Q; void cleanup(vi &vtx)
Q.push(source); {
inq[source]=true; for(auto it: vtx)
{
while(!Q.empty()) tree[it].clear();
{ }
int u=Q.front(); }
Q.pop(); bool isancestor(int u, int v) // Check if u is an ancestor of v
FOR(j,0,graph[u].size()) {
{ return (tin[u]<=tin[v]) && (tout[v]<=tout[u]);
int v=graph[u][j]; }
// building the auxiliary tree. Nodes are in vtx
if(dist[u]+cost[u][j]<dist[v]) void sortbyEntry(vi &vtx)
{ {
dist[v]=dist[u]+cost[u][j]; // Sort by entry time
if(!inq[v]) sort(begin(vtx), end(vtx), [](int x, int y){
{ return tin[x]<tin[y];
Q.push(v); });
inq[v]=true; }
} void release(vi &vtx)
} {
} // removing duplicated nodes
inq[u]=false; SORT(vtx);
} vtx.erase(unique(begin(vtx),end(vtx)),end(vtx));
} }
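The SPFA above works over global graph/cost arrays. A self-contained sketch of the same queue-based Bellman-Ford idea follows, with an illustrative adjacency-list signature; it tolerates negative edges but not negative cycles reachable from the source.

```cpp
#include <cassert>
#include <queue>
#include <vector>
using namespace std;
const long long INF = 1e18;

// SPFA: only re-relax out of vertices whose distance recently improved;
// inq[] prevents a vertex from sitting in the queue twice.
vector<long long> spfa(int n, int src,
                       const vector<vector<pair<int, long long>>> &g) {
    vector<long long> dist(n, INF);
    vector<bool> inq(n, false);
    queue<int> q;
    dist[src] = 0; q.push(src); inq[src] = true;
    while (!q.empty()) {
        int u = q.front(); q.pop(); inq[u] = false;
        for (auto [v, w] : g[u])
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                if (!inq[v]) { q.push(v); inq[v] = true; }
            }
    }
    return dist;
}
```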
void buildTree(vi &vtx)
{
stack<int> st;
5.40 Tree Construction with Specific Vertices st.push(vtx[0]);
FOR(i,1,vtx.size())
{
/* This code builds an auxiliary tree from the given vertices to do
while(!isancestor(st.top(),vtx[i]))
further operations. Example problem: CF 613D */
st.pop();
tree[st.top()].pb(vtx[i]);
void dfs(int u, int p=0, int d=0)
st.push(vtx[i]);
{
}
tin[u]=++t;
}
parent[u][0]=p;
int work(vi &vtx)
level[u]=d;
{
for(auto v: graph[u])
sortbyEntry(vtx);
{
int sz=vtx.size();
if(v==p) continue;
// Finding all the ancestors, there are few of them
dfs(v,u,d+1);
FOR(i,0,sz-1)
}
{
tout[u]=t;
int anc=query(vtx[i],vtx[i+1]); pii t=Q.top(); Q.pop();
vtx.pb(anc); int u=t.first, costU=-t.second;
} // Since the actual cost was negated.
release(vtx);
sortbyEntry(vtx); FOR(j,0,Graph[u].size())
buildTree(vtx); {
// Do necessary operation on the built auxiliary tree int v=Graph[u][j];
cleanup(vtx);
// return result // prnt(v); prnt(d[v].size());
}
// Have we already got k shortest paths? Or is the
longest path can be made better?
if(d[v].size()<k || d[v].top()>costU+Cost[u][j])
5.41 kth Shortest Path Length {
int temp=costU+Cost[u][j];
d[v].push(temp);
int n, m, x, y, k, a, b, c;
Q.push(MP(v,-temp));
vi Graph[103], Cost[103];
}
vector<priority_queue<int> > d(103);
if(d[v].size()>k) d[v].pop();
priority_queue < pii > Q;
// If we have more than k shortest path for the current node, we
can pop
void goDijkstra()
// the worst ones.
{
}
}
// Here, elements are sorted in decreasing order of the first
elements
if(d[y].size()<k) prnt(-1);
// of the pairs and then the second elements if equal first
// We have not found k shortest path for our destination.
element.
else prnt(d[y].top());
}
// d[i] is the priority_queue of the node i where the best k path
length
int main()
// will be stored in decreasing order. So, d[i].top() has the
{
longest of the
// ios_base::sync_with_stdio(0);
// first k shortest path.
// cin.tie(NULL); cout.tie(NULL);
// freopen("in.txt","r",stdin);
d[x].push(0);
Q.push(MP(x,0));
while(scanf("%d%d", &n, &m) && n+m)
// Q contains the nodes in the increasing order of their cost
{
// Since the priority_queue sorts the pairs in decreasing order of
scanf("%d%d%d", &x, &y, &k);
their
// first element and then second element, to sort it in increasing
FOR(i,0,m)
order
{
// we will negate the cost and push it.
scanf("%d%d%d", &a, &b, &c);
while(!Q.empty())
Graph[a].pb(b);
{
Cost[a].pb(c); ret = chinese_remainder_theorem(ret.second, ret.first,
} x[i], a[i]);
if (ret.second == -1) break;
goDijkstra(); }
return ret;
FOR(i,0,103) Graph[i].clear(), Cost[i].clear(); }
FOR(i,0,103)
{ // computes x and y such that ax + by = c; on failure, x = y =-1
while(!d[i].empty()) d[i].pop(); void linear_diophantine(int a, int b, int c, int &x, int &y) {
} int d = gcd(a, b);
if (c % d) {
while(!Q.empty()) Q.pop(); x = y = -1;
} } else {
x = c / d * mod_inverse(a / d, b / d);
y = (c - a * x) / b;
return 0; }
} }
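The per-node heap idea commented above, in compact standalone form: each node keeps a max-heap of its k best path costs, and a global min-heap expands states in increasing cost order (so no cost negation trick is needed). `kth_shortest` and its signature are illustrative; edge weights are assumed non-negative.

```cpp
#include <cassert>
#include <queue>
#include <vector>
using namespace std;

// Returns the k-th shortest path length from s to t, or -1 if fewer
// than k paths exist. d[v] is a max-heap of the best k costs seen at v.
long long kth_shortest(int n, int s, int t, int k,
                       const vector<vector<pair<int, long long>>> &g) {
    vector<priority_queue<long long>> d(n);
    priority_queue<pair<long long, int>, vector<pair<long long, int>>,
                   greater<>> q;          // min-heap of (cost, node)
    d[s].push(0); q.push({0, s});
    while (!q.empty()) {
        auto [c, u] = q.top(); q.pop();
        for (auto [v, w] : g[u])
            if ((int)d[v].size() < k || d[v].top() > c + w) {
                d[v].push(c + w);
                if ((int)d[v].size() > k) d[v].pop();  // keep only k best
                q.push({c + w, v});
            }
    }
    return (int)d[t].size() < k ? -1 : d[t].top();
}
```

With two parallel edges 0->1 of cost 1 and 2 plus 1->2 of cost 1, the 2nd shortest 0-to-2 path costs 3.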
6 Math

6.1 CRT Diophantine

6.2 Euler Phi

int phi[MAX];
void phi()
for (int i = 1; i < MAX; i++) phi[i] = i;
// Chinese remainder theorem (special case): find z such that for (int i = 2; i < MAX; i++)
// z % x = a, z % y = b. Here, z is unique modulo M = lcm(x,y). {
// Return (z,M). On failure, M = -1. if (phi[i] == i)
PII chinese_remainder_theorem(int x, int a, int y, int b) { {
int s, t; for (int j = i; j < MAX; j += i)
int d = extended_euclid(x, y, s, t); {
if (a % d != b % d) return make_pair(0, -1); phi[j] /= i;
return make_pair(mod(s * b * x + t * a * y, x * y) / d, x * y / d); phi[j] *= (i - 1);
} }
}
// Chinese remainder theorem: find z such that }
// z % x[i] = a[i] for all i. Note that the solution is }
// unique modulo M = lcm_i (x[i]). Return (z,M). On
// failure, M = -1. Note that we do not require the a[i]’s
// to be relatively prime.
6.3 FFT 1
PII chinese_remainder_theorem(const VI &x, const VI &a) {
PII ret = make_pair(a[0], x[0]); const int MAXN = (1 << 21); // May not need to be changed
for (int i = 1; i < x.size(); i++) {
struct complex_base bit_rev[i] = (bit_rev[i >> 1] >> 1) | ((i & 1) << (lg - 1));
{ if (bit_rev[i] < i) swap(a[i], a[bit_rev[i]]);
double x, y; }
complex_base(double _x = 0, double _y = 0) { x = _x; y = _y; }
friend complex_base operator-(const complex_base &a, const for (int len = 2; len <= n; len <<= 1)
complex_base &b) { return complex_base(a.x - b.x, a.y - b.y); } {
friend complex_base operator+(const complex_base &a, const double ang = -2 * PI / len;
complex_base &b) { return complex_base(a.x + b.x, a.y + b.y); } complex_base w(1, 0), wn(cos(ang), sin(ang));
friend complex_base operator*(const complex_base &a, const
complex_base &b) { return complex_base(a.x * b.x - a.y * b.y, a.y for (int j = 0; j < (len >> 1); j++, w = w * wn)
* b.x + b.y * a.x); } for (int i = 0; i < n; i += len)
friend void operator/=(complex_base &a, const double &P) { a.x /= P; {
a.y /= P; } complex_base u = a[i + j], v = w * a[i + j + (len >> 1)];
}; a[i + j] = u + v;
a[i + j + (len >> 1)] = u - v;
int bit_rev[MAXN]; }
}
void fft(complex_base *a, int lg)
{ for (int i = 0; i < n; i++)
int n = (1 << lg); a[i] /= n;
for (int i = 1; i < n; i++) }
{
bit_rev[i] = (bit_rev[i >> 1] >> 1) | ((i & 1) << (lg - 1)); complex_base A[MAXN], B[MAXN];
if (bit_rev[i] < i) swap(a[i], a[bit_rev[i]]);
} vector<ll> mult(vector<ll> a, vector<ll> b)
{
for (int len = 2; len <= n; len <<= 1) if (a.size() * b.size() <= 256)
{ {
double ang = 2 * PI / len; vector<ll> ans(a.size() + b.size(), 0);
complex_base w(1, 0), wn(cos(ang), sin(ang)); for (int i = 0; i < (int)a.size(); i++)
for (int j = 0; j < (len >> 1); j++, w = w * wn) for (int j = 0; j < (int)b.size(); j++)
for (int i = 0; i < n; i += len) ans[i + j] += a[i] * b[j];
{
complex_base u = a[i + j], v = w * a[i + j + (len >> 1)]; return ans;
a[i + j] = u + v; }
a[i + j + (len >> 1)] = u - v;
} int lg = 0; while ((1 << lg) < (a.size() + b.size())) ++lg;
} for (int i = 0; i < (1 << lg); i++) A[i] = B[i] = complex_base(0, 0);
} for (int i = 0; i < (int)a.size(); i++) A[i] = complex_base(a[i], 0);
for (int i = 0; i < (int)b.size(); i++) B[i] = complex_base(b[i], 0);
void inv_fft(complex_base *a, int lg)
{ fft(A, lg); fft(B, lg);
int n = (1 << lg); for (int i = 0; i < (1 << lg); i++)
for (int i = 1; i < n; i++) A[i] = A[i] * B[i];
{ inv_fft(A, lg);
*pu += t;
vector<ll> ans(a.size() + b.size(), 0); }
for (int i = 0; i < (int)ans.size(); i++) }
ans[i] = (int)(A[i].x + 0.5); }
return ans; if (invert) FOR(i, 0, n) a[i] /= n;
} }
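A quick correctness check for FFT multiplication: the product (1 + 2x)(3 + 4x) must come out as 3 + 10x + 8x^2. The recursive std::complex FFT below is a slow, illustrative stand-in for the iterative bit-reversal version above; the helper names are assumptions, not the notebook's API.

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <vector>
using namespace std;
typedef complex<double> cd;
const double PI = acos(-1.0);

// Recursive radix-2 FFT; inv=true applies the inverse transform
// (conjugate roots plus the 1/n scaling, folded in per level).
void fft(vector<cd> &a, bool inv) {
    int n = a.size();
    if (n == 1) return;
    vector<cd> e(n / 2), o(n / 2);
    for (int i = 0; 2 * i < n; i++) e[i] = a[2 * i], o[i] = a[2 * i + 1];
    fft(e, inv); fft(o, inv);
    double ang = 2 * PI / n * (inv ? -1 : 1);
    cd w(1), wn(cos(ang), sin(ang));
    for (int i = 0; 2 * i < n; i++) {
        a[i] = e[i] + w * o[i];
        a[i + n / 2] = e[i] - w * o[i];
        if (inv) a[i] /= 2, a[i + n / 2] /= 2;
        w *= wn;
    }
}

vector<long long> multiply(const vector<long long> &a, const vector<long long> &b) {
    vector<cd> fa(a.begin(), a.end()), fb(b.begin(), b.end());
    int n = 1;
    while (n < (int)(a.size() + b.size())) n <<= 1;
    fa.resize(n); fb.resize(n);
    fft(fa, false); fft(fb, false);
    for (int i = 0; i < n; i++) fa[i] *= fb[i];   // pointwise product
    fft(fa, true);
    vector<long long> res(a.size() + b.size() - 1);
    for (int i = 0; i < (int)res.size(); i++) res[i] = llround(fa[i].real());
    return res;
}
```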
void calcRev(int n, int logn) {
FOR(i, 0, n) {
rev[i] = 0;
6.4 FFT 2 FOR(j, 0, logn) if (i & (1 << j)) rev[i] |= 1 << (logn - 1
- j);
}
// Tested:
}
// - FBHC 2016 R3 - Problem E
void mulpoly(int a[], int b[], ll c[], int na, int nb, int &n) {
// - https://open.kattis.com/problems/polymul2 (need long double)
int l = max(na, nb), logn = 0;
// Note:
for (n = 1; n < l; n <<= 1) ++logn;
// - a[2] will have size <= 2*n
n <<= 1; ++logn;
// - When rounding, careful with negative numbers:
calcRev(n, logn);
int my_round(double x) {
if (x < 0) return -my_round(-x);
FOR(i, 0, n) fa[i] = fb[i] = cplex(0);
return (int) (x + 1e-3);
FOR(i, 0, na) fa[i] = cplex(a[i]);
}
FOR(i, 0, nb) fb[i] = cplex(b[i]);
const int N = 1 << 18;
typedef complex<long double> cplex; // may need long double
fft(fa, n, false);
int rev[N];
fft(fb, n, false);
cplex wlen_pw[N], fa[N], fb[N];
FOR(i, 0, n) fa[i] *= fb[i];
void fft(cplex a[], int n, bool invert) {
fft(fa, n, true);
for (int i = 0; i < n; ++i) if (i < rev[i]) swap (a[i], a[rev[i]]);
// if everything is double/long double, we don’t add 0.5
FOR(i, 0, n) c[i] = (ll)(fa[i].real() + 0.5);
for (int len = 2; len <= n; len <<= 1) {
}
double alpha = 2 * PI / len * (invert ? -1 : +1);
// call
int len2 = len >> 1;
mulpoly(first_poly,second_poly,output,size_first,size_second,size_output)
wlen_pw[0] = cplex(1, 0);
cplex wlen(cos(alpha), sin(alpha));
for (int i = 1; i < len2; ++i) wlen_pw[i] = wlen_pw[i - 1]
* wlen;
6.5 FFT Extended
for (int i = 0; i < n; i += len) { #include <bits/stdtr1c++.h>
cplex t, *pu = a + i, *pv = a + i + len2,
*pu_end = a + i + len2, *pw = wlen_pw; #define MAXN 1048576 /// 2 * MAX at least
for (; pu != pu_end; ++pu, ++pv, ++pw) { using namespace std;
t = *pv * *pw;
*pv = *pu - t; /// Change long double to double if not required
namespace fft { int lim = 1 << i;


int len, last = -1, step = 0, rev[MAXN]; for (int j = lim >> 1; j < lim; j++) {
long long C[MAXN], D[MAXN], P[MAXN], Q[MAXN]; dp[2 * j] = dp[j], inv[2 * j] = inv[j];
struct complx { inv[2 * j + 1] = inv[j] * inv_mul;
long double real, img; dp[2 * j + 1] = dp[j] * mul;
inline complx() { }
real = img = 0.0; }
} }
inline complx conjugate() {
return complx(real, -img); if (last != len) {
} last = len;
inline complx(long double x) { int bit = (32 - __builtin_clz(len) - (__builtin_popcount(len) ==
real = x, img = 0.0; 1));
} for (int i = 0; i < len; i++) rev[i] = (rev[i >> 1] >> 1) + ((i &
inline complx(long double x, long double y) { 1) << (bit - 1));
real = x, img = y; }
} }
inline complx operator + (complx other) { /// Fast Fourier Transformation, iterative divide and conquer
return complx(real + other.real, img + other.img); void transform(complx *in, complx *out, complx* ar) {
} for (int i = 0; i < len; i++) out[i] = in[rev[i]];
inline complx operator - (complx other) { for (int k = 1; k < len; k <<= 1) {
return complx(real - other.real, img - other.img); for (int i = 0; i < len; i += (k << 1)) {
} for (int j = 0; j < k; j++) {
inline complx operator * (complx other) { complx z = out[i + j + k] * ar[j + k];
return complx((real * other.real) - (img * other.img), (real * out[i + j + k] = out[i + j] - z;
other.img) + (img * other.real)); out[i + j] = out[i + j] + z;
} }
} u[MAXN], v[MAXN], f[MAXN], g[MAXN], dp[MAXN], inv[MAXN]; }
}
void build(int& a, long long* A, int& b, long long* B) { }
while (a > 1 && A[a - 1] == 0) a--; /// Fast Fourier Transformation, iterative divide and conquer unrolled
while (b > 1 && B[b - 1] == 0) b--; and optimized
void transform_unrolled(complx *in, complx *out, complx* ar) {
len = 1 << (32 - __builtin_clz(a + b) - (__builtin_popcount(a + b) == for (int i = 0; i < len; i++) out[i] = in[rev[i]];
1)); for (int k = 1; k < len; k <<= 1) {
for (int i = a; i < len; i++) A[i] = 0; for (int i = 0; i < len; i += (k << 1)) {
for (int i = b; i < len; i++) B[i] = 0; complx z, *a = out + i, *b = out + i + k, *c = ar + k;
if (k == 1) {
if (!step++) { z = (*b) * (*c);
dp[1] = inv[1] = complx(1); *b = *a - z, *a = *a + z;
for (int i = 1; (1 << i) < MAXN; i++) { }
double theta = (2.0 * acos(0.0)) / (1 << i);
complx mul = complx(cos(theta), sin(theta)); for (int j = 0; j < k && k > 1; j += 2, a++, b++, c++) {
complx inv_mul = complx(cos(-theta), sin(-theta)); z = (*b) * (*c);
*b = *a - z, *a = *a + z; int flag = equals(a, A, b, B);


a++, b++, c++; for (int i = 0; i < len; i++) A[i] %= mod, B[i] %= mod;
z = (*b) * (*c); for (int i = 0; i < len; i++) u[i] = complx(A[i] & 32767, A[i] >> 15);
*b = *a - z, *a = *a + z; for (int i = 0; i < len; i++) v[i] = complx(B[i] & 32767, B[i] >> 15);
}
} transform_unrolled(u, f, dp);
} for (int i = 0; i < len; i++) g[i] = f[i];
} if (!flag) transform_unrolled(v, g, dp);
bool equals(int a, long long* A, int b, long long* B) {
if (a != b) return false; for (int i = 0; i < len; i++) {
for (a = 0; a < b && A[a] == B[a]; a++) {} int j = (len - 1) & (len - i);
return (a == b); complx c1 = f[j].conjugate(), c2 = g[j].conjugate();
}
/// Square of a polynomial complx a1 = (f[i] + c1) * complx(0.5, 0);
int square(int a, long long* A) { complx a2 = (f[i] - c1) * complx(0, -0.5);
build(a, A, a, A); complx b1 = (g[i] + c2) * complx(0.5 / len, 0);
for (int i = 0; i < len; i++) u[i] = complx(A[i], 0); complx b2 = (g[i] - c2) * complx(0, -0.5 / len);
transform_unrolled(u, f, dp); v[j] = a1 * b2 + a2 * b1;
for (int i = 0; i < len; i++) u[i] = f[i] * f[i]; u[j] = a1 * b1 + a2 * b2 * complx(0, 1);
transform_unrolled(u, f, inv); }
for (int i = 0; i < len; i++) A[i] = (f[i].real / (long double)len) + transform_unrolled(u, f, dp);
0.5; transform_unrolled(v, g, dp);
return a + a - 1;
} long long x, y, z;
/// Multiplies two polynomials A and B and return the coefficients of for (int i = 0; i < len; i++) {
their product in A x = f[i].real + 0.5, y = g[i].real + 0.5, z = f[i].img + 0.5;
/// Function returns degree of the polynomial A * B A[i] = (x + ((y % mod) << 15) + ((z % mod) << 30)) % mod;
int multiply(int a, long long* A, int b, long long* B) { }
if (equals(a, A, b, B)) return square(a, A); /// Optimization return a + b - 1;
}
build(a, A, b, B); /// Multiplies two polynomials where intermediate and final values fits
for (int i = 0; i < len; i++) u[i] = complx(A[i], B[i]); in long long
transform_unrolled(u, f, dp); int long_multiply(int a, long long* A, int b, long long* B) {
for (int i = 0; i < len; i++) { int mod1 = 1.5e9;
int j = (len - 1) & (len - i); int mod2 = mod1 + 1;
u[i] = (f[j] * f[j] - f[i].conjugate() * f[i].conjugate()) * for (int i = 0; i < a; i++) C[i] = A[i];
complx(0, -0.25 / len); for (int i = 0; i < b; i++) D[i] = B[i];
}
transform_unrolled(u, f, dp); mod_multiply(a, A, b, B, mod1);
for (int i = 0; i < len; i++) A[i] = f[i].real + 0.5; mod_multiply(a, C, b, D, mod2);
return a + b - 1; for (int i = 0; i < len; i++) {
} A[i] = A[i] + (C[i] - A[i] + (long long)mod2) * (long long)mod1 %
/// Modular multiplication mod2 * mod1;
int mod_multiply(int a, long long* A, int b, long long* B, int mod) { }
build(a, A, b, B); return a + b - 1;
} /// pattern = "1001101001101110101101000"


int build_convolution(int n, long long* A, long long* B) { /// Sum of values in hamming distance vector = 321
int i, m, d = 0; vector <int> hamming_distance(const char* str, const char* pattern) {
for (i = 0; i < n; i++) Q[i] = Q[i + n] = B[i]; int n = strlen(str), m = strlen(pattern);
for (i = 0; i < n; i++) P[i] = A[i], P[i + n] = 0; for (int i = 0; i < n; i++) P[i] = Q[i] = 0;
n *= 2, m = 1 << (32 - __builtin_clz(n) - (__builtin_popcount(n) == for (int i = 0; i < n; i++) P[i] = str[i] == '1' ? 1 : -1;
1)); for (int i = 0, j = m - 1; j >= 0; i++, j--) Q[i] = pattern[j] == '1'
for (i = n; i < m; i++) P[i] = Q[i] = 0; ? 1 : -1;
return n;
} vector <int> res;
/*** fft::multiply(n, P, m, Q);
Computes the circular convolution of A and B, denoted A * B, in C for (int i = 0; (i + m) <= n; i++) {
A and B must be of equal size, if not normalize before calling res.push_back(m - ((P[i + m - 1] + m) >> 1));
function }
Example to demonstrate convolution for n = 5: return res;
}
c0 = a0b0 + a1b4 + a2b3 + a3b2 + a4b1 }
c1 = a0b1 + a1b0 + a2b4 + a3b3 + a4b2
...
c4 = a0b4 + a1b3 + a2b2 + a3b1 + a4b0
Note: If linear convolution is required, pad with zeros 6.6 FFT Modulo
appropriately, as in multiplication
***/
// Caution: Got TLE in divide and conquer + FFT problem
/// Returns the convolution of A and B in A
void convolution(int n, long long* A, long long* B) {
template<class T, class T2> inline void chkmax(T &x, const T2 &y) { if (x
int len = build_convolution(n, A, B);
< y) x = y; }
multiply(len, P, len, Q);
template<class T, class T2> inline void chkmin(T &x, const T2 &y) { if (x
for (int i = 0; i < n; i++) A[i] = P[i + n];
> y) x = y; }
}
const int MAXN = (1 << 19);
/// Modular convolution
int mod=1009;
void mod_convolution(int n, long long* A, long long* B, int mod) {
int len = build_convolution(n, A, B);
inline void addmod(int& x, int y, int mod) { (x += y) >= mod && (x -=
mod_multiply(len, P, len, Q, mod);
mod); }
for (int i = 0; i < n; i++) A[i] = P[i + n];
inline int mulmod(int x, int y, int mod) { return x * 1ll * y % mod; }
}
/// Convolution in long long
struct complex_base
void long_convolution(int n, long long* A, long long* B) {
{
int len = build_convolution(n, A, B);
long double x, y;
long_multiply(len, P, len, Q);
complex_base(long double _x = 0, long double _y = 0) { x = _x; y =
for (int i = 0; i < n; i++) A[i] = P[i + n];
_y; }
}
friend complex_base operator-(const complex_base &a, const
/// Hamming distance vector of B with every substring of length |pattern|
complex_base &b) { return complex_base(a.x - b.x, a.y - b.y);
in str
}
/// str and pattern consist of only '1' and '0'
friend complex_base operator+(const complex_base &a, const
/// str = "01111000010011111111110010001101000100011110101111"
complex_base &b) { return complex_base(a.x + b.x, a.y + b.y);
} for (int len = 2; len <= n; len <<= 1)


friend complex_base operator*(const complex_base &a, const {
complex_base &b) { return complex_base(a.x * b.x - a.y * b.y, long double ang = -2 * PI / len;
a.y * b.x + b.y * a.x); } complex_base w(1, 0), wn(cos(ang), sin(ang));
friend void operator/=(complex_base &a, const long double &P) {
a.x /= P; a.y /= P; } for (int j = 0; j < (len >> 1); j++, w = w * wn)
}; for (int i = 0; i < n; i += len)
{
int bit_rev[MAXN]; complex_base u = a[i + j], v = w * a[i + j +
(len >> 1)];
void fft(complex_base *a, int lg) a[i + j] = u + v;
{ a[i + j + (len >> 1)] = u - v;
int n = (1 << lg); }
for (int i = 1; i < n; i++) }
{
bit_rev[i] = (bit_rev[i >> 1] >> 1) | ((i & 1) << (lg - for (int i = 0; i < n; i++)
1)); a[i] /= n;
if (bit_rev[i] < i) swap(a[i], a[bit_rev[i]]); }
}
complex_base A[MAXN], B[MAXN];
for (int len = 2; len <= n; len <<= 1)
{ vector<int> mult(const vector<int> &a, const vector<int> &b)
long double ang = 2 * PI / len; {
complex_base w(1, 0), wn(cos(ang), sin(ang)); if (a.size() * b.size() <= 128)
for (int j = 0; j < (len >> 1); j++, w = w * wn) {
for (int i = 0; i < n; i += len) vector<int> ans(a.size() + b.size(), 0);
{ for (int i = 0; i < (int)a.size(); i++)
complex_base u = a[i + j], v = w * a[i + j + for (int j = 0; j < (int)b.size(); j++)
(len >> 1)]; ans[i + j] = (ans[i + j] + a[i] * 1ll *
a[i + j] = u + v; b[j]) % mod;
a[i + j + (len >> 1)] = u - v;
} return ans;
} }
}
int lg = 0; while ((1 << lg) < (a.size() + b.size())) ++lg;
void inv_fft(complex_base *a, int lg) for (int i = 0; i < (1 << lg); i++) A[i] = B[i] = complex_base(0,
{ 0);
int n = (1 << lg); for (int i = 0; i < (int)a.size(); i++) A[i] = complex_base(a[i],
for (int i = 1; i < n; i++) 0);
{ for (int i = 0; i < (int)b.size(); i++) B[i] = complex_base(b[i],
bit_rev[i] = (bit_rev[i >> 1] >> 1) | ((i & 1) << (lg - 0);
1));
if (bit_rev[i] < i) swap(a[i], a[bit_rev[i]]); fft(A, lg); fft(B, lg);
} for (int i = 0; i < (1 << lg); i++)
A[i] = A[i] * B[i];
inv_fft(A, lg); for (int i = 0; i < (int)mid.size(); i++)


{
vector<int> ans(a.size() + b.size(), 0); addmod(mid[i], -a0b0[i] + mod, mod);
for (int i = 0; i < (int)ans.size(); i++) addmod(mid[i], -a1b1[i] + mod, mod);
ans[i] = (int64_t)(A[i].x + 0.5) % mod; }
return ans; vector<int> res = a0b0;
} for (int i = 0; i < (int)res.size(); i++)
addmod(res[i], mulmod(base, mid[i], mod), mod);
vector<int> mult_mod(const vector<int> &a, const vector<int> &b)
{ base = mulmod(base, base, mod);
/// Thanks pavel.savchenkov for (int i = 0; i < (int)res.size(); i++)
addmod(res[i], mulmod(base, a1b1[i], mod), mod);
// a = a0 + sqrt(MOD) * a1
// a = a0 + base * a1 return res;
int base = (int)sqrtl(mod); }
vector<int> a0(a.size()), a1(a.size());
for (int i = 0; i < (int)a.size(); i++)
{ 6.7 FFT by XraY
a0[i] = a[i] % base;
a1[i] = a[i] / base;
typedef long double ld;
}
#define mp make_pair
#define eprintf(...) fprintf(stderr, __VA_ARGS__)
vector<int> b0(b.size()), b1(b.size());
#define sz(x) ((int)(x).size())
for (int i = 0; i < (int)b.size(); i++)
{
const ld pi = acos((ld) - 1);
b0[i] = b[i] % base;
//BEGIN ALGO
b1[i] = b[i] / base;
namespace FFT {
}
struct com {
ld x, y;
vector<int> a01 = a0;
for (int i = 0; i < (int)a.size(); i++)
com(ld _x = 0, ld _y = 0) : x(_x), y(_y) {}
addmod(a01[i], a1[i], mod);
inline com operator + (const com &c) const {
vector<int> b01 = b0;
return com(x + c.x, y + c.y);
for (int i = 0; i < (int)b.size(); i++)
}
addmod(b01[i], b1[i], mod);
inline com operator - (const com &c) const {
return com(x - c.x, y - c.y);
vector<int> C = mult(a01, b01); // 1
}
inline com operator * (const com &c) const {
vector<int> a0b0 = mult(a0, b0); // 2
return com(x * c.x - y * c.y, x * c.y + y * c.x);
vector<int> a1b1 = mult(a1, b1); // 3
}
inline com conj() const {
vector<int> mid = C;
return com(x, -y);
} int wit = len;
}; for (int it = 0, j = i + len; it < len; ++it, ++i,
++j) {
const static int maxk = 21, maxn = (1 << maxk) + 1; com tmp = a[j] * ws[wit++];
com ws[maxn]; a[j] = a[i] - tmp;
int dp[maxn]; a[i] = a[i] + tmp;
com rs[maxn]; }
int n, k; }
int lastk = -1; }
}
void fft(com *a, bool torev = 0) {
if (lastk != k) { com a[maxn];
lastk = k; int mult(int na, int *_a, int nb, int *_b, long long *ans) {
dp[0] = 0; if (!na || !nb) {
return 0;
for (int i = 1, g = -1; i < n; ++i) { }
if (!(i & (i - 1))) { for (k = 0, n = 1; n < na + nb - 1; n <<= 1, ++k) ;
++g; assert(n < maxn);
} for (int i = 0; i < n; ++i) {
dp[i] = dp[i ^ (1 << g)] ^ (1 << (k - 1 - g)); a[i] = com(i < na ? _a[i] : 0, i < nb ? _b[i] : 0);
} }
fft(a);
ws[1] = com(1, 0); a[n] = a[0];
for (int two = 0; two < k - 1; ++two) { for (int i = 0; i <= n - i; ++i) {
ld alf = pi / n * (1 << (k - 1 - two)); a[i] = (a[i] * a[i] - (a[n - i] * a[n - i]).conj()) *
com cur = com(cos(alf), sin(alf)); com(0, (ld) - 1 / n / 4);
a[n - i] = a[i].conj();
int p2 = (1 << two), p3 = p2 * 2; }
for (int j = p2; j < p3; ++j) { fft(a, 1);
ws[j * 2 + 1] = (ws[j * 2] = ws[j]) * cur; int res = 0;
} for (int i = 0; i < n; ++i) {
} long long val = (long long) round(a[i].x);
} assert(abs(val - a[i].x) < 1e-1);
for (int i = 0; i < n; ++i) { if (val) {
if (i < dp[i]) { assert(i < na + nb - 1);
swap(a[i], a[dp[i]]); while (res < i) {
} ans[res++] = 0;
} }
if (torev) { ans[res++] = val;
for (int i = 0; i < n; ++i) { }
a[i].y = -a[i].y; }
} return res;
} }
for (int len = 1; len < n; len <<= 1) { };
for (int i = 0; i < n; i += len) {
int main() unsigned long long c, g;
{
// ios_base::sync_with_stdio(0); c = g = 0x80000000;
// cin.tie(NULL); cout.tie(NULL); for (; ;){
// freopen("in.txt","r",stdin); if ((g * g) > n) g ^= c;
c >>= 1;
int test, cases = 1; if (!c) return g;
g |= c;
}
}
return 0;
} unsigned long long fast_cbrt(unsigned long long n){
int r = 63;
unsigned long long x, res = 0;
6.8 Fast Integer Cube and Square Root for (; r >= 0; r -= 3){
res <<= 1;
x = (res * (res + 1) * 3) + 1;
unsigned int fast_sqrt(unsigned int n){
if ((n >> r) >= x){
unsigned int c, g;
res++;
n -= (x << r);
c = g = 0x8000;
}
for (; ;){
}
if ((g * g) > n) g ^= c;
c >>= 1;
return res;
if (!c) return g;
}
g |= c;
}
int main(){
}
}
int fast_cbrt(int n){
int x, r = 30, res = 0;
for (; r >= 0; r -= 3){
res <<= 1;
6.9 Fast Walsh-Hadamard Transform
x = (3 * res * (res + 1)) + 1;
if ((n >> r) >= x){ const int N = 1<<16;
res++;
n -= (x << r); template <typename T>
} struct FWT {
} void fwt(T io[], int n) {
for (int d = 1; d < n; d <<= 1) {
return res; for (int i = 0, m = d<<1; i < n; i += m) {
} for (int j = 0; j < d; j++) { /// Don’t
forget modulo if required
unsigned long long fast_sqrt(unsigned long long n){ T x = io[i+j], y = io[i+j+d];
io[i+j] = (x+y), io[i+j+d] = (x-y); #include <bits/stdtr1c++.h>
// xor
// io[i+j] = x+y; // and #define MAX 1010
// io[i+j+d] = x+y; // or #define MOD 1000000007
} using namespace std;
} namespace fool{
} #define MAXN 10000
}
void ufwt(T io[], int n) { tr1::unordered_map <unsigned long long, int> mp;
for (int d = 1; d < n; d <<= 1) { int inv, P[MAX], binomial[MAX][MAX], dp[MAXN][MAX];
for (int i = 0, m = d<<1; i < n; i += m) {
for (int j = 0; j < d; j++) { /// Don’t long long expo(long long x, long long n){
forget modulo if required x %= MOD;
T x = io[i+j], y = io[i+j+d]; long long res = 1;
/// Modular inverse if required here
io[i+j] = (x+y)>>1, io[i+j+d] = while (n){
(x-y)>>1; // xor if (n & 1) res = (res * x) % MOD;
// io[i+j] = x-y; // and x = (x * x) % MOD;
// io[i+j+d] = y-x; // or n >>= 1;
} }
}
} return (res % MOD);
} }
// a, b are two polynomials and n is size which is power of two
void convolution(T a[], T b[], int n) { void init(){
fwt(a, n); int i, j;
fwt(b, n); mp.clear();
for (int i = 0; i < n; i++) inv = expo(2, MOD - 2);
a[i] = a[i]*b[i];
ufwt(a, n); P[0] = 1;
} for (i = 1; i < MAX; i++){
// for a*a P[i] = (P[i - 1] << 1);
void self_convolution(T a[], int n) { if (P[i] >= MOD) P[i] -= MOD;
fwt(a, n); }
for (int i = 0; i < n; i++)
a[i] = a[i]*a[i]; for (i = 0; i < MAX; i++){
ufwt(a, n); for (j = 0; j <= i; j++){
} if (i == j || !j) binomial[i][j] = 1;
}; else{
FWT<ll> fwt; binomial[i][j] = (binomial[i - 1][j] + binomial[i -
1][j - 1]);
if (binomial[i][j] >= MOD) binomial[i][j] -= MOD;
}
6.10 Faulhaber’s Formula (Custom Algorithm) }
}
112

        for (i = 1; i < MAXN; i++){
            long long x = 1;
            for (j = 0; j < MAX; j++){
                dp[i][j] = dp[i - 1][j] + x;
                if (dp[i][j] >= MOD) dp[i][j] -= MOD;
                x = (x * i) % MOD;
            }
        }
    }

    /// Returns (1^k + 2^k + 3^k + .... n^k) % MOD
    long long F(unsigned long long n, int k){
        if (n < MAXN) return dp[n][k];

        if (n == 1) return 1;
        if (n == 2) return (P[k] + 1) % MOD;
        if (!k) return (n % MOD);
        if (k == 1){
            n %= MOD;
            return (((n * (n + 1)) % MOD) * inv) % MOD;
        }

        unsigned long long h = (n << 10LL) | k; /// Change hash function according to limits of n and k
        long long res = mp[h];
        if (res) return res;

        if (n & 1) res = F(n - 1, k) + expo(n, k);
        else{
            long long m, z;
            m = n >> 1;
            res = (F(m, k) * P[k]) % MOD;
            m--, res++;
            for (int i = 0; i <= k; i++){
                z = (F(m, i) * binomial[k][i]) % MOD;
                z = (z * P[i]) % MOD;
                res += z;
            }
        }

        res %= MOD;
        return (mp[h] = res);
    }

    long long faulhaber(unsigned long long n, int k){
        ///fool::init();
        return F(n, k);
    }
}

int main(){
    fool::init();
    int t, i, j;
    long long n, k, res;

    cin >> t;
    while (t--){
        cin >> n >> k;
        res = fool::faulhaber(n, k);
        cout << res << endl;
    }
    return 0;
}


6.11 Faulhaber's Formula

#include <stdio.h>
#include <string.h>
#include <stdbool.h>

#define MAX 2510
#define MOD 1000000007
#define clr(ar) memset(ar, 0, sizeof(ar))
#define read() freopen("lol.txt", "r", stdin)

int S[MAX][MAX], inv[MAX];

int expo(long long x, int n){
    x %= MOD;
    long long res = 1;

    while (n){
        if (n & 1) res = (res * x) % MOD;
        x = (x * x) % MOD;
        n >>= 1;
    }

    return (res % MOD);
}

void Generate(){
    int i, j;
    for (i = 0; i < MAX; i++) inv[i] = expo(i, MOD - 2);

    S[0][0] = 1;
    for (i = 1; i < MAX; i++){
        S[i][0] = 0;
        for (j = 1; j <= i; j++){
            S[i][j] = ( ((long long)S[i - 1][j] * j) + S[i - 1][j - 1]) % MOD;
        }
    }
}

int faulhaber(long long n, int k){
    n %= MOD;
    if (!k) return n;

    int j;
    long long res = 0, p = 1;
    for (j = 0; j <= k; j++){
        p = (p * (n + 1 - j)) % MOD;
        res = (res + (((S[k][j] * p) % MOD) * inv[j + 1])) % MOD;
    }

    return (res % MOD);
}

int main(){
    Generate();
    printf("%d\n", faulhaber(1001212, 1000));
    return 0;
}


6.12 Gauss Elimination Equations Mod Number Solutions

ll pow(ll base, ll p, ll MOD)
{
    if(p == 0) return 1;
    if(p % 2 == 0) { ll d = pow(base, p / 2, MOD); return (d * d) % MOD; }
    return (pow(base, p - 1, MOD) * base) % MOD;
}

ll inv(ll x, ll MOD) { return pow(x, MOD - 2, MOD); }

// If MOD equals 2, it becomes XOR operation and we can use vector of bitsets to build equation
// Complexity becomes 1/32

ll gauss(vector<vector<ll> > &a, ll MOD)
{
    int n = a.size(), m = a[0].size() - 1;

    for(int i = 0; i < n; i++)
        for(int j = 0; j <= m; j++)
            a[i][j] = (a[i][j] % MOD + MOD) % MOD;

    vector<int> where(m, -1);
    for(int col = 0, row = 0; col < m && row < n; col++)
    {
        int sel = row;
        for(int i = row; i < n; i++)
            if(a[i][col] > a[sel][col])
                sel = i;

        if(a[sel][col] == 0) { where[col] = -1; continue; }

        for(int i = col; i <= m; i++)
            swap(a[sel][i], a[row][i]);
        where[col] = row;

        ll c_inv = inv(a[row][col], MOD);
        for(int i = 0; i < n; i++)
            if(i != row)
            {
                if(a[i][col] == 0) continue;
                ll c = (a[i][col] * c_inv) % MOD;
                for(int j = 0; j <= m; j++)
                    a[i][j] = (a[i][j] - c * a[row][j] % MOD + MOD) % MOD;
            }

        row++;
    }

    vector<ll> ans(m, 0);
    ll result = 1;
    // for counting rank, take the count of where[i]==-1
    for(int i = 0; i < m; i++)
        if(where[i] != -1) ans[i] = (a[where[i]][m] * inv(a[where[i]][i], MOD)) % MOD;
        else result = (result * MOD) % mod;
    // This is validity check probably. May not be needed
    for(int i = 0; i < n; i++)
    {
        ll sum = a[i][m] % MOD;
        for(int j = 0; j < m; j++)
            sum = (sum + MOD - (ans[j] * a[i][j]) % MOD) % MOD;

        if(sum != 0) return 0;
    }

    return result;
}


6.13 Gauss Jordan Elimination

// Gauss-Jordan elimination with full pivoting.
//
// Uses:
// (1) solving systems of linear equations (AX=B)
// (2) inverting matrices (AX=I)
// (3) computing determinants of square matrices
//
// Running time: O(n^3)
//
// INPUT: a[][] = an nxn matrix
// b[][] = an nxm matrix
//
// OUTPUT: X = an nxm matrix (stored in b[][])
// A^{-1} = an nxn matrix (stored in a[][])
// returns determinant of a[][]

// Example used: LightOJ Snakes and Ladders

#include <iostream>
#include <vector>
#include <cmath>

using namespace std;

const double EPS = 1e-10;

typedef vector<int> VI;
typedef double T;
typedef vector<T> VT;
typedef vector<VT> VVT;

T GaussJordan(VVT &a, VVT &b) {
    const int n = a.size();
    const int m = b[0].size();
    VI irow(n), icol(n), ipiv(n);
    T det = 1;

    for (int i = 0; i < n; i++) {
        int pj = -1, pk = -1;
        for (int j = 0; j < n; j++) if (!ipiv[j])
            for (int k = 0; k < n; k++) if (!ipiv[k])
                if (pj == -1 || fabs(a[j][k]) > fabs(a[pj][pk])) { pj = j; pk = k; }
        if (fabs(a[pj][pk]) < EPS) { cerr << "Matrix is singular." << endl; exit(0); }
        ipiv[pk]++;
        swap(a[pj], a[pk]);
        swap(b[pj], b[pk]);
        if (pj != pk) det *= -1;
        irow[i] = pj;
        icol[i] = pk;

        T c = 1.0 / a[pk][pk];
        det *= a[pk][pk];
        a[pk][pk] = 1.0;
        for (int p = 0; p < n; p++) a[pk][p] *= c;
        for (int p = 0; p < m; p++) b[pk][p] *= c;
        for (int p = 0; p < n; p++) if (p != pk) {
            c = a[p][pk];
            a[p][pk] = 0;
            for (int q = 0; q < n; q++) a[p][q] -= a[pk][q] * c;
            for (int q = 0; q < m; q++) b[p][q] -= b[pk][q] * c;
        }

    }

    for (int p = n-1; p >= 0; p--) if (irow[p] != icol[p]) {
        for (int k = 0; k < n; k++) swap(a[k][irow[p]], a[k][icol[p]]);
    }

    return det;
}

int main() {
    const int n = 4;
    const int m = 2;
    double A[n][n] = { {1,2,3,4},{1,0,1,0},{5,3,2,4},{6,1,4,6} };
    double B[n][m] = { {1,2},{4,3},{5,6},{8,7} };
    VVT a(n), b(n);
    for (int i = 0; i < n; i++) {
        a[i] = VT(A[i], A[i] + n);
        b[i] = VT(B[i], B[i] + m);
    }

    double det = GaussJordan(a, b);

    // expected: 60
    cout << "Determinant: " << det << endl;

    // expected: -0.233333 0.166667 0.133333 0.0666667
    // 0.166667 0.166667 0.333333 -0.333333
    // 0.233333 0.833333 -0.133333 -0.0666667
    // 0.05 -0.75 -0.1 0.2
    cout << "Inverse: " << endl;
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++)
            cout << a[i][j] << ' ';
        cout << endl;
    }

    // expected: 1.63333 1.3
    // -0.166667 0.5
    // 2.36667 1.7
    // -1.85 -1.35
    cout << "Solution: " << endl;
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < m; j++)
            cout << b[i][j] << ' ';
        cout << endl;
    }
    return 0;
}


6.14 Gauss Xor

const int MAXN = (1 << 20);
const int MAXLOG = 64;

struct basis
{
    int64_t base[MAXLOG];

    void clear()
    {
        for(int i = MAXLOG - 1; i >= 0; i--)
            base[i] = 0;
    }

    void add(int64_t val)
    {
        for(int i = MAXLOG - 1; i >= 0; i--)
            if((val >> i) & 1)
            {
                if(!base[i]) { base[i] = val; return; }
                else val ^= base[i];
            }
    }

    inline int size()
    {
        int sz = 0;
        for(int i = 0; i < MAXLOG; i++)
            sz += (bool)(base[i]);
        return sz;
    }

    int64_t max_xor()
    {
        int64_t res = 0;
        for(int i = MAXLOG - 1; i >= 0; i--)
            if(!((res >> i) & 1) && base[i])
                res ^= base[i];
        return res;

    }

    bool can_create(int64_t val)
    {
        for(int i = MAXLOG - 1; i >= 0; i--)
            if(((val >> i) & 1) && base[i])
                val ^= base[i];

        return (val == 0);
    }
};


6.15 Gaussian 1

void gauss(vector< vector<double> > &A) {
    int n = A.size();

    for(int i = 0; i < n; i++){
        int r = i;
        for(int j = i+1; j < n; j++)
            if(fabs(A[j][i]) > fabs(A[r][i]))
                r = j;
        if(fabs(A[r][i]) < EPS) continue;
        if(r != i)
            for(int j = 0; j <= n; j++)
                swap(A[r][j], A[i][j]);
        for(int k = 0; k < n; k++){
            if(k != i){
                for(int j = n; j >= i; j--)
                    A[k][j] -= A[k][i]/A[i][i]*A[i][j];
            }
        }
    }

    // solve: A[x][n]/A[x][x] for each x
}


6.16 Gaussian 2

const double eps = 1e-9;

// *****may return empty vector
vector<double> gauss(vector<vector<double>> &a)
{
    int n = a.size(), m = a[0].size() - 1;

    vector<int> where(m, -1);
    for(int col = 0, row = 0; col < m && row < n; col++)
    {
        int sel = row;
        for(int i = row; i < n; i++)
            if(abs(a[i][col]) > abs(a[sel][col]))
                sel = i;

        if(abs(a[sel][col]) < eps) { where[col] = -1; continue; }

        for(int i = col; i <= m; i++)
            swap(a[sel][i], a[row][i]);
        where[col] = row;

        for(int i = 0; i < n; i++)
            if(i != row)
            {
                if(abs(a[i][col]) < eps) continue;
                double c = a[i][col] / a[row][col];
                for(int j = 0; j <= m; j++)
                    a[i][j] -= c * a[row][j];
            }

        row++;
    }

    vector<double> ans(m, 0);
    for(int i = 0; i < m; i++)
        if(where[i] != -1)
            ans[i] = a[where[i]][m] / a[where[i]][i];

    // Validity check?
    // May need to remove the following code
    for(int i = 0; i < n; i++)
    {
        double sum = a[i][m];
        for(int j = 0; j < m; j++)

            sum -= ans[j] * a[i][j];

        if(abs(sum) > eps) return vector<double>();
    }

    return ans;
}


6.17 Karatsuba

#define MAX 131072 /// Must be a power of 2
#define MOD 1000000007

unsigned long long temp[128];
int ptr = 0, buffer[MAX * 6];

/// n is a power of 2
void karatsuba(int n, int *a, int *b, int *res){ /// hash = 829512
    int i, j, h;

    if (n < 17){ /// Reduce recursive calls by setting a threshold
        for (i = 0; i < (n + n); i++) temp[i] = 0;
        for (i = 0; i < n; i++){
            if (a[i]){
                for (j = 0; j < n; j++){
                    temp[i + j] += ((long long)a[i] * b[j]);
                }
            }
        }
        for (i = 0; i < (n + n); i++) res[i] = temp[i] % MOD;
        return;
    }

    h = n >> 1;
    karatsuba(h, a, b, res);
    karatsuba(h, a + h, b + h, res + n);
    int *x = buffer + ptr, *y = buffer + ptr + h, *z = buffer + ptr + h + h;
    ptr += (h + h + n);
    for (i = 0; i < h; i++){
        x[i] = a[i] + a[i + h], y[i] = b[i] + b[i + h];
        if (x[i] >= MOD) x[i] -= MOD;
        if (y[i] >= MOD) y[i] -= MOD;
    }

    karatsuba(h, x, y, z);

    for (i = 0; i < n; i++) z[i] -= (res[i] + res[i + n]);
    for (i = 0; i < n; i++){
        res[i + h] = (res[i + h] + z[i]) % MOD;
        if (res[i + h] < 0) res[i + h] += MOD;
    }
    ptr -= (h + h + n);
}

/// multiplies two polynomial a(degree n) and b(degree m) and returns the result modulo MOD in a
/// returns the degree of the multiplied polynomial
/// note that a and b are changed in the process
int mul(int n, int *a, int m, int *b){ /// hash = 903808
    int i, r, c = (n < m ? n : m), d = (n > m ? n : m), *res = buffer + ptr;

    r = 1 << (32 - __builtin_clz(d) - (__builtin_popcount(d) == 1));
    for (i = d; i < r; i++) a[i] = b[i] = 0;
    for (i = c; i < d && n < m; i++) a[i] = 0;
    for (i = c; i < d && m < n; i++) b[i] = 0;

    ptr += (r << 1), karatsuba(r, a, b, res), ptr -= (r << 1);
    for (i = 0; i < (r << 1); i++) a[i] = res[i];
    return (n + m - 1);
}

int a[MAX * 2], b[MAX * 2];

int main(){
    int i, j, k, n = MAX - 10;
    for (i = 0; i < n; i++) a[i] = ran(1, 1000000000);
    for (i = 0; i < n; i++) b[i] = ran(1, 991929183);
    clock_t start = clock();
    mul(n, a, n, b);
    dbg(a[n / 2]);
    for (i = 0; i < (n << 1); i++){
        if (a[i] < 0) puts("YO");
    }
    printf("%0.5f\n", (clock() - start) / (1.0 * CLOCKS_PER_SEC));
    return 0;
}
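The split step above computes the cross coefficients from a single extra product, using the identity (A0 + A1)(B0 + B1) - A0*B0 - A1*B1. A minimal standalone sketch of that three-multiplication trick, without the modulo and shared-buffer management of the notebook version (all names here are illustrative, not from the notebook):

```cpp
#include <cassert>
#include <vector>
using namespace std;

// Multiply two polynomials of equal power-of-two length with Karatsuba:
// (A0 + A1*X)(B0 + B1*X) = A0*B0 + ((A0+A1)(B0+B1) - A0*B0 - A1*B1)*X + A1*B1*X^2,
// where X = x^(n/2). Plain long long arithmetic for clarity.
vector<long long> karatsuba_demo(const vector<long long> &a, const vector<long long> &b) {
    int n = a.size(); // assumed: a power of two, and a.size() == b.size()
    vector<long long> res(2 * n, 0);
    if (n <= 2) { // base case: schoolbook multiplication
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                res[i + j] += a[i] * b[j];
        return res;
    }
    int h = n / 2;
    vector<long long> alo(a.begin(), a.begin() + h), ahi(a.begin() + h, a.end());
    vector<long long> blo(b.begin(), b.begin() + h), bhi(b.begin() + h, b.end());
    vector<long long> asum(h), bsum(h);
    for (int i = 0; i < h; i++) { asum[i] = alo[i] + ahi[i]; bsum[i] = blo[i] + bhi[i]; }
    vector<long long> lo = karatsuba_demo(alo, blo);    // A0*B0
    vector<long long> hi = karatsuba_demo(ahi, bhi);    // A1*B1
    vector<long long> mid = karatsuba_demo(asum, bsum); // (A0+A1)*(B0+B1)
    for (int i = 0; i < n; i++) mid[i] -= lo[i] + hi[i]; // keep only the cross terms
    for (int i = 0; i < n; i++) {
        res[i] += lo[i];      // low part at x^0
        res[i + h] += mid[i]; // cross part at x^(n/2)
        res[i + n] += hi[i];  // high part at x^n
    }
    return res;
}
```

The notebook version does the same thing in place, reusing `buffer` for the temporary x, y, z arrays and reducing every coefficient modulo MOD.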

6.18 Linear Diophantine

int extended_euclid(int a, int b, int &x, int &y) {
    int xx = y = 0;
    int yy = x = 1;
    while (b) {
        int q = a / b;
        int t = b; b = a%b; a = t;
        t = xx; xx = x - q*xx; x = t;
        t = yy; yy = y - q*yy; y = t;
    }
    return a;
}

// Linear Diophantine Equation Solution: Given, a*x+b*y=c. Find valid x and y if possible.
bool linear_diophantine (int a, int b, int c, int & x0, int & y0, int & g) {
    g = extended_euclid (abs(a), abs(b), x0, y0);
    if (c % g != 0)
        return false;
    x0 *= c / g;
    y0 *= c / g;
    if (a < 0) x0 *= -1;
    if (b < 0) y0 *= -1;
    return true;
}

// for each integer k, // x1 = x + k * b/g // y1 = y - k * a/g
// is a solution to the equation where g = gcd(a,b).
void shift_solution (int & x, int & y, int a, int b, int cnt) {
    x += cnt * b;
    y -= cnt * a;
}

// Now How many solutions where x in range[x1,x2] and y in range[y1,y2] ?
int find_all_solutions(int a,int b,int c,int &minx,int &maxx,int &miny,int &maxy)
{
    int x,y,g;
    if(linear_diophantine(a,b,c,x,y,g) == 0) return 0;
    a/=g, b/=g;
    int sign_a = a>0 ? +1 : -1;
    int sign_b = b>0 ? +1 : -1;
    shift_solution (x, y, a, b, (minx - x) / b);
    if (x < minx)
        shift_solution (x, y, a, b, sign_b);
    if (x > maxx)
        return 0;
    int lx1 = x;
    shift_solution (x, y, a, b, (maxx - x) / b);
    if (x > maxx)
        shift_solution (x, y, a, b, -sign_b);
    int rx1 = x;
    shift_solution (x, y, a, b, - (miny - y) / a);
    if (y < miny)
        shift_solution (x, y, a, b, -sign_a);
    if (y > maxy)
        return 0;
    int lx2 = x;
    shift_solution (x, y, a, b, - (maxy - y) / a);
    if (y > maxy)
        shift_solution (x, y, a, b, sign_a);
    int rx2 = x;

    if (lx2 > rx2)
        swap (lx2, rx2);
    int lx = max (lx1, lx2);
    int rx = min (rx1, rx2);

    return (rx - lx) / abs(b) + 1;
}


6.19 Matrix Expo

struct Matrix
{
    ll mat[MAX][MAX];
    Matrix(){}
    // This initialization is important.
    // Input matrix should be initialized separately
    void init(int sz)
    {
        ms(mat,0);
        for(int i=0; i<sz; i++) mat[i][i]=1;
    }
} aux;

void matMult(Matrix &m, Matrix &m1, Matrix &m2, int sz)
{
    ms(m.mat,0);

    // This only works for square matrix
    FOR(i,0,sz)
    {
        FOR(j,0,sz)
        {
            FOR(k,0,sz)
            {
                m.mat[i][k]=(m.mat[i][k]+m1.mat[i][j]*m2.mat[j][k])%mod;
            }
        }
    }
}
/* We can also do this if MOD*MOD fits long long
long long MOD2 = MOD * MOD;
for(int i = 0; i < n; i++)
    for(int j = 0; j < n; j++) {
        long long tmp = 0;
        for(int k = 0; k < n; k++) {
            // Since A and B are taken modulo MOD, the product A[i][k] * B[k][j] is
            // not more than MOD * MOD.
            tmp += A[i][k] * 1ll * B[k][j];
            while(tmp >= MOD2) // Taking modulo MOD2 is easy, because we can do it by subtraction
                tmp -= MOD2;
        }
        result[i][j] = tmp % MOD; // One % operation per resulting element
    }
*/

Matrix expo(Matrix &M, int n, int sz)
{
    Matrix ret;
    ret.init(sz);

    if(n==0) return ret;
    if(n==1) return M;

    Matrix P=M;

    while(n!=0)
    {
        if(n&1)
        {
            aux=ret;
            matMult(ret,aux,P,sz);
        }

        n>>=1;

        aux=P; matMult(P,aux,aux,sz);
    }

    return ret;
}


6.20 Number Theoretic Transform

const ll mod=786433;

vi getdivs(int p)
{
    int q=p-1;
    vi div;
    for(int j=2; j*j<=q; j++)
    {
        if(q%j==0)
        {
            div.pb(j);
            while(q%j==0) q/=j;
        }
    }
    if(q!=1) div.pb(q);
    return div;
}

bool check(int e, int p, vi divs)
{
    for(auto d: divs)
    {
        if(bigmod((ll)e,(ll)(p-1)/d,(ll)p)==1)
            return false;
    }
    return true;
}

int getRoot(int p)

{
    int e=2;
    vi divs=getdivs(p);
    while(!check(e,p,divs)) e++;
    return e;
}

/* getRoot(mod) returns a value which is used as prr in the following code and G in the next one */
// Code 1
ll ipow(ll a, ll b, ll m = mod)
{
    ll ret = 1;
    while (b)
    {
        if (b & 1) ret = ret * a % m;
        a = a * a % m;
        b >>= 1;
    }
    return ret;
}
namespace fft{
    typedef ll base;
    void fft(vector<base> &a, bool inv){
        int n = a.size(), j = 0;
        vector<base> roots(n/2);
        for(int i=1; i<n; i++){
            int bit = (n >> 1);
            while(j >= bit){
                j -= bit;
                bit >>= 1;
            }
            j += bit;
            if(i < j) swap(a[i], a[j]);
        }
        int prr = 10; // Got from calling getRoot(mod);
        int ang = ipow(prr, (mod - 1) / n);
        if(inv) ang = ipow(ang, mod - 2);
        for(int i=0; i<n/2; i++){
            roots[i] = (i ? (1ll * roots[i-1] * ang % mod) : 1);
        }
        for(int i=2; i<=n; i<<=1){
            int step = n / i;
            for(int j=0; j<n; j+=i){
                for(int k=0; k<i/2; k++){
                    base u = a[j+k], v = a[j+k+i/2] * roots[step * k] % mod;
                    a[j+k] = (u+v+mod)% mod;
                    a[j+k+i/2] = (u-v+mod)%mod;
                }
            }
        }
        if(inv) for(int i=0; i<n; i++) a[i] *= ipow(n, mod-2), a[i] %= mod;
    }
    vector<ll> multiply(vector<ll> &v, vector<ll> &w){
        vector<base> fv(v.begin(), v.end()), fw(w.begin(), w.end());
        int n = 2; while(n < v.size() + w.size()) n <<= 1;
        fv.resize(n); fw.resize(n);
        fft(fv, 0); fft(fw, 0);
        for(int i=0; i<n; i++) fv[i] *= fw[i];
        fft(fv, 1);
        vector<ll> ret(n);
        for(int i=0; i<n; i++) ret[i] = fv[i];
        return ret;
    }
}

// Code 2
struct NTT
{
    vi A, B, w[2], rev;
    ll P, M, G;
    NTT(ll mod) {P=mod; G=10;}
    void init(ll n)
    {
        for(M=2; M<n; M<<=1);
        M<<=1;
        A.resize(M); B.resize(M);
        w[0].resize(M); w[1].resize(M); rev.resize(M);

        for(ll i=0; i<M; i++)
        {
            ll x=i, &y=rev[i];
            y=0;
            for(ll k=1; k<M; k<<=1, x>>=1)
                (y<<=1)|=(x&1);
        }

        ll x=bigmod(G,(P-1)/M,mod);
        ll y=bigmod(x,P-2,mod);

        w[0][0]=w[1][0]=1LL;

        for(ll i=1; i<M; i++)
        {
            w[0][i]=(w[0][i-1]*x)%P;
            w[1][i]=(w[1][i-1]*y)%P;
        }
    }
    void ntransform(vector<ll> &a, ll f)
    {
        for(ll i=0; i<M; i++)
        {
            if(i<rev[i]) swap(a[i], a[rev[i]]);
        }
        for(ll i=1; i<M; i<<=1)
        {
            for(ll j=0, t=M/(i<<1); j<M; j+=(i<<1))
            {
                for(ll k=0, l=0; k<i; k++, l+=t)
                {
                    ll x=a[j+k+i]*1LL*w[f][l]%P;
                    ll y=a[j+k];
                    a[j+k+i]=y-x<0?y-x+P:y-x;
                    a[j+k]=y+x>=P?y+x-P:y+x;
                }
            }
        }
        if(f)
        {
            ll x=bigmod(M,P-2,mod);
            for(ll i=0; i<M; i++) a[i]=a[i]*1LL*x%P;
        }
    }
    void multiply(vector<ll> &X, vector<ll> &Y, vector<ll> &res)
    {
        init(max(X.size(),Y.size()));
        for(ll i=0; i<M; i++) A[i]=B[i]=0;
        for(ll i=0; i<X.size(); i++) A[i]=X[i];
        for(ll i=0; i<Y.size(); i++) B[i]=Y[i];
        ntransform(A,0);
        ntransform(B,0);
        res.clear();
        res.resize(M);
        for(ll i=0; i<M; i++) res[i]=A[i]*1LL*B[i]%P;
        ntransform(res,1);
    }
};


6.21 Segmented Sieve

#define MAX 1000010
#define BASE_SQR 216
#define BASE_LEN 10010
#define BASE_MAX 46656
#define chkbit(ar, i) (((ar[(i) >> 6]) & (1 << (((i) >> 1) & 31))))
#define setbit(ar, i) (((ar[(i) >> 6]) |= (1 << (((i) >> 1) & 31))))

int p, primes[BASE_LEN];
unsigned int base[(BASE_MAX >> 6) + 5], isprime[(MAX >> 6) + 5];

void Sieve(){
    clr(base);
    int i, j, k;

    for (i = 3; i < BASE_SQR; i++, i++){
        if (!chkbit(base, i)){
            k = i << 1;
            for (j = (i * i); j < BASE_MAX; j += k){
                setbit(base, j);
            }
        }
    }

    p = 0;
    for (i = 3; i < BASE_MAX; i++, i++){
        if (!chkbit(base, i)){
            primes[p++] = i;
        }
    }
}

int SegmentedSieve(long long a, long long b){
    long long j, k, x;
    int i, d, counter = 0;

    if (a <= 2 && 2 <= b) counter = 1; /// 2 is counted separately if in range
    if (!(a & 1)) a++;
    if (!(b & 1)) b--;
    if (a > b) return counter;

    clr(isprime);
    for (i = 0; i < p; i++){
        x = primes[i];
        if ((x * x) > b) break;
        k = x << 1;
        j = x * ((a + x - 1) / x);
        if (!(j & 1)) j += x;
        else if (j == x) j += k;

        while (j <= b){
            setbit(isprime, j - a);
            j += k;
        }
    }

    /// Other primes in the range except 2 are added here
    d = (b - a + 1);
    for (i = 0; i < d; i++, i++){
        if (!chkbit(isprime, i) && (a + i) != 1) counter++;
    }
    return counter;
}

int main(){
    Sieve();
    int T = 0, t, i, j, a, b;

    scanf("%d", &t);
    while (t--){
        scanf("%d %d", &a, &b);
        printf("Case %d: %d\n", ++T, SegmentedSieve(a, b));
    }
    return 0;
}


6.22 Sieve (Bitmask)

#define LEN 78777
#define MAX 1000010
#define chkbit(ar, i) (((ar[(i) >> 6]) & (1 << (((i) >> 1) & 31))))
#define setbit(ar, i) (((ar[(i) >> 6]) |= (1 << (((i) >> 1) & 31))))
#define isprime(x) (( (x) && ((x)&1) && (!chkbit(ar, (x)))) || ((x) == 2))

int p, prime[LEN];
unsigned int ar[(MAX >> 6) + 5] = {0};

void Sieve(){
    int i, j, k;
    setbit(ar, 0), setbit(ar, 1);

    for (i = 3; (i * i) < MAX; i++, i++){
        if (!chkbit(ar, i)){
            k = i << 1;
            for (j = (i * i); j < MAX; j += k) setbit(ar, j);
        }
    }

    p = 0;
    prime[p++] = 2;
    for (i = 3; i < MAX; i++, i++){
        if (isprime(i)) prime[p++] = i;
    }
}

int main(){
    Sieve();
    printf("%d\n", p);

    int i;
    for (i = 0; i < 60; i++){
        if (isprime(i)) printf("%d\n", i);
    }
}


6.23 Sieve

vi primes;
bool status[MAX+7];

// Finds all the primes upto MAX
void sieve()
{
    for(int i=4; i<=MAX; i+=2)
        status[i]=true;

    for(int i=3; i*i<=MAX; i++)
    {
        if(!status[i])
        {
            for(int j=i*i; j<=MAX; j+=i+i)
                status[j]=true;
        }
    }

    primes.pb(2);

    FOR(i,3,MAX)
    {
        if(!status[i])
            primes.pb(i);
    }
}


6.24 Simplex

/*
 * Algorithm : Simplex ( Linear Programming )
 * Author : Simon Lo
 * Note: Simplex algorithm on augmented matrix a of dimension (m+1)x(n+1)
 * returns 1 if feasible, 0 if not feasible, -1 if unbounded
 * returns solution in b[] in original var order, max(f) in ret
 * form: maximize sum_j(a_mj*x_j)-a_mn s.t. sum_j(a_ij*x_j)<=a_in
 * in standard form.
 * To convert into standard form:
 * 1. if exists equality constraint, then replace by both >= and <=
 * 2. if variable x doesn't have nonnegativity constraint, then replace by
 *    difference of 2 variables like x1-x2, where x1>=0, x2>=0
 * 3. for a>=b constraints, convert to -a<=-b
 * note: watch out for -0.0 in the solution, algorithm may cycle
 * EPS = 1e-7 may give wrong answer, 1e-10 is better
 */

#define MAX 107
#define INF 1000000007
#define EPS (1e-12)

void Pivot( long m,long n,double A[MAX+7][MAX+7],long *B,long *N,long r,long c )
{
    long i,j;
    swap( N[c],B[r] );
    A[r][c] = 1/A[r][c];
    for( j=0;j<=n;j++ ) if( j!=c ) A[r][j] *= A[r][c];
    for( i=0;i<=m;i++ ){
        if( i!=r ){
            for( j=0;j<=n;j++ ) if( j!=c ) A[i][j] -= A[i][c]*A[r][j];
            A[i][c] = -A[i][c]*A[r][c];
        }
    }
}

long Feasible( long m,long n,double A[MAX+7][MAX+7],long *B,long *N )
{
    long r,c,i;
    double p,v;
    while( 1 ){
        for( p=INF,i=0;i<m;i++ ) if( A[i][n]<p ) p = A[r=i][n];
        if( p > -EPS ) return 1;
        for( p=0,i=0;i<n;i++ ) if( A[r][i]<p ) p = A[r][c=i];
        if( p > -EPS ) return 0;
        p = A[r][n]/A[r][c];
        for( i=r+1;i<m;i++ ){
            if( A[i][c] > EPS ){
                v = A[i][n]/A[i][c];
                if( v<p ) r=i,p=v;
            }
        }
        Pivot( m,n,A,B,N,r,c );
    }
}

long Simplex( long m,long n,double A[MAX+7][MAX+7],double *b,double &Ret )
{
    long B[MAX+7],N[MAX+7],r,c,i;
    double p,v;
    for( i=0;i<n;i++ ) N[i] = i;

    for( i=0;i<m;i++ ) B[i] = n+i;
    if( !Feasible( m,n,A,B,N ) ) return 0;
    while( 1 ){
        for( p=0,i=0;i<n;i++ ) if( A[m][i] > p ) p = A[m][c=i];
        if( p<EPS ){
            for( i=0;i<n;i++ ) if( N[i]<n ) b[N[i]] = 0;
            for( i=0;i<m;i++ ) if( B[i]<n ) b[B[i]] = A[i][n];
            Ret = -A[m][n];
            return 1;
        }
        for( p=INF,i=0;i<m;i++ ){
            if( A[i][c] > EPS ){
                v = A[i][n]/A[i][c];
                if( v<p ) p = v,r = i;
            }
        }
        if( p==INF ) return -1;
        Pivot( m,n,A,B,N,r,c );
    }
}

// Caution: long double can give TLE
typedef long double ld;
typedef vector<ld> vd;
typedef vector<vd> vvd;

const ld EPS = 1e-10;

struct LPSolver {
    int m, n;
    vi B, N;
    vvd D;

    LPSolver(const vvd &A, const vd &b, const vd &c) :
        m(b.size()), n(c.size()), N(n + 1), B(m), D(m + 2, vd(n + 2)) {
        for (int i = 0; i < m; i++) for (int j = 0; j < n; j++) D[i][j] = A[i][j];
        for (int i = 0; i < m; i++) { B[i] = n + i; D[i][n] = -1; D[i][n + 1] = b[i]; }
        for (int j = 0; j < n; j++) { N[j] = j; D[m][j] = -c[j]; }
        N[n] = -1; D[m + 1][n] = 1;
    }

    void Pivot(int r, int s) {
        ld inv = 1.0 / D[r][s];
        for (int i = 0; i < m + 2; i++) if (i != r)
            for (int j = 0; j < n + 2; j++) if (j != s)
                D[i][j] -= D[r][j] * D[i][s] * inv;
        for (int j = 0; j < n + 2; j++) if (j != s) D[r][j] *= inv;
        for (int i = 0; i < m + 2; i++) if (i != r) D[i][s] *= -inv;
        D[r][s] = inv;
        swap(B[r], N[s]);
    }

    bool Simplex(int phase) {
        int x = phase == 1 ? m + 1 : m;
        while (true) {
            int s = -1;
            for (int j = 0; j <= n; j++) {
                if (phase == 2 && N[j] == -1) continue;
                if (s == -1 || D[x][j] < D[x][s] || D[x][j] == D[x][s] && N[j] < N[s]) s = j;
            }
            if (D[x][s] > -EPS) return true;
            int r = -1;
            for (int i = 0; i < m; i++) {
                if (D[i][s] < EPS) continue;
                if (r == -1 || D[i][n + 1] / D[i][s] < D[r][n + 1] / D[r][s] ||
                    (D[i][n + 1] / D[i][s]) == (D[r][n + 1] / D[r][s]) && B[i] < B[r]) r = i;
            }
            if (r == -1) return false;
            Pivot(r, s);
        }
    }

    ld Solve(vd &x) {
        int r = 0;
        for (int i = 1; i < m; i++) if (D[i][n + 1] < D[r][n + 1]) r = i;
        if (D[r][n + 1] < -EPS) {
            Pivot(r, n);
            if (!Simplex(1) || D[m + 1][n + 1] < -EPS) return -numeric_limits<ld>::infinity();
            for (int i = 0; i < m; i++) if (B[i] == -1) {
                int s = -1;
                for (int j = 0; j <= n; j++)
                    if (s == -1 || D[i][j] < D[i][s] || D[i][j] == D[i][s] && N[j] < N[s]) s = j;
                Pivot(i, s);
            }

        }
        if (!Simplex(2)) return numeric_limits<ld>::infinity();
        x = vd(n);
        for (int i = 0; i < m; i++) if (B[i] < n) x[B[i]] = D[i][n + 1];
        return D[m][n + 1];
    }
};

/* Equations are of the matrix form Ax<=b, and we want to maximize
the function c. We are given coeffs of A, b and c. In case of minimizing,
we negate the coeffs of c and maximize it. Then the negative of returned
'value' is the answer.
All the constraints should be in <= form. So we may need to negate the coeffs.
*/

int main() {
    const int m = 4;
    const int n = 3;
    ld _A[m][n] = {
        { 6, -1, 0 },
        { -1, -5, 0 },
        { 1, 5, 1 },
        { -1, -5, -1 }
    };
    ld _b[m] = { 10, -4, 5, -5 };
    ld _c[n] = { 1, -1, 0 };

    vvd A(m);
    vd b(_b, _b + m);
    vd c(_c, _c + n);
    for (int i = 0; i < m; i++) A[i] = vd(_A[i], _A[i] + n);

    LPSolver solver(A, b, c);
    vd x;
    ld value = solver.Solve(x);

    cerr << "VALUE: " << value << endl; // VALUE: 1.29032
    cerr << "SOLUTION:"; // SOLUTION: 1.74194 0.451613 1
    for (size_t i = 0; i < x.size(); i++) cerr << " " << x[i];
    cerr << endl;
    return 0;
}


6.25 Sum of Kth Power

LL mod;
LL S[105][105];
// Find 1^k+2^k+...+n^k % mod
void solve() {
    LL n, k;
    scanf("%lld %lld %lld", &n, &k, &mod);
    /*
    x^k = sum (i=1 to k) Stirling2(k, i) * i! * ncr(x, i)
    sum (x = 0 to n) x^k
        = sum (i = 0 to k) Stirling2(k, i) * i! * sum (x = 0 to n) ncr(x, i)
        = sum (i = 0 to k) Stirling2(k, i) * i! * ncr(n + 1, i + 1)
        = sum (i = 0 to k) Stirling2(k, i) * i! * (n + 1)! / (i + 1)! / (n - i)!
        = sum (i = 0 to k) Stirling2(k, i) * (n - i + 1) * (n - i + 2) * ... (n + 1) / (i + 1)
    */
    S[0][0] = 1 % mod;
    for (int i = 1; i <= k; i++) {
        for (int j = 1; j <= i; j++) {
            if (i == j) S[i][j] = 1 % mod;
            else S[i][j] = (j * S[i - 1][j] + S[i - 1][j - 1]) % mod;
        }
    }

    LL ans = 0;
    for (int i = 0; i <= k; i++) {
        LL fact = 1, z = i + 1;
        for (LL j = n - i + 1; j <= n + 1; j++) {
            LL mul = j;
            if (mul % z == 0) {
                mul /= z;
                z /= z;
            }
            fact = (fact * mul) % mod;
        }
        ans = (ans + S[k][i] * fact) % mod;
    }
    printf("%lld\n", ans);
}
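The Stirling-number derivation in the comment above can be checked directly on small inputs. A sketch of the same identity in plain integer arithmetic, without the modular bookkeeping of the notebook version (the helper name is illustrative):

```cpp
#include <cassert>
#include <vector>
using namespace std;

// 1^k + 2^k + ... + n^k = sum_{i=0..k} Stirling2(k, i) * i! * C(n + 1, i + 1), for k >= 1
long long sum_kth_power(long long n, int k) {
    // Stirling numbers of the second kind, same recurrence as S[][] above
    vector<vector<long long>> S(k + 1, vector<long long>(k + 1, 0));
    S[0][0] = 1;
    for (int i = 1; i <= k; i++)
        for (int j = 1; j <= i; j++)
            S[i][j] = j * S[i - 1][j] + S[i - 1][j - 1];

    long long ans = 0;
    for (int i = 0; i <= k; i++) {
        long long comb = 1; // C(n + 1, i + 1), built incrementally (each prefix is integral)
        for (long long j = 0; j <= i; j++) comb = comb * (n + 1 - j) / (j + 1);
        long long fact = 1; // i!
        for (int j = 2; j <= i; j++) fact *= j;
        ans += S[k][i] * fact * comb; // term of the identity above
    }
    return ans;
}
```

The notebook version computes the same sum but folds the division by (i + 1) into the running product so everything stays valid under an arbitrary modulus.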

7 Miscellaneous

7.1 Bit Hacks

unsigned int reverse_bits(unsigned int v){
    v = ((v >> 1) & 0x55555555) | ((v & 0x55555555) << 1);
    v = ((v >> 2) & 0x33333333) | ((v & 0x33333333) << 2);
    v = ((v >> 4) & 0x0F0F0F0F) | ((v & 0x0F0F0F0F) << 4);
    v = ((v >> 8) & 0x00FF00FF) | ((v & 0x00FF00FF) << 8);
    return ((v >> 16) | (v << 16));
}

/// Returns i if x = 2^i and 0 otherwise
int bitscan(unsigned int x){
    __asm__ volatile("bsf %0, %0" : "=r" (x) : "0" (x));
    return x;
}

/// Returns next number with same number of 1 bits
unsigned int next_combination(unsigned int x){
    unsigned int y = x & -x;
    x += y;
    unsigned int z = x & -x;
    z -= y;
    z = z >> bitscan(z & -z);
    return x | (z >> 1);
}

int main(){
}


7.2 Divide and Conquer on Queries

/* You are given an array a[] of size n, an integer m and a bunch of queries (l,r).
For each query, you have to answer the number of subsequences of the subarray (a[l]...a[r])
whose sum is divisible by m.
*/
const int N = 2 * MAX + 7;
int n, m, a[N];
int input[N][3]; // input queries, answer stored in input[i][2]
int dp_left[N][20], dp_right[N][20];
int ans[N];

void solve(int L, int R, vi all)
{
    if(L>R || all.empty()) return;
    // initialize only this range
    FOR(i,L,R+1) FOR(j,0,m) dp_left[i][j] = 0, dp_right[i][j] = 0;

    int mid = (L+R)/2;

    dp_left[mid][0] = 1;
    dp_right[mid-1][0] = 1;
    // calculate the number of subsequences starting from mid-1 to L in dp_left
    for(int i=mid-1; i>=L; i--)
    {
        for(int j=0; j<m; j++)
        {
            int taken = (a[i]+j)%m;

            dp_left[i][taken] = (dp_left[i][taken] + dp_left[i+1][j]) % mod;
            dp_left[i][j] = (dp_left[i][j] + dp_left[i+1][j]) % mod;
        }
    }
    // calculate the number of subsequences starting from mid to R in dp_right
    for(int i=mid; i<=R; i++)
    {
        for(int j=0; j<m; j++)
        {
            int taken = (a[i]+j)%m;

            dp_right[i][taken] = (dp_right[i][taken] + dp_right[i-1][j]) % mod;
            dp_right[i][j] = (dp_right[i][j] + dp_right[i-1][j]) % mod;
        }
    }

    vi ls, rs;
    for(auto idx: all)
    {

        int l = input[idx][0], r = input[idx][1];

        if(l>mid) rs.pb(idx);
        else if(r<mid) ls.pb(idx);
        else
        {
            if(l==r && l==mid) // query is just on mid, specially handled
            {
                ans[idx] = ((a[mid] % m == 0) ? 2: 1);
            }
            else if(l==mid) // starts from mid
            {
                ans[idx] = dp_right[r][0];
            }
            else if(r==mid) // ends in mid
            {
                int rem = a[mid] % m;

                ans[idx] = dp_left[l][0];
                ans[idx] = (ans[idx] + dp_left[l][(m-rem)%m]) % mod;
            }
            else
            {
                // merge both sides and calculate answer for current query
                for(int j=0; j<m; j++)
                {
                    ans[idx] = (ans[idx] + (dp_left[l][j] * dp_right[r][(m-j)%m]) % mod) % mod;
                }
            }
        }
    }
    // find answer for other queries by divide and conquer
    solve(L,mid,ls);
    solve(mid+1,R,rs);
}


7.3 Gilbert Curve for Mo

inline int64_t gilbertOrder(int x, int y, int pow, int rotate) {
    if (pow == 0) {
        return 0;
    }
    int hpow = 1 << (pow-1);
    int seg = (x < hpow) ? (
        (y < hpow) ? 0 : 3
    ) : (
        (y < hpow) ? 1 : 2
    );
    seg = (seg + rotate) & 3;
    const int rotateDelta[4] = {3, 0, 0, 1};
    int nx = x & (x ^ hpow), ny = y & (y ^ hpow);
    int nrot = (rotate + rotateDelta[seg]) & 3;
    int64_t subSquareSize = int64_t(1) << (2*pow - 2);
    int64_t ans = seg * subSquareSize;
    int64_t add = gilbertOrder(nx, ny, pow-1, nrot);
    ans += (seg == 1 || seg == 2) ? add : (subSquareSize - add - 1);
    return ans;
}

struct Query {
    int l, r, idx; // queries
    int64_t ord; // Gilbert order of a query
    // call query[i].calcOrder() to calculate the Gilbert orders
    inline void calcOrder() {
        ord = gilbertOrder(l, r, 21, 0);
    }
};
// sort the queries based on the Gilbert order
inline bool operator<(const Query &a, const Query &b) {
    return a.ord < b.ord;
}


7.4 HakmemItem175

/// Only for non-negative integers
/// Returns the immediate next number with same count of one bits, -1 on failure
long long hakmemItem175(long long n){
    if (n == 0) return -1;
    long long x = (n & -n);
    long long left = (x + n);

    long long right = ((n ^ left) / x) >> 2;
    long long res = (left | right);
    return res;
}

/// Returns the immediate previous number with same count of one bits, -1 on failure
long long lol(long long n){
    if (n == 0 || n == 1) return -1;
    long long res = ~hakmemItem175(~n);
    return (res == 0) ? -1 : res;
}


7.5 Header

// g++ -O2 -static -std=c++11 source.cpp
#pragma comment(linker, "/stack:200000000")
#pragma GCC optimize("unroll-loops")

#include <bits/stdc++.h>

using namespace std;

typedef long long ll;
typedef vector <int> vi;
typedef vector <string> vs;
typedef pair <int, int> pii;
typedef vector<pii > vpii;

#define MP make_pair
#define SORT(a) sort (a.begin(), a.end())
#define REVERSE(a) reverse (a.begin(), a.end())
#define ALL(a) a.begin(), a.end()
#define PI acos(-1)
#define ms(x,y) memset (x, y, sizeof (x))
#define inf 1e9
#define INF 1e16
#define pb push_back
#define MAX 100005
#define debug(a,b) cout<<a<<": "<<b<<endl
#define Debug cout<<"Reached here"<<endl
#define prnt(a) cout<<a<<"\n"
#define mod 1000000007LL
#define FOR(i,a,b) for (int i=(a); i<(b); i++)
#define FORr(i,a,b) for (int i=(a); i>=(b); i--)
#define itrALL(c,itr) for(__typeof((c).begin()) itr=(c).begin();itr!=(c).end();itr++)
#define lc ((node)<<1)
#define rc ((node)<<1|1)
#define VecPrnt(v) FOR(J,0,v.size()) cout<<v[J]<<" "; cout<<endl
#define endl "\n"
#define PrintPair(x) cout<<x.first<<" "<<x.second<<endl
#define EPS 1e-9
#define ArrPrint(a,st,en) for(int J=st; J<=en; J++) cout<<a[J]<<" "; cout<<endl;

/* Direction Array */
// int fx[]={1,-1,0,0};
// int fy[]={0,0,1,-1};
// int fx[]={0,0,1,-1,-1,1,-1,1};
// int fy[]={-1,1,0,0,1,1,-1,-1};

/***************** END OF HEADER *****************/
int main()
{
    // ios_base::sync_with_stdio(0);
    // cin.tie(NULL); cout.tie(NULL);
    // freopen("in.txt","r",stdin);

    int test, cases = 1;

    return 0;
}


7.6 Integral Determinant

#include <stdio.h>
#include <string.h>
#include <stdbool.h>

#define MAX 1010
#define clr(ar) memset(ar, 0, sizeof(ar))
#define read() freopen("lol.txt", "r", stdin)

const long long MOD = 4517409488245517117LL; for (k = i; k < n; k++){


const long double OP = (long double)1 / 4517409488245517117LL; ar[j][k] = ar[j][k] + mul(x, ar[i][k]);
if (ar[j][k] >= MOD) ar[j][k] -= MOD;
long long mul(long long a, long long b){ }
long double res = a; }
res *= b; }
long long c = (long long)(res * OP); return counter;
a *= b; }
a -= c * MOD;
if (a >= MOD) a -= MOD; /// Finds the determinant of a square matrix
if (a < 0) a += MOD; /// Returns 0 if the matrix is singular or degenerate (hence no
return a; determinant exists)
} /// Absolute value of final answer should be < MOD / 2

long long expo(long long x, long long n){ long long determinant(int n, long long ar[MAX][MAX]){
long long res = 1; int i, j, free;
long long res = 1;
while (n){
if (n & 1) res = mul(res, x); for (i = 0; i < n; i++){
x = mul(x, x); for (j = 0; j < n; j++){
n >>= 1; if (ar[i][j] < 0) ar[i][j] += MOD;
} }
}
return res;
} free = gauss(n, ar);
if (free == -1) return 0; /// Determinant is 0 so matrix is not
int gauss(int n, long long ar[MAX][MAX]){ invertible, singular or degenerate matrix
long long x, y;
int i, j, k, l, p, counter = 0; for (i = 0; i < n; i++) res = mul(res, ar[i][i]);
if (free & 1) res = MOD - res;
for (i = 0; i < n; i++){ if ((MOD - res) < res) res -= MOD; /// Determinant can be negative so
for (p = i, j = i + 1; j < n && !ar[p][i]; j++){ if determinant is more close to MOD than 0, make it negative
p = j;
} return res;
if (!ar[p][i]) return -1; }

for (j = i; j < n; j++){ int n;


x = ar[p][j], ar[p][j] = ar[i][j], ar[i][j] = x; long long ar[MAX][MAX];
}
int main(){
if (p != i) counter++; int t, i, j, k, l;
for (j = i + 1; j < n; j++){
x = expo(ar[i][i], MOD - 2); while (scanf("%d", &n) != EOF){
x = mul(x, MOD - ar[j][i]); if (n == 0) break;

for (i = 0; i < n; i++){ Generate();


for (j = 0; j < n; j++){ printf("%d\n", inv[35]);
scanf("%lld", &ar[i][j]); printf("%d\n", expo(fact[35], MOD - 2));
} return 0;
} }

printf("%lld\n", determinant(n, ar));


}
return 0; 7.8 Josephus Problem
}
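The determinant code above works with a special large modulus; the same elimination idea can be sketched with a conventional prime modulus and Fermat inverses (a minimal illustration, not the notebook's overflow-safe version):

```cpp
#include <cassert>
#include <vector>
using namespace std;
typedef long long ll;

ll power_mod(ll b, ll e, ll m) {
    ll r = 1; b %= m;
    for (; e; e >>= 1, b = b * b % m)
        if (e & 1) r = r * b % m;
    return r;
}

// Determinant of a square matrix modulo a prime p via Gaussian elimination.
// Each pivot is multiplied into the result; a row swap flips the sign.
ll det_mod(vector<vector<ll>> a, ll p) {
    int n = a.size(), sign = 0;
    ll det = 1;
    for (int i = 0; i < n; i++) {
        int piv = i;
        while (piv < n && a[piv][i] % p == 0) piv++;
        if (piv == n) return 0;                 // singular matrix
        if (piv != i) { swap(a[piv], a[i]); sign ^= 1; }
        ll pivval = (a[i][i] % p + p) % p;
        det = det * pivval % p;
        ll inv = power_mod(pivval, p - 2, p);   // Fermat inverse, p prime
        for (int j = i + 1; j < n; j++) {
            ll f = (a[j][i] % p + p) % p * inv % p;
            for (int k = i; k < n; k++)
                a[j][k] = ((a[j][k] - f * a[i][k]) % p + p) % p;
        }
    }
    return sign ? (p - det) % p : det;
}
```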
/// Josephus problem, n people numbered from 1 to n stand in a circle.
/// Counting starts from 1 and every k'th person dies
/// Returns the position of the m'th person killed
7.7 Inverse Modulo 1 to N (Linear) /// For example if n = 10 and k = 3, then the people killed are 3, 6, 9,
2, 7, 1, 8, 5, 10, 4 respectively
int fact[MAX], inv[MAX];
int expo(int a, int b){ /// O(n)
int res = 1; int josephus(int n, int k, int m){
int i;
while (b){ for (m = n - m, i = m + 1; i <= n; i++){
if (b & 1) res = (long long)res * a % MOD; m += k;
a = (long long)a * a % MOD; if (m >= i) m %= i;
b >>= 1; }
} return m + 1;
return res; }
}
void Generate(){ /// O(k log(n))
int i, x; long long josephus2(long long n, long long k, long long m){ /// hash =
for (fact[0] = 1, i = 1; i < MAX; i++) fact[i] = ((long long)i * 583016
fact[i - 1]) % MOD; m = n - m;
if (k <= 1) return n - m;
/// inv[i] = Inverse modulo of fact[i]
inv[MAX - 1] = expo(fact[MAX - 1], MOD - 2); long long i = m;
for (i = MAX - 2; i >= 0; i--) inv[i] = ((long long)inv[i + 1] * (i + while (i < n){
1)) % MOD; long long r = (i - m + k - 2) / (k - 1);
if ((i + r) > n) r = n - i;
/// Inverse modulo of numbers 1 to MAX in linear time below else if (!r) r = 1;
inv[1] = 1; i += r;
for (i = 2; i < MAX; i++){ m = (m + (r * k)) % i;
inv[i] = MOD - ((MOD / i) * (long long)inv[MOD % i]) % MOD; }
if (inv[i] < 0) inv[i] += MOD; return m + 1;
} }
}
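The linear recurrence inv[i] = -(MOD / i) * inv[MOD mod i] used above can be restated self-contained and checked for a prime modulus:

```cpp
#include <cassert>
#include <vector>
using namespace std;
typedef long long ll;

// Modular inverses of 1..n in O(n), same recurrence as above; MOD must be prime.
vector<ll> inverses(int n, ll MOD) {
    vector<ll> inv(n + 1);
    inv[1] = 1;
    for (int i = 2; i <= n; i++)
        inv[i] = (MOD - (MOD / i) * inv[MOD % i] % MOD) % MOD;
    return inv;
}
```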
int main(){
int main(){ int n, k, m;

printf("%d\n", josephus(10, 1, 2)); stk.push(make_pair(v[i], i));


printf("%d\n", josephus(10, 1, 10)); }
} while (stk.top().second > -1) {
res[stk.top().second] = v.size(); stk.pop();
}
}
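As a quick check of the O(n) Josephus routine in this section, here is a self-contained restatement with the same recurrence; the asserts follow the kill order 3, 6, 9, 2, 7, 1, 8, 5, 10, 4 stated in the comment for n = 10, k = 3:

```cpp
#include <cassert>

// Position (1-based) of the m-th person killed when every k-th of n people
// dies. Same recurrence as the notebook's O(n) josephus().
int josephus(int n, int k, int m) {
    m = n - m;                        // convert "m-th killed" to survivor index
    for (int i = m + 1; i <= n; i++) {
        m += k;
        if (m >= i) m %= i;
    }
    return m + 1;
}
```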
7.9 MSB Position in O(1)

int msb(unsigned x)
{ 7.11 Next Small
union {
double a; int b[2]; #include <stdio.h>
}; #include <string.h>
a = x; #include <stdbool.h>
return (b[1] >> 20) - 1023;
} #define MAX 250010
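The union-based msb() above type-puns through an IEEE-754 double, which relies on endianness and implementation-defined behavior. On GCC/Clang a hedged alternative (assumed available; not part of the original notebook) is the count-leading-zeros builtin:

```cpp
#include <cassert>

// Index of the most significant set bit (0-based).
// Assumes x != 0, as does the double-punning version above;
// __builtin_clz is undefined for 0.
int msb_builtin(unsigned x) {
    return 31 - __builtin_clz(x);
}
```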
#define clr(ar) memset(ar, 0, sizeof(ar))
#define read() freopen("lol.txt", "r", stdin)
7.10 Nearest Smaller Values on Left-Right int ar[MAX], L[MAX], R[MAX], stack[MAX], time[MAX];

// Linear time all nearest smaller values, standard stack-based algorithm. void next_small(int n, int* ar, int* L){
// ansv_left stores indices of nearest smaller values to the left in res. int i, j, k, l, x, top = 0;
-1 means no smaller value was found.
// ansv_right likewise looks to the right. v.size() means no smaller for (i = 0; i < n; i++){
value was found. x = ar[i];
void ansv_left(vector<int>& v, vector<int>& res) { if (top && stack[top] >= x){
stack<pair<int, int> > stk; stk.push(make_pair(INT_MIN, v.size())); while (top && stack[top] >= x) k = time[top--];
for (int i = v.size()-1; i >= 0; i--) { L[i] = (i - k + 2);
while (stk.top().first > v[i]) { stack[++top] = x;
res[stk.top().second] = i; stk.pop(); time[top] = k;
} }
stk.push(make_pair(v[i], i)); else{
} L[i] = 1;
while (stk.top().second < v.size()) { stack[++top] = x;
res[stk.top().second] = -1; stk.pop(); time[top] = i + 1;
} }
} }
}
void ansv_right(vector<int>& v, vector<int>& res) { /*** L[i] contains maximum length of the range from i to the left such
stack<pair<int, int> > stk; stk.push(make_pair(INT_MIN, -1)); that the minimum of this range
for (int i = 0; i < v.size(); i++) { is not less than ar[i].
while (stk.top().first > v[i]) { Similarly, R[i] contains maximum length of the range from i to the
res[stk.top().second] = i; stk.pop(); right such that the minimum
} of this range is not less than ar[i]

For example, ar[] = 5 3 4 3 1 2 6 // Seeding non-deterministically


L[] = 1 2 1 4 5 1 1 mt19937 rng(chrono::steady_clock::now().time_since_epoch().count());
R[] = 1 3 1 1 3 2 1
***/ random_device rd;
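The L[] example in the comment above can be reproduced with a compact stack sketch (a restatement of the idea, not the notebook's exact code): each stack entry pairs a value with the span it already dominates.

```cpp
#include <cassert>
#include <stack>
#include <utility>
#include <vector>
using namespace std;

// L[i] = length of the longest window ending at i whose minimum is >= ar[i].
vector<int> spanLeft(const vector<int>& ar) {
    vector<int> L(ar.size());
    stack<pair<int, int> > st;        // (value, accumulated span)
    for (size_t i = 0; i < ar.size(); i++) {
        int len = 1;
        while (!st.empty() && st.top().first >= ar[i]) {
            len += st.top().second;   // absorb the popped block's span
            st.pop();
        }
        st.push(make_pair(ar[i], len));
        L[i] = len;
    }
    return L;
}
```

Running it on the comment's array ar = {5, 3, 4, 3, 1, 2, 6} gives the documented L[] = {1, 2, 1, 4, 5, 1, 1}.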
mt19937 mt(rd());
void fill(int n, int* ar, int* L, int* R){ uniform_real_distribution<double> r1(1.0, 10.0);
int i, j, k; uniform_int_distribution<int> r2(1,INT_MAX);
for (i = 0; i < n; i++) L[i] = ar[n - i - 1]; normal_distribution<double> r3(1.0,10.0);
exponential_distribution<> r4(5);
next_small(n, L, R);
next_small(n, ar, L); int main()
{
i = 0, j = n - 1; cout<<rng()<<endl;
while (i < j){ cout<<r1(mt)<<endl;
k = R[i], R[i] = R[j], R[j] = k; cout<<r2(mt)<<endl;
i++, j--; cout<<r4(mt)<<endl;
} return 0;
} }

int main(){
int n, i, j, k;
7.13 Russian Peasant Multiplication
scanf("%d", &n);
for (i = 0; i < n; i++) scanf("%d", &ar[i]);
// calculate (a*b)%m
// Particularly useful when a, b, m all are large like 1e18
fill(n, ar, L, R);
ll RussianPeasantMultiplication(ll a, ll b, ll m)
for (i = 0; i < n; i++) printf("%d ", ar[i]);
{
puts("");
ll ret=0;
for (i = 0; i < n; i++) printf("%d ", R[i]);
puts("");
while(b)
for (i = 0; i < n; i++) printf("%d ", L[i]);
{
puts("");
if(b&1)
return 0;
{
}
ret+=a;
if(ret>=m) ret-=m;
}
7.12 Random Number Generation a=(a<<1);

#include <bits/stdc++.h> if(a>=m) a-=m;


#include <random>
#include <chrono> b>>=1;
}
using namespace std;

return ret; equation(long double l, long double p, long double r, long double rhs
} = 0.0):
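The doubling idea behind RussianPeasantMultiplication can be restated self-contained; this sketch assumes a, b < m < 2^63 so neither the doubling nor the addition can overflow, and the test cross-checks against __int128:

```cpp
#include <cassert>
typedef unsigned long long ull;

// (a * b) % m without overflow for a, b < m < 2^63.
// Same add-and-double scheme as the notebook's routine above.
ull mulmod(ull a, ull b, ull m) {
    ull ret = 0;
    a %= m;
    while (b) {
        if (b & 1) {                  // add current power-of-two multiple of a
            ret += a;
            if (ret >= m) ret -= m;
        }
        a <<= 1;                      // double a modulo m
        if (a >= m) a -= m;
        b >>= 1;
    }
    return ret;
}
```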
l(l), p(p), r(r), rhs(rhs){}
};
/// Thomas algorithm to solve tri-diagonal system of equations in O(n)
7.14 Stable Marriage Problem vector <long double> thomas_algorithm(int n, vector <struct equation> ar){
ar[0].r = ar[0].r / ar[0].p;
// Gale-Shapley algorithm for the stable marriage problem. ar[0].rhs = ar[0].rhs / ar[0].p;
// madj[i][j] is the jth highest ranked woman for man i. for (int i = 1; i < n; i++){
// fpref[i][j] is the rank woman i assigns to man j. long double v = 1.0 / (ar[i].p - ar[i].l * ar[i - 1].r);
// Returns a pair of vectors (mpart, fpart), where mpart[i] gives ar[i].r = ar[i].r * v;
// the partner of man i, and fpart is analogous ar[i].rhs = (ar[i].rhs - ar[i].l * ar[i - 1].rhs) * v;
pair<vector<int>, vector<int> > stable_marriage(vector<vector<int> >& }
madj, vector<vector<int> >& fpref) { for (int i = n - 2; i >= 0; i--) ar[i].rhs = ar[i].rhs - ar[i].r *
int n = madj.size(); ar[i + 1].rhs;
vector<int> mpart(n, -1), fpart(n, -1); vector <long double> res;
vector<int> midx(n); for (int i = 0; i < n; i++) res.push_back(ar[i].rhs);
queue<int> mfree; return res;
for (int i = 0; i < n; i++) { }
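A minimal sketch of the same Thomas forward sweep / back substitution on plain doubles, exercised on the 2x2 system [[2,1],[1,2]] x = [3,3], whose solution is x = [1,1]:

```cpp
#include <cassert>
#include <cmath>
#include <vector>
using namespace std;

// Solve a tridiagonal system: l[i]*x[i-1] + p[i]*x[i] + r[i]*x[i+1] = rhs[i].
// l[0] and r[n-1] are unused, matching the struct equation convention above.
vector<double> thomas(vector<double> l, vector<double> p,
                      vector<double> r, vector<double> rhs) {
    int n = p.size();
    r[0] /= p[0];
    rhs[0] /= p[0];
    for (int i = 1; i < n; i++) {                 // forward elimination
        double v = 1.0 / (p[i] - l[i] * r[i - 1]);
        r[i] *= v;
        rhs[i] = (rhs[i] - l[i] * rhs[i - 1]) * v;
    }
    for (int i = n - 2; i >= 0; i--)              // back substitution
        rhs[i] -= r[i] * rhs[i + 1];
    return rhs;
}
```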
mfree.push(i);
}
while (!mfree.empty()) {
int m = mfree.front(); mfree.pop(); 7.16 U128
int f = madj[m][midx[m]++];
if (fpart[f] == -1) { #include <bits/stdtr1c++.h>
mpart[m] = f; fpart[f] = m;
} else if (fpref[f][m] < fpref[f][fpart[f]]) { using namespace std;
mpart[fpart[f]] = -1; mfree.push(fpart[f]);
mpart[m] = f; fpart[f] = m; typedef unsigned long long int U64;
} else {
mfree.push(m); struct U128{
} U64 lo, hi;
} static const U64 bmax = -1;
return make_pair(mpart, fpart); static const size_t sz = 128;
} static const size_t hsz = 64;
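The Gale-Shapley routine above, restated self-contained so it can be run on a toy instance (two men who both rank woman 0 first; woman 0 prefers man 0, so man 1 ends up with woman 1):

```cpp
#include <cassert>
#include <queue>
#include <utility>
#include <vector>
using namespace std;

// madj[i][j] = j-th choice woman of man i; fpref[i][j] = rank woman i gives
// man j (lower is better). Men propose in order; women keep their best offer.
pair<vector<int>, vector<int> > stable_marriage(vector<vector<int> > madj,
                                                vector<vector<int> > fpref) {
    int n = madj.size();
    vector<int> mpart(n, -1), fpart(n, -1), midx(n, 0);
    queue<int> mfree;
    for (int i = 0; i < n; i++) mfree.push(i);
    while (!mfree.empty()) {
        int m = mfree.front(); mfree.pop();
        int f = madj[m][midx[m]++];
        if (fpart[f] == -1) { mpart[m] = f; fpart[f] = m; }
        else if (fpref[f][m] < fpref[f][fpart[f]]) {
            mpart[fpart[f]] = -1; mfree.push(fpart[f]);   // dump old partner
            mpart[m] = f; fpart[f] = m;
        } else mfree.push(m);                             // rejected, retry
    }
    return make_pair(mpart, fpart);
}
```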

inline U128() : lo(0), hi(0) {}


inline U128(unsigned long long v) : lo(v), hi(0) {}
7.15 Thomas Algorithm
inline U128 operator-() const {
/// Equation of the form: (x_prev * l) + (x_cur * p) + (x_next * r) = rhs return ~U128(*this) + 1;
struct equation{ }
long double l, p, r, rhs;
inline U128 operator~() const {
equation(){} U128 t(*this);

t.lo = ~t.lo;
t.hi = ~t.hi; inline static void divide(const U128 &num, const U128 &den, U128
return t; &quo, U128 &rem) {
} if(den == 0) {
int a = 0;
inline U128 &operator +=(const U128 &b) { quo = U128(a / a);
if (lo > bmax - b.lo) ++hi; }
lo += b.lo; U128 n = num, d = den, x = 1, ans = 0;
hi += b.hi;
return *this; while((n >= d) && (((d >> (sz - 1)) & 1) == 0)) {
} x <<= 1;
d <<= 1;
inline U128 &operator -= (const U128 &b){ }
return *this += -b;
} while(x != 0) {
if(n >= d) {
inline U128 &operator *= (const U128 &b) { n -= d;
if (*this == 0 || b == 1) return *this; ans |= x;
if (b == 0){ }
lo = hi = 0; x >>= 1, d >>= 1;
return *this; }
} quo = ans, rem = n;
}
U128 a(*this);
U128 t = b; inline U128 &operator&=(const U128 &b) {
lo = hi = 0; hi &= b.hi;
lo &= b.lo;
for (size_t i = 0; i < sz; i++) { return *this;
if((t & 1) != 0) *this += (a << i); }
t >>= 1;
} inline U128 &operator|=(const U128 &b) {
return *this; hi |= b.hi;
} lo |= b.lo;
return *this;
inline U128 &operator /= (const U128 &b) { }
U128 rem;
divide(*this, b, *this, rem); inline U128 &operator<<=(const U128& rhs) {
return *this; size_t n = rhs.to_int();
} if (n >= sz) {
lo = hi = 0;
inline U128 &operator %= (const U128 &b) { return *this;
U128 quo; }
divide(*this, b, quo, *this);
return *this; if(n >= hsz) {
} n -= hsz;

hi = lo; }
lo = 0;
} inline bool operator < (const U128 &b) const {
return (hi == b.hi) ? lo < b.lo : hi < b.hi;
if(n != 0) { }
hi <<= n;
const U64 mask(~(U64(-1) >> n)); inline bool operator >= (const U128 &b) const {
hi |= (lo & mask) >> (hsz - n); return ! (*this < b);
lo <<= n; }
}
return *this; inline U128 operator & (const U128 &b) const {
} U128 a(*this); return a &= b;
}
inline U128 &operator>>=(const U128& rhs) {
size_t n = rhs.to_int(); inline U128 operator << (const U128 &b) const {
if (n >= sz) { U128 a(*this); return a <<= b;
lo = hi = 0; }
return *this;
} inline U128 operator >> (const U128 &b) const {
U128 a(*this); return a >>= b;
if(n >= hsz) { }
n -= hsz;
lo = hi; inline U128 operator * (const U128 &b) const {
hi = 0; U128 a(*this); return a *= b;
} }

if(n != 0) { inline U128 operator + (const U128 &b) const {


lo >>= n; U128 a(*this); return a += b;
const U64 mask(~(U64(-1) << n)); }
lo |= (hi & mask) << (hsz - n);
hi >>= n; inline U128 operator - (const U128 &b) const {
} U128 a(*this); return a -= b;
}
return *this;
} inline U128 operator % (const U128 &b) const {
U128 a(*this); return a %= b;
inline int to_int() const { return static_cast<int> (lo); } }
inline U64 to_U64() const { return lo; }
inline void print(){
inline bool operator == (const U128 &b) const { U128 x = *this;
return hi == b.hi && lo == b.lo; char str[128];
} int i, j, len = 0;

inline bool operator != (const U128 &b) const { do{


return !(*this == b); str[len++] = (x % 10).lo + 48;

x /= 10; simplify();
} while (x != 0); }

reverse(str, str + len); inline void simplify() {


str[len] = 0; U128 g = gcd(p, q);
puts(str); p /= g;
} q /= g;
}; }

inline U128 gcd(U128 a, U128 b){ inline Rational operator+ (const Rational &f) const {
if (b == 0) return a; return Rational(p * f.q + q * f.p, q * f.q);
return gcd(b, a % b); }
} inline Rational operator- (const Rational &f) const {
return Rational(p * f.q - q * f.p, q * f.q);
inline U128 expo(U128 b, U128 e){ }
U128 res = 1; inline Rational operator* (const Rational &f) const {
while (e != 0){ return Rational(p * f.p, q * f.q);
if ((e & 1) != 0) res *= b; }
e >>= 1, b *= b; inline Rational operator/ (const Rational &f) const {
} return Rational(p * f.q, q * f.p);
return res; }
} };

inline U128 expo(U128 x, U128 n, U128 m){ int main(){


U128 res = U128(1); U128 X = U128(9178291938173ULL);
while (n != 0){ U128 Y = U128(123456789123456ULL);
if ((n & 1) != 0){ U128 M = U128(10000000000000000000ULL);
res *= n;
res %= m; for (int i = 0; i < 10000; i++){
} U128 R = expo(X, Y, M);
x *= x; }
x %= m; return 0;
n >>= 1; }
}
return res % m;
}
7.17 Useful Templates
struct Rational{
U128 p, q;
template <class T> inline T bigmod(T p, T e, T M)
{
inline Rational(){
ll ret = 1;
p = 0, q = 1;
for (; e > 0; e >>= 1)
}
{
if (e & 1) ret = (ret * p) % M;
inline Rational(U128 P, U128 Q) : p(P), q(Q){
p = (p * p) % M;

} return (T)ret; string s;


} cin >> s;
ll fst = (s[0] == '-') ? 1 : 0;
template <class T> inline T gcd(T a, T b) {if (b == 0)return a; return __int128 v = 0;
gcd(b, a % b);} f(i,fst,s.size()) v = v * 10 + s[i] - '0';
template <class T> inline T modinverse(T a, T M) {return bigmod(a, M - 2, if(fst) v = -v;
M);} return v;
template <class T> inline T lcm(T a, T b) {a = abs(a); b = abs(b); return }
(a / gcd(a, b)) * b;}
template <class T, class X> inline bool getbit(T a, X i) { T t = 1; ostream& operator << (ostream& os,const __int128& v) {
return ((a & (t << i)) > 0);} string ret, sgn;
template <class T, class X> inline T setbit(T a, X i) { T t = 1; return __int128 n = v;
(a | (t << i)); } if(v < 0) sgn = "-", n = -v;
template <class T, class X> inline T resetbit(T a, X i) { T t = 1; return while(n) ret.pb(n % 10 + '0'), n /= 10;
(a & (~(t << i)));} reverse(all(ret));
ret = sgn + ret;
inline ll getnum() os << ret;
{ return os;
char c = getchar(); }
ll num, sign = 1;
for (; c < '0' || c > '9'; c = getchar())if (c == '-')sign = -1; int main(){
for (num = 0; c >= '0' && c <= '9';) __int128 n = input();
{ cout << n << endl;
c -= '0'; }
num = num * 10 + c;
c = getchar();
}
return num * sign; 8 Notes
}
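Since __int128 has no literals or stream support out of the box, values are built and printed digit by digit as in the helpers above; a minimal round-trip sketch on std::string:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
using namespace std;

// Parse a decimal string (optionally signed, assumed non-empty) into __int128.
__int128 parse128(const string& s) {
    __int128 v = 0;
    size_t i = (s[0] == '-');
    for (; i < s.size(); i++) v = v * 10 + (s[i] - '0');
    return (s[0] == '-') ? -v : v;
}

// Convert an __int128 back to its decimal string.
string to_string128(__int128 v) {
    if (v == 0) return "0";
    string sgn = (v < 0) ? "-" : "";
    if (v < 0) v = -v;
    string ret;
    while (v) { ret.push_back((char)('0' + (int)(v % 10))); v /= 10; }
    reverse(ret.begin(), ret.end());
    return sgn + ret;
}
```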

inline ll power(ll a, ll b) 9 String


{
ll multiply = 1; 9.1 A KMP Application
FOR(i, 0, b)
{
/* You are given a text t and a pattern p. For each index of t, find
multiply *= a;
how many proper prefixes of p ends in this position. Similarly, find how
}
many proper
return multiply;
suffixes start from this position.
}
While calculating the failure function, we can find for each position of
the pattern p
how many of its own prefixes end in that position. After calculating that
in dp[i],
7.18 int128 we can just fill table[i] for text t.
*/
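The failure-function machinery this note builds on can be illustrated with a minimal standard KMP (0-based pi convention; a simplification for illustration, not a copy of the code in this section):

```cpp
#include <cassert>
#include <string>
#include <vector>
using namespace std;

// Standard prefix function: pi[i] = length of the longest proper prefix of
// p[0..i] that is also a suffix of it.
vector<int> prefix_function(const string& p) {
    int n = p.size();
    vector<int> pi(n, 0);
    for (int i = 1; i < n; i++) {
        int k = pi[i - 1];
        while (k > 0 && p[i] != p[k]) k = pi[k - 1];
        if (p[i] == p[k]) k++;
        pi[i] = k;
    }
    return pi;
}

// Count (possibly overlapping) occurrences of p in t.
int count_matches(const string& t, const string& p) {
    vector<int> pi = prefix_function(p);
    int k = 0, cnt = 0;
    for (char c : t) {
        while (k > 0 && c != p[k]) k = pi[k - 1];
        if (c == p[k]) k++;
        if (k == (int)p.size()) { cnt++; k = pi[k - 1]; }   // match, fall back
    }
    return cnt;
}
```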
__int128 input(){

int pi[N], dp[N]; 9.2 Aho Corasick 2


void prefixFun(string &p)
{
int n;
int now;
string s, p[MAX];
pi[0]=now=-1;
map<char, int> node[MAX];
int root, nnode, link[MAX];
dp[0]=1; // 0th character is a prefix ending in itself, base case
vi ending[MAX], exist[MAX];
// exist[i] has all the ending occurrences of the input strings
for(int i=1; i<p.size(); i++)
void insertword(int idx)
{
{
while(now!=-1 && p[now+1]!=p[i])
int len = p[idx].size();
now=pi[now];
int now = root;
FOR(i, 0, len)
if(p[now+1]==p[i]) pi[i]=++now;
{
else pi[i]=now=-1;
if (!node[now][p[idx][i]])
{
if(pi[i]!=-1) // calculate the # of prefixes end in this
node[now][p[idx][i]] = ++nnode;
position of p
node[nnode].clear();
dp[i]=dp[pi[i]]+1;
}
else dp[i]=1;
now = node[now][p[idx][i]];
}
}
}
// which strings end in node number ’now’?
ending[now].pb(idx);
int kmpMatch(string &p, string &t, int *table)
}
{
void populate(int curr)
int now=-1;
{
FOR(i,0,t.size())
// Because ’suffix-links’. It links a node with the longest proper suffix
{
for (auto it : ending[link[curr]])
while(now!=-1 && p[now+1]!=t[i])
ending[curr].pb(it);
now=pi[now];
}
if(p[now+1]==t[i])
void populate(vi &curr, int idx)
{
{
++now;
// So word number it ends in idx-th character of the text
table[i]=dp[now]; // table for text t
for (auto it : curr)
}
{
else now=-1;
exist[it].pb(idx);
if(now+1==p.size())
}
{
}
now=pi[now];
void push_links()
}
{
}
queue<int>q;
}
link[0] = -1;
q.push(0);
while (!q.empty())
{

int u = q.front(); // The solution is to build a graph where vertices denote indices of
q.pop(); strings and an edge
itrALL(node[u], it) // from u to v denotes that string[u] occurs in string[v].
{
char ch = it->first; #define ALPHABET_SIZE 26
int v = it->second; #define MAX_NODE 1e6
int j = link[u]; int n; // number of strings
// use map.find() string in[N], p;
while (j != -1 && !node[j][ch])
j = link[j]; int node[MAX_NODE][ALPHABET_SIZE];
if (j != -1)link[v] = node[j][ch]; int root, nnode, link[MAX_NODE], termlink[MAX_NODE], terminal[MAX_NODE];
else link[v] = 0; bool graph[N][N];
q.push(v);
populate(v); // termlink[u] = a link from node u to a node which is a terminal node
} // terminal node is a node where an ending of an input string occurs
} // terminal[node] = the index of the string which ends in node
}
/* Solution:
void traverse() // For every node of the Aho-Corasick structure find and remember the
{ nearest terminal node (termlink[u]) in the suffix-link path; Once again
int len = s.size(); traverse
int now = root; all strings through Aho-Corasick. Every time new symbol is added, add an
FOR(i, 0, len) arc from the node
{ corresponding to the current string (in the graph we build, not
// use map.find() Aho-Corasick) to
while (now != -1 && !node[now][s[i]]) the node of the graph corresponding to the nearest terminal in the
now = link[now]; suffix-link path;
if (now != -1) now = node[now][s[i]]; The previous step will build all essential arcs plus
else now = 0; some other arcs, but they do not affect the next step in any way;
populate(ending[now], i); Find the transitive closure of the graph.
} */
}
void init()
{
root=0;
nnode=0;
9.3 Aho Corasick Occurrence Relation ms(terminal,-1);
ms(termlink,-1);
}
// Suppose we have n<=1000 strings. Total summation of the length of
these strings
void insertword(int idx)
// can be 1e7. Now we are given queries. In each query, we are given
{
indices of
p=in[idx];
// two strings and asked if one of them occurs in another as a substring.
int len=p.size();
// We need to find this relation efficiently. We will use Aho-Corasick.
int now=root;

q.push(v);
FOR(i,0,len) }
{ }
int x=p[i]-’a’; }

if(!node[now][x]) void buildgraph()


{ {
node[now][x]=++nnode; FOR(i,0,n)
} {
int curr=root;
now=node[now][x];
} FOR(j,0,in[i].size())
{
terminal[now]=idx; // string with index idx ends in now char ch=in[i][j];
} curr=node[curr][(int)ch-’a’];

void push_links() int st=curr;


{ if(terminal[st]==-1) st=termlink[st];
queue<int>q;
link[0]=-1; for(int k=st; k>=0; k=termlink[k])
q.push(0); {
if(terminal[k]==i) continue;
while(!q.empty()) if(graph[i][terminal[k]]) break;
{ graph[i][terminal[k]]=true;
int u=q.front();
q.pop(); // cout<<"edge: "<<i<<" "<<terminal[k]<<endl;
}
for(int i=0; i<ALPHABET_SIZE; i++) }
{ }
if(!node[u][i]) continue; }

int v=node[u][i]; // Finally, find transitive closure of the graph. If O(n^3) is possible,
int j=link[u]; we can use
// use map.find() // Floyd-Warshall. Otherwise, run dfs from each node and add an edge from
while(j!=-1 && !node[j][i]) current starting
j=link[j]; // node to each reachable node. An edge in this transitive closure
denotes the occurrence relation.
if(j!=-1) link[v]=node[j][i];
else link[v]=0;

// Finding nearest terminal nodes 9.4 Aho Corasick


if(terminal[link[v]]!=-1)
termlink[v]=link[v];
int n; // n is the number of dictionary word
else termlink[v]=termlink[link[v]];
string s,p; // dictionary words are inputted in p, s is the traversed text

#define MAX_NODE 250004 {


int u=q.front();
map<char,int> node[MAX_NODE]; // use 2d array maybe? q.pop();
int root, nnode, link[MAX_NODE], endof[504], travis[MAX_NODE];
pii level[MAX_NODE]; itrALL(node[u],it)
{
void init() char ch=it->first;
{ int v=it->second;
root=0; int j=link[u];
nnode=0;
travis[root]=0; // number of time a node is traversed by s // use map.find()
level[root]=MP(0,root); // level, node while(j!=-1 && !node[j][ch])
node[root].clear(); j=link[j];
}
if(j!=-1) link[v]=node[j][ch];
void insertword(int idx) else link[v]=0;
{
int len=p.size(); q.push(v);
int now=root; }
}
FOR(i,0,len) }
{
// use map.find() void traverse()
if(!node[now][p[i]]) {
{ int len=s.size();
node[now][p[i]]=++nnode; int now=root;
node[nnode].clear();
travis[root]++;
travis[nnode]=0;
level[nnode]=MP(level[now].first+1,nnode); FOR(i,0,len)
} {
// use map.find()
now=node[now][p[i]]; while(now!=-1 && !node[now][s[i]])
} now=link[now];

endof[idx]=now; // end of dictionary word idx if(now!=-1) now=node[now][s[i]];


} else now=0;

void push_links() travis[now]++;


{ }
queue<int>q;
link[0]=-1; sort(level,level+nnode+1,greater<pii>());
q.push(0);
FOR(i,0,nnode+1)
while(!q.empty()) {

now=level[i].second; }
travis[link[now]]+=travis[now];
} struct Hash {
} ll h1[MAX], h2[MAX];
int n; // length of s
void driver()
{ Hash(char *s, int n): n(n) {
init(); ll th1 = 0, th2 = 0;
FOR(i,0,n) FOR(i, 0, n) {
{ th1 = (th1 + s[i] * pwr1[i]) % mod1;
// input p th2 = (th2 + s[i] * pwr2[i]) % mod2;
insertword(i); h1[i] = th1;
} h2[i] = th2;
// input s }
push_links(); }
traverse(); Hash() {}
// number of occurence of word i in s is travis[endof[i]] pair<ll, ll> getHash(ll i, ll j) {
}
if(i>j) return {0,0};

ll ret1, ret2;
9.5 Double Hash if (!i) {
ret1 = h1[j];
ret2 = h2[j];
const int p1 = 7919;
}
const int mod1 = 1000004249;
else {
const int p2 = 2203;
// Note: may need to do modinverse
const int mod2 = 1000000289;
// in that case, precalc inv1[] and inv2[]
ret1 = (h1[j] - h1[i - 1]) % mod1;
ll pwr1[MAX+7], pwr2[MAX+7];
if (ret1 < 0) ret1 += mod1;
ret2 = (h2[j] - h2[i - 1]) % mod2;
void precalc()
if (ret2 < 0) ret2 += mod2;
{
}
ll pw1 = 1, pw2 = 1;
return MP(ret1, ret2);
}
FOR(i,0,MAX)
};
{
pwr1[i] = pw1;
pwr2[i] = pw2;

pw1 = (pw1 * p1) % mod1;


9.6 Dynamic Aho Corasick Sample
pw2 = (pw2 * p2) % mod2;
} /* Problem: We have three types of queries: add a string to our
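A hedged single-modulus sketch of the rolling-hash idea behind the Hash struct above (combine two instances with different (p, mod) pairs to get the double hash); the test compares s[0..1] with s[3..4] of "abcab":

```cpp
#include <cassert>
#include <string>
#include <vector>
using namespace std;
typedef long long ll;

// Prefix hashes under one (base, mod) pair. Two PolyHash instances with
// independent parameters give the double hash used in this section.
struct PolyHash {
    vector<ll> h, pw;
    ll p, mod;
    PolyHash(const string& s, ll p, ll mod)
        : h(s.size() + 1, 0), pw(s.size() + 1, 1), p(p), mod(mod) {
        for (size_t i = 0; i < s.size(); i++) {
            h[i + 1] = (h[i] * p + s[i]) % mod;
            pw[i + 1] = pw[i] * p % mod;
        }
    }
    ll get(int i, int j) {            // hash of s[i..j], inclusive, O(1)
        ll v = (h[j + 1] - h[i] * pw[j - i + 1]) % mod;
        return v < 0 ? v + mod : v;
    }
};
```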
dictionary,
pwr1[MAX] = pw1; delete an existing string from our dictionary, and for the given string s
pwr2[MAX] = pw2; find

the number of occurrences of the strings from the dictionary. If some // insert a word in automaton number ’id’.
string p void insertword(string &s, int val)
from dictionary has several occurrences in s, we should count all of them. {
int len = s.size();
Solution: If we have N strings in the dictionary, maintain log(N) Aho int now = root;
Corasick
automata. The i-th automata contains the first 2^k strings not included FOR(i,0,len)
in the {
previous automata. For example, if we have N = 19, we need 3 automata: int nxt = s[i]-’a’;
{s[1]...s[16]},
{s[17]...s[18]}, and {s[19]}. To answer the query, we can traverse the if(!node[id][now][nxt])
logN automata {
using the given query string. node[id][now][nxt] = ++nnode;
ms(node[id][nnode],0);
To handle addition, first construct an automata using the single string, cnt[nnode]=0;
and then }
while there are two automatons with the same number of strings, we merge now=node[id][now][nxt];
them by }
constructing a new automaton using brute force. cnt[now]+=val;
Complexity becomes // an occurrence of a string happened in ’now’
O(total_length_of_all_string*log(number_of_insert_operations)). // if val=-1, it means an occurrence is removed
}
To handle deletion, we just insert with a value -1 to store in endpoints void insertdict(vector<string> &dict, vector<int> &vals)
of each {
added string. // dict is the dictionary for current automaton
*/ // and vals can be 1 or -1 denoting addition or deletion of
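The log-structured merging described above is easiest to see on a simpler structure. This sketch (an illustration of the scheme, not Aho-Corasick itself) keeps O(log n) sorted blocks, rebuilds by merging equal-sized blocks on insert, and answers a query by consulting every block:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>
using namespace std;

// Logarithmic method: blocks have distinct power-of-two sizes, like a binary
// counter; an insert "carries" by merging equal-sized blocks.
struct LogStructure {
    vector<vector<int> > blocks;
    void insert(int x) {
        vector<int> cur(1, x);
        while (!blocks.empty() && blocks.back().size() == cur.size()) {
            vector<int> merged(cur.size() * 2);
            merge(cur.begin(), cur.end(),
                  blocks.back().begin(), blocks.back().end(), merged.begin());
            blocks.pop_back();
            cur.swap(merged);
        }
        blocks.push_back(cur);
    }
    int countLE(int x) {              // how many inserted values are <= x
        int cnt = 0;
        for (size_t i = 0; i < blocks.size(); i++)
            cnt += upper_bound(blocks[i].begin(), blocks[i].end(), x)
                   - blocks[i].begin();
        return cnt;
    }
};
```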
const int N = 3e5+7; // maximum number of nodes corresponding string
const int ALPHA = 26; // alphabet size FOR(i,0,dict.size()) insertword(dict[i],vals[i]);
int node[20][N][ALPHA]; // stores nodes for id-th automaton }
struct ahoCorasick void pushLinks()
{ {
int root, nnode; queue<int> Q; link[root]=-1;
int link[N], cnt[N], id; Q.push(root);
bool dead; while(!Q.empty())
{
void init(int idx) int u = Q.front(); Q.pop();
{
dead = false; id = idx; root = 0; nnode = 0; for(int i=0; i<ALPHA; i++)
ms(node[id][root],0); {
FOR(i,0,nnode+1) cnt[i] = 0; if(!node[id][u][i]) continue;
} int v = node[id][u][i];
void clear() int l = link[u];
{
dead = true; while(l!=-1 && !node[id][l][i]) l = link[l];
} if(l!=-1) link[v] = node[id][l][i];

else link[v] = 0; aho[i].pushLinks();


cnt[v]+=cnt[link[v]]; break;
Q.push(v); }
} }
} }
}
// Returns how many occurrences of dictionary are there in query int n, t;
string p char in[N];
int query(string &p) void Test()
{ {
int u = root, ret = 0; ahoCorasick aho;
for(char ch: p) aho.init(0);
{ string s = "o"; aho.insertword(s,1);
int nxt = ch-’a’; s = "m"; aho.insertword(s,1);
while(u!=-1 && !node[id][u][nxt]) u = link[u]; s = "h"; aho.insertword(s,1);
if(u==-1) u = 0; aho.pushLinks();
else u = node[id][u][nxt]; s = "moohh";
ret+=cnt[u]; prnt(aho.query(s));
} }
return ret; int main()
} {
} aho[20]; // Test();
FOR(i,0,20) aho[i].clear();
vector<string> dict[20]; scanf("%d", &n);
vector<int> vals[20]; while(n--)
// handles addition and deletion dynamically {
void add(string &s, int val) scanf("%d%s", &t, in);
{ string x = in;
dict[0].pb(s); if(t==1) add(x,1);
vals[0].pb(val); else if(t==2) add(x,-1)
for(int i=0; i<20; i++) else
{ {
if(dict[i].size()>(1<<i)) // merging two automata ll ans = 0;
{ FOR(i,0,20) if(!aho[i].dead) ans+=aho[i].query(x);
for(auto it: dict[i]) dict[i+1].pb(it); printf("%lld\n", ans);
for(auto it: vals[i]) vals[i+1].pb(it); fflush(stdout); // needed because the problem forced
dict[i].clear(); online solution
vals[i].clear(); }
aho[i].clear(); // i-th automata is not relevant }
anymore return 0;
} }
else
{
aho[i].init(i);
aho[i].insertdict(dict[i],vals[i]); 9.7 Dynamic Aho Corasick

inline void insert(const char* str){


#include <bits/stdtr1c++.h> int j, x, cur = 0;
for (j = 0; str[j] != 0; j++){
#define LOG 19 x = edge[str[j]];
#define LET 26 if (!trie[cur].count(x)){
#define MAX 300010 int next_node = node();
#define clr(ar) memset(ar, 0, sizeof(ar)) trie[cur][x] = next_node;
#define read() freopen("lol.txt", "r", stdin) }
#define dbg(x) cout << #x << " = " << x << endl cur = trie[cur][x];
}
using namespace std;
leaf[cur]++;
struct aho_corasick{ dictionary.push_back(str);
int id, edge[256]; }
vector <long long> counter;
vector <string> dictionary; inline void build(){ /// remember to call build
vector <map<char, int> > trie; vector <pair<int, pair<int, int> > > Q;
vector <int> leaf, fail, dp[LET]; fail.resize(id, 0);
Q.push_back({0, {0, 0}});
inline int node(){
leaf.push_back(0); for (int i = 0; i < LET; i++) dp[i].resize(id, 0);
counter.push_back(0); for (int i = 0; i < id; i++){
trie.push_back(map<char, int>()); for (int j = 0; j < LET; j++){
return id++; dp[j][i] = i;
} }
}
inline int size(){
return dictionary.size(); for(int i = 0; i < Q.size(); i++){
} int u = Q[i].first;
int p = Q[i].second.first;
void clear(){ char c = Q[i].second.second;
trie.clear(); for(auto& it: trie[u]) Q.push_back({it.second, {u, it.first}});
dictionary.clear();
fail.clear(), leaf.clear(), counter.clear(); if (u){
for (int i = 0; i < LET; i++) dp[i].clear(); int f = fail[p];
while (f && !trie[f].count(c)) f = fail[f];
id = 0, node(); if(!trie[f].count(c) || trie[f][c] == u) fail[u] = 0;
for (int i = ’a’; i <= ’z’; i++) edge[i] = i - ’a’; /// change else fail[u] = trie[f][c];
here if different character set counter[u] = leaf[u] + counter[fail[u]];
}
for (int j = 0; j < LET; j++){
aho_corasick(){ if (u && !trie[u].count(j)) dp[j][u] = dp[j][fail[u]];
clear(); }
} }
}

} long long res = 0;


for (int i = 0; i < LOG; i++) res += ar[i].count(str);
inline int next(int cur, char ch){ return res;
int x = edge[ch]; }
cur = dp[x][cur]; };
if (trie[cur].count(x)) cur = trie[cur][x];
return cur; char str[MAX];
}
int main(){
long long count(const char* str){ /// total number of occurrences of dynamic_aho ar[2];
all words from dictionary in str int t, i, j, k, l, flag;
int cur = 0;
long long res = 0; scanf("%d", &t);
while (t--){
for (int j = 0; str[j] && id > 1; j++){ /// id > 1 because build scanf("%d %s", &flag, str);
will not be called if empty dictionary in dynamic aho corasick if (flag == 3){
cur = next(cur, str[j]); printf("%lld\n", ar[0].count(str) - ar[1].count(str));
res += counter[cur]; fflush(stdout);
} }
return res; else ar[flag - 1].insert(str);
} }
}; return 0;
}
struct dynamic_aho{ /// dynamic aho corasick in N log N
aho_corasick ar[LOG];

dynamic_aho(){ 9.8 KMP 2


for (int i = 0; i < LOG; i++) ar[i].clear();
}
char text[MAX], patt[MAX];
int pi[MAX], n, m;
inline void insert(const char* str){
int i, k = 0;
void Process()
for (k = 0; k < LOG && ar[k].size(); k++){}
{
int now=-1;
ar[k].insert(str);
pi[0]=-1;
for (i = 0; i < k; i++){
for (auto s: ar[i].dictionary){
for(int i=1; i<m; i++)
ar[k].insert(s.c_str());
{
}
while(now!=-1 && patt[now+1]!=patt[i])
ar[i].clear();
now=pi[now];
}
if(patt[now+1]==patt[i]) pi[i]=++now;
ar[k].build();
else pi[i]=now=-1;
}
}
}
long long count(const char* str){

void Search() int now;


{ pi[0]=now=-1;
int now=-1;
for(int i=1; i<p.size(); i++)
for(int i=0; i<n; i++) {
{ while(now!=-1 && p[now+1]!=p[i])
while(now!=-1 && patt[now+1]!=text[i]) now=pi[now];
now=pi[now];
if(patt[now+1]==text[i]) ++now; if(p[now+1]==p[i]) pi[i]=++now;
else now=-1; else pi[i]=now=-1;
if(now==m-1) }
{ }
cout<<"match at "<<i-now<<endl;
now=pi[now]; // match again int kmpMatch()
} {
} int now=-1;
} FOR(i,0,t.size())
{
int main() cout<<"now: "<<i<<" "<<now<<endl;
{ while(now!=-1 && p[now+1]!=t[i])
// ios_base::sync_with_stdio(0); now=pi[now];
// cin.tie(NULL); cout.tie(NULL); if(p[now+1]==t[i])
// freopen("in.txt","r",stdin); {
++now;
cin>>text>>patt; cnt[now]++;
}
n=strlen(text); m=strlen(patt); else now=-1;
if(now+1==p.size())
Process(); {
Search(); // match found
// cout<<"match and setting "<<now<<" to
// FOR(i,0,m) cout<<pi[i]<<" "; cout<<endl; "<<pi[now]<<endl;
now=pi[now]; // match again
return 0; }
} }
}

int main()
9.9 KMP 3 {
// ios_base::sync_with_stdio(0);
// cin.tie(NULL); cout.tie(NULL);
string p, t;
// freopen("in.txt","r",stdin);
int pi[MAX], cnt[MAX];
cin>>t>>p;
void prefixFun()
{

prefixFun(); char str[100];


FOR(i,0,p.size()) cout<<pi[i]<<" "; cout<<endl; while (scanf("%s", str)){
prnt(kmpMatch()); auto v = manacher(str);
FOR(i,0,p.size()) cout<<cnt[i]<<" "; cout<<endl; for (auto it: v) printf("%d ", it);
FORr(i,p.size()-1,0) puts("");
{ }
if(pi[i]==-1) continue; return 0;
cnt[pi[i]]+=cnt[i]; }
}
FOR(i,0,p.size()) cout<<cnt[i]<<" "; cout<<endl;

return 0; 9.11 Minimum Lexicographic Rotation


}
#include <stdio.h>
#include <string.h>
#include <stdbool.h>
9.10 Manacher's Algorithm
#define clr(ar) memset(ar, 0, sizeof(ar))
#include <bits/stdtr1c++.h> #define read() freopen("lol.txt", "r", stdin)
using namespace std;
/*** Manacher's algorithm to generate longest palindromic substrings for /// Lexicographically Minimum String Rotation
all centers ***/ int minlex(char* str){ /// Returns the 0-based index
/// When i is even, pal[i] = largest palindromic substring centered from int i, j, k, n, len, x, y;
str[i / 2] len = n = strlen(str), n <<= 1, i = 0, j = 1, k = 0;
/// When i is odd, pal[i] = largest palindromic substring centered
between str[i / 2] and str[i / 2] + 1 while((i + k) < n && (j + k) < n) {
x = i + k >= len ? str[i + k - len] : str[i + k];
vector <int> manacher(char *str){ /// hash = 784265 y = j + k >= len ? str[j + k - len] : str[j + k];
int i, j, k, l = strlen(str), n = l << 1; if(x == y) k++;
vector <int> pal(n); else if (x < y){
j += ++k, k = 0;
for (i = 0, j = 0, k = 0; i < n; j = max(0, j - k), i += k){ if (i >= j) j = i + 1;
while (j <= i && (i + j + 1) < n && str[(i - j) >> 1] == str[(i + }
j + 1) >> 1]) j++; else{
for (k = 1, pal[i] = j; k <= i && k <= pal[i] && (pal[i] - k) != i += ++k, k = 0;
pal[i - k]; k++){ if (j >= i) i = j + 1;
pal[i + k] = min(pal[i - k], pal[i] - k); }
} }
}
return (i < j) ? i : j;
pal.pop_back(); }
return pal;
} int t;
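For validating manacher() on small inputs, a quadratic expand-around-center reference is handy; this is a brute-force checker, not the O(n) algorithm:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
using namespace std;

// O(n^2) longest palindromic substring length; useful as a reference
// implementation to test the linear-time routine against.
int longest_pal(const string& s) {
    int best = 0, n = s.size();
    for (int c = 0; c < n; c++) {
        for (int par = 0; par < 2; par++) {   // odd / even centers
            int l = c, r = c + par;
            while (l >= 0 && r < n && s[l] == s[r]) l--, r++;
            best = max(best, r - l - 1);
        }
    }
    return best;
}
```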
char str[50010];
int main(){

int main(){ /// v[i] is the minimum k so that the prefix string str[0:i] can be
gets(str); partitioned into k disjoint palindromes
while (gets(str)){ inline vector <int> factorize(const char* str){
printf("%d\n", minlex(str)); int g[32][3], gp[32][3], gpp[32][3];
} int i, j, k, l, d, u, r, x, pg = 0, pgp = 0, pgpp = 0, n =
return 0; strlen(str);
}
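The two-pointer minimum-rotation routine above, restated on std::string; for "baca" the smallest rotation "abac" starts at index 3:

```cpp
#include <cassert>
#include <string>
using namespace std;

// 0-based index of the lexicographically smallest rotation.
// Same two-candidate scheme as minlex above, on the doubled index range.
int min_rotation(const string& s) {
    int len = s.size(), n = len * 2, i = 0, j = 1, k = 0;
    while (i + k < n && j + k < n) {
        char x = s[(i + k) % len], y = s[(j + k) % len];
        if (x == y) k++;
        else if (x < y) {
            j += ++k; k = 0;
            if (i >= j) j = i + 1;
        } else {
            i += ++k; k = 0;
            if (j >= i) i = j + 1;
        }
    }
    return i < j ? i : j;
}
```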
clr(g), clr(gp), clr(gpp);
for (int i = 0; i < n; i++) gpl[i][0] = MAX, gpl[i][1] = MAX + 1;

9.12 Palindrome Factorization for (j = 0; j < n; j++){


for (u = 0, pgp = 0; u < pg; u++){
i = g[u][0];
#include <bits/stdtr1c++.h>
if ((i - 1) >= 0 && str[i - 1] == str[j]){
g[u][0]--;
#define MAX 1000010
copy(gp[pgp++], g[u]);
#define clr(ar) memset(ar, 0, sizeof(ar))
}
#define read() freopen("lol.txt", "r", stdin)
}
#define dbg(x) cout << #x << " = " << x << endl
#define ran(a, b) ((((rand() << 15) ^ rand()) % ((b) - (a) + 1)) + (a))
pgpp = 0, r = -(j + 2);
for (u = 0; u < pgp; u++){
using namespace std;
i = gp[u][0], d = gp[u][1], k = gp[u][2];
if ((i - r) != d){
/// Minimum palindromic factorization for all prefixes in O(N log N)
set(gpp[pgpp++], i, i - r, 1);
if (k > 1) set(gpp[pgpp++], i + d, d, k - 1);
namespace pal{
}
int pl[MAX][2], gpl[MAX][2];
else set(gpp[pgpp++], i, d, k);
r = i + (k - 1) * d;
inline void set(int *ar, int x, int y, int z) {
}
ar[0] = x, ar[1] = y, ar[2] = z;
}
if (j - 1 >= 0 && str[j - 1] == str[j]){
set(gpp[pgpp++], j - 1, j - 1 - r, 1);
inline void set(int ar[][2], int i, int v) {
r = j - 1;
if (v > 0) ar[i][v & 1] = v;
}
}
set(gpp[pgpp++], j, j - r, 1);
inline void copy(int *A, int *B) {
int *ptr = gpp[0];
A[0] = B[0], A[1] = B[1], A[2] = B[2];
for (u = 1, pg = 0; u < pgpp; u++){
}
int *x = gpp[u];
if (x[1] == ptr[1]) ptr[2] += x[2];
inline void update(int ar[][2], int i, int v) {
else {
if (v > 0 && (ar[i][v & 1] == -1 || ar[i][v & 1] > v)) ar[i][v &
copy(g[pg++], ptr);
1] = v;
ptr = x;
}
}
}
/// Returns a vector v such that,
150

            copy(g[pg++], ptr);

            pl[j + 1][(j & 1) ^ 1] = j + 1;
            pl[j + 1][j & 1] = MAX + (j & 1);
            for (u = 0; u < pg; u++){
                i = g[u][0], d = g[u][1], k = g[u][2];
                r = i + (k - 1) * d;

                update(pl, j + 1, pl[r][0] + 1);
                update(pl, j + 1, pl[r][1] + 1);
                if (k > 1) {
                    update(pl, j + 1, gpl[i + 1 - d][0]);
                    update(pl, j + 1, gpl[i + 1 - d][1]);
                }

                if (i + 1 >= d) {
                    if (k > 1) {
                        update(gpl, i + 1 - d, pl[r][0] + 1);
                        update(gpl, i + 1 - d, pl[r][1] + 1);
                    }
                    else{
                        set(gpl, i + 1 - d, pl[r][0] + 1);
                        set(gpl, i + 1 - d, pl[r][1] + 1);
                    }
                }
            }
        }

        vector <int> res(n, 0);
        for (i = 0; i < n; i++) res[i] = min(pl[i + 1][0], pl[i + 1][1]);
        return res;
    }
}

int main(){
}


9.13 Palindromic Tree

class PalindromicTree
{
public:
    int s[MAX], Link[MAX], Len[MAX], Edge[MAX][26];
    int node, lastPal, n;
    ll cnt[MAX];

    void init()
    {
        s[n++]=-1;
        Link[0]=1; Len[0]=0;
        Link[1]=1; Len[1]=-1;
        node=2;
    }

    int getLink(int v)
    {
        while(s[n-Len[v]-2]!=s[n-1]) v=Link[v];
        return v;
    }

    void addLetter(int c)
    {
        // cout<<char(c+'a')<<" "<<n<<endl;

        s[n++]=c;
        lastPal=getLink(lastPal);

        if(!Edge[lastPal][c])
        {
            Len[node]=Len[lastPal]+2;
            Link[node]=Edge[getLink(Link[lastPal])][c];
            cnt[node]++;
            Edge[lastPal][c]=node++;
        }
        else
        {
            cnt[Edge[lastPal][c]]++;
        }

        lastPal=Edge[lastPal][c];
    }

    void clear()
    {
        FOR(i,0,node+1)
        {
            cnt[i]=0;

            ms(Edge[i],0);
        }
        n=0;
        lastPal=0;
    }
} PTA;


9.14 String Split by Delimiter

template<typename Out>
void split(const std::string &s, char delim, Out result) {
    std::stringstream ss(s);
    std::string item;
    while (std::getline(ss, item, delim)) {
        *(result++) = item;
    }
}

std::vector<std::string> split(const std::string &s, char delim) {
    std::vector<std::string> elems;
    split(s, delim, std::back_inserter(elems));
    return elems;
}

// Continuous input
string line;
while( getline(cin,line) )
{
    stringstream ss( line ); // initializing
    int num; vector< int > v;
    while( ss >> num ) v.push_back( num );
    sort( v.begin(), v.end() );
    // print routine
}


9.15 Suffix Array 2

// You are given two strings A and B, consisting only of lowercase
// letters from the English alphabet.
// Count the number of distinct strings S, which are substrings of A, but
// not substrings of B

LL substr_count(int n,char *s)
{
    VI cnt(128);
    for(int i=0;i<n;i++)
        cnt[s[i]]++;
    for(int i=1;i<128;i++)
        cnt[i]+=cnt[i-1];
    VI p(n);
    for(int i=0;i<n;i++)
        p[--cnt[s[i]]]=i;
    VVI c(1,VI(n));
    int w=0;
    for(int i=0;i<n;i++)
    {
        if(i==0 || s[p[i]]!=s[p[i-1]]) w++;
        c[0][p[i]] = w-1;
    }

    for(int k=0,h=1;h<n;k++,h*=2)
    {
        VI pn(n);
        for(int i=0;i<n;i++) {
            pn[i] = p[i] - h;
            if(pn[i]<0) pn[i] += n;
        }
        VI cnt(w,0);
        for(int i=0;i<n;i++)
            cnt[c[k][pn[i]]]++;
        for(int i=1;i<w;i++)
            cnt[i]+=cnt[i-1];
        for(int i=n;i--;)
            p[--cnt[c[k][pn[i]]]]=pn[i];
        w=0;
        c.push_back(VI(n));
        for(int i=0;i<n;i++)
        {
            if(i==0 || c[k][p[i]] != c[k][p[i-1]]) {
                w++;
            } else {
                int i1 = p[i] + h; if(i1>=n) i1-=n;
                int i2 = p[i-1] + h; if(i2>=n) i2-=n;
                if(c[k][i1]!=c[k][i2]) w++;
            }
            c[k+1][p[i]] = w-1;

        }
    }

    LL ans = LL(n)*(n-1)/2;
    for(int k=1;k<n;k++)
    {
        int i=p[k];
        int j=p[k-1];
        int cur = 0;
        for (int h=c.size(); h--;)
            if (c[h][i] == c[h][j]) {
                cur += 1<<h;
                i += 1<<h;
                j += 1<<h;
            }
        ans-=cur;
    }
    return ans;
}

char s[200005];
int n, m;

void input()
{
    scanf("%s", s);

    n=strlen(s)+1;
    s[n-1]='a'-1;

    scanf("%s", s+n);

    m=strlen(s+n)+1;
    s[n+m-1]='a'-2;

    s[n+m]=0;
}

void solve()
{
    ll p=substr_count(n,s);
    ll r=substr_count(m,s+n);
    ll q=substr_count(n+m,s)-(ll)n*m;

    // cout<<p<<" "<<r<<" "<<q<<endl;

    ll t=q-p-r;

    t=abs(t);

    prnt(p-t);
}

int main()
{
    // ios_base::sync_with_stdio(0);
    // cin.tie(NULL); cout.tie(NULL);
    // freopen("in.txt","r",stdin);

    int test, cases=1;

    input();
    solve();

    return 0;
}


9.16 Suffix Array

// sa[i] -> ith smallest suffix of the string (indexed from 1)
// height[i] -> Longest common substring between Suffix(sa[i]) and Suffix(sa[i-1]), indexed from i=2.
// rak[i] -> The position of i th index of the main string in suffix array.
// rak[6]=1 means 6th suffix is in 1st position in sa

const int N = 2e6+5;
int wa[N],wb[N],wv[N],wc[N];
int r[N],sa[N],rak[N], height[N], lg[N];

int cmp(int *r,int a,int b,int l)
{
    return r[a] == r[b] && r[a+l] == r[b+l];
}

void da(int *r,int *sa,int n,int m)
{
    int i,j,p,*x=wa,*y=wb,*t;

    for( i=0;i<m;i++) wc[i]=0;
    for( i=0;i<n;i++) wc[x[i]=r[i]] ++;
    for( i=1;i<m;i++) wc[i] += wc[i-1];
    for( i= n-1;i>=0;i--)sa[--wc[x[i]]] = i;
    for( j= 1,p=1;p<n;j*=2,m=p){
        for(p=0,i=n-j;i<n;i++)y[p++] = i;
        for(i=0;i<n;i++)if(sa[i] >= j) y[p++] = sa[i] - j;
        for(i=0;i<n;i++)wv[i] = x[y[i]];
        for(i=0;i<m;i++) wc[i] = 0;
        for(i=0;i<n;i++) wc[wv[i]] ++;
        for(i=1;i<m;i++) wc[i] += wc[i-1];
        for(i=n-1;i>=0;i--) sa[--wc[wv[i]]] = y[i];
        for(t=x,x=y,y=t,p=1,x[sa[0]] = 0,i=1;i<n;i++) x[sa[i]]= cmp(y,sa[i-1],sa[i],j) ? p-1:p++;
    }
}

void calheight(int *r,int *sa,int n)
{
    int i,j,k=0;
    for(i=1;i<=n;i++) rak[sa[i]] = i;
    for(i=0;i<n;height[rak[i++]] = k ){
        for(k?k--:0, j=sa[rak[i]-1] ; r[i+k] == r[j+k] ; k ++) ;
    }
}

int dp[N][22];

void initRMQ(int n)
{
    for(int i= 1;i<=n;i++) dp[i][0] = height[i];
    for(int j= 1; (1<<j) <= n; j ++ ){
        for(int i = 1; i + (1<<j) - 1 <= n ; i ++ ) {
            dp[i][j] = min(dp[i][j-1] , dp[i + (1<<(j-1))][j-1]);
        }
    }
}

int askRMQ(int L,int R)
{
    int k = lg[R-L+1];
    // int k=0;
    // while((1<<(k+1)) <= R-L+1) k++;
    return min(dp[L][k], dp[R - (1<<k) + 1][k]);
}

// Precalculate powers of two to answer askRMQ in O(1)
int preclg2()
{
    for(int i=2; i<N; i++)
    {
        lg[i]=lg[i-1];
        if((i&(i-1))==0) lg[i]++;
    }
}

int main()
{
    string s; cin>>s;
    int n=s.size(), cnt=0;

    FOR(i,0,s.size())
    {
        r[i]=s[i]-'a'+1;
        // prnt(r[i]);
        cnt=max(cnt,r[i]);
    }

    r[n]=0; // This is very important, if there are test cases!
    da(r,sa,n+1,cnt+1); // cnt+1 is must, cnt=max of r[i]
    calheight(r,sa,n);

    for(int i=1; i<=n; i++)
        printf("sa[%d] = %d\n", i, sa[i]);

    for(int i=2; i<=n; i++)
        printf("height[%d] = %d\n", i, height[i]);

    for(int i=1; i<=n; i++)
        printf("rank[%d] = %d\n", sa[i], rak[sa[i]]);

    // Must call initRMQ(len)
    // To find lcp between any two suffix i and j, call askRMQ(L+1,R)
    // where L=min(rak[sa[i]],rak[sa[j]]), R=max(rak[sa[i]],rak[sa[j]]).

    /* A Reminder: Sometimes when we concatenate strings, we do that by
       adding separators. We might need to add same separator or different
       separators.
       And it might also need to add a separator at the end of the total
       strings.
    */

    return 0;
}


9.17 Suffix Automata 2

struct state {
    ll len, link;
    map<char,ll, less<char> >next; // use less for kth lexicographical string evaluation
};

state st[MAX*2];
ll sz, last;

void sa_init() {
    sz = last = 0;
    st[0].len = 0;
    st[0].link = -1;
    st[0].next.clear();
    ++sz;
}

ll cnt[MAX*2];

ll distSubtringCount;

void sa_extend (char c) {
    ll cur = sz++;
    st[cur].next.clear();
    cnt[cur] = 1;
    st[cur].len = st[last].len + 1;
    ll p;
    for (p=last; p!=-1 && !st[p].next.count(c); p=st[p].link){
        st[p].next[c] = cur;
    }
    if (p == -1){
        st[cur].link = 0;
    }
    else {
        ll q = st[p].next[c];
        if (st[p].len + 1 == st[q].len){
            st[cur].link = q;
        }
        else {
            ll clone = sz++;
            st[clone].len = st[p].len + 1;
            st[clone].next = st[q].next;
            st[clone].link = st[q].link;
            for (; p!=-1 && st[p].next[c]==q; p=st[p].link){
                st[p].next[c] = clone;
            }
            distSubtringCount-=st[q].len-st[st[q].link].len;
            st[q].link = st[cur].link = clone;
            distSubtringCount+=st[q].len-st[st[q].link].len;
            distSubtringCount+=st[clone].len-st[st[clone].link].len;
        }
    }
    last = cur;
    distSubtringCount+=st[cur].len-st[st[cur].link].len;
}

void calc_cnt(){
    vpl sorter;
    f(i,0,sz) sorter.pb(mp(st[i].len, i));
    sort(all(sorter));
    fd(i,sz-1,-1){
        ll k = sorter[i].second;
        cnt[st[k].link] += cnt[k];
    }
}

ll get_cnt(string s){
    ll now = 0;
    f(i,0,s.size()){
        if(!st[now].next.count(s[i])) return 0;
        now = st[now].next[s[i]];
    }
    return cnt[now];
}

ll first_occur(string s){
    ll now = 0;
    f(i,0,s.size()){
        if(!st[now].next.count(s[i])) return -1;
        now = st[now].next[s[i]];
    }
    return st[now].len - s.size();
}

string lcs (string s, string t) {
    sa_init();
    for (int i=0; i<(int)s.length(); ++i)
        sa_extend (s[i]);

    int v = 0, l = 0,
        best = 0, bestpos = 0;
    for (int i=0; i<(int)t.length(); ++i) {
        while (v && ! st[v].next.count(t[i])) {
            v = st[v].link;
            l = st[v].len;
        }
        if (st[v].next.count(t[i])) {
            v = st[v].next[t[i]];
            ++l;
        }
        if (l > best)
            best = l, bestpos = i;
    }
    return t.substr (bestpos-best+1, best);
}

// Kth lexicographically smallest string
ll dp[MAX]; // dp[i] = number of different substrings starting from i

ll F(ll u){
    if(dp[u] != -1) return dp[u];
    dp[u] = 1;
    for(map <char, ll> :: iterator it = st[u].next.begin(); it != st[u].next.end(); it++){
        dp[u] += F(it->second);
    }
    return dp[u];
}

string klex(ll u, ll k){
    if(!k) return "";
    for(map <char, ll> :: iterator it = st[u].next.begin(); it != st[u].next.end(); it++){
        ll num = F(it->second);
        if(num < k) k -= num;
        else{
            string ret;
            ret.pb(it->first);
            ret = ret + klex(it->second, k-1);
            return ret;
        }
    }
}

ll min_cyclic_shift(string s){
    sa_init();
    f(i,0,s.size()) sa_extend(s[i]);
    f(i,0,s.size()) sa_extend(s[i]);
    ll now = 0;
    f(i,0,s.size()) now = st[now].next.begin()->second;
    return st[now].len - s.size();
}

char s[MAX];

int main(){
    string s;
    cin >> s;
    cout << min_cyclic_shift(s) << endl;
}


9.18 Suffix Automata

// Counts number of distinct substrings

struct suffix_automaton
{
    map<char, int> to[MAX];
    int len[MAX], link[MAX];
    int last, psz = 0;
    void add_letter(char c)
    {

        int p = last, cl, q;

        if(to[p].count(c))
        {
            q = to[p][c];
            if(len[q] == len[p] + 1)
            {
                last = q;
                return;
            }

            cl = psz++;
            len[cl] = len[p] + 1;
            to[cl] = to[q];
            link[cl] = link[q];
            link[q] = cl;
            last = cl;

            for(; to[p][c] == q; p = link[p])
                to[p][c] = cl;

            return;
        }

        last = psz++;
        len[last] = len[p] + 1;

        for(; to[p][c] == 0; p = link[p])
            to[p][c] = last;

        if(to[p][c] == last)
        {
            link[last] = p;
            return;
        }

        q = to[p][c];
        if(len[q] == len[p] + 1)
        {
            link[last] = q;
            return;
        }

        cl = psz++;
        len[cl] = len[p] + 1;
        to[cl] = to[q];
        link[cl] = link[q];
        link[q] = cl;
        link[last] = cl;

        for(; to[p][c] == q; p = link[p])
            to[p][c] = cl;
    }

    void clear()
    {
        for(int i = 0; i < psz; i++)
            len[i] = 0, link[i] = 0, to[i].clear();
        psz = 1;
        last = 0;
    }

    void init(string s)
    {
        clear();
        for(int i = 0; i < s.size(); i++)
            add_letter(s[i]);
    }

    suffix_automaton() {psz = 0; clear();}
};

string s;
suffix_automaton SA;
ll cnt[MAX];
vi endpos[MAX];

int main()
{
    // ios_base::sync_with_stdio(0);
    // cin.tie(NULL); cout.tie(NULL);
    // freopen("in.txt","r",stdin);

    int test, cases=1;

    cin>>s;

    SA.clear();
    FOR(i,0,s.size())
        SA.add_letter(s[i]), cnt[SA.last]++;

    FOR(i,0,SA.psz)
    {
        endpos[SA.len[i]].pb(i);
    }

    ll ans=0;

    FORr(i,SA.psz-1,1)
    {
        for(auto it: endpos[i])
        {
            cnt[SA.link[it]]+=cnt[it];
            ans+=(SA.len[it]-SA.len[SA.link[it]]); // distinct occurrences
            // cnt[it] has occurrence of substring ending at node it
        }
    }

    // cnt[x] has occurrences of state x
    // To calculate occurrence of an input string, we visit the automata
    // using the letters of the input string and find the last_state where it finishes
    // The cnt[last_state] should be the occurrence of this string

    prnt(ans);

    return 0;
}


9.19 Trie 1

struct Node
{
    int cntL, cntR, lIdx, rIdx;
    Node()
    {
        cntL = cntR = 0;
        lIdx = rIdx = -1;
    }
};
Node Tree[MAX];
int globalIdx = 0;
class Trie
{
public:
    void insert(int val, int idx, int depth)
    {
        for (int i = depth - 1; i >= 0; i--)
        {
            bool bit = val & (1 << i);
            // cout<<"bit now: "<<bit<<endl;
            if (bit)
            {
                Tree[idx].cntR++;
                if (Tree[idx].rIdx == -1)
                {
                    Tree[idx].rIdx = ++globalIdx;
                    idx = globalIdx;
                }
                else idx = Tree[idx].rIdx;
            }
            else
            {
                Tree[idx].cntL++;
                if (Tree[idx].lIdx == -1)
                {
                    Tree[idx].lIdx = ++globalIdx;
                    idx = globalIdx;
                }
                else idx = Tree[idx].lIdx;
            }
        }
    }
    int query(int val, int compVal, int idx, int depth)
    {
        int ans = 0;
        for (int i = depth - 1; i >= 0; i--)
        {
            bool valBit = val & (1 << i);
            bool compBit = compVal & (1 << i);
            if (compBit)
            {
                if (valBit)
                {
                    ans += Tree[idx].cntR;
                    idx = Tree[idx].lIdx;
                }
                else

                {
                    ans += Tree[idx].cntL;
                    idx = Tree[idx].rIdx;
                }
            }
            else
            {
                if (valBit)
                {
                    idx = Tree[idx].rIdx;
                }
                else
                {
                    idx = Tree[idx].lIdx;
                }
            }
            if (idx == -1) break;
        }
        return ans;
    }
    void clear()
    {
        for (int i = 0; i <= globalIdx; i++)
        {
            Tree[i].cntL = 0;
            Tree[i].cntR = 0;
            Tree[i].rIdx = -1;
            Tree[i].lIdx = -1;
        }
        globalIdx = 0;
    }
};
int main()
{
    // ios_base::sync_with_stdio(0);
    // cin.tie(NULL); cout.tie(NULL);
    // freopen("in.txt","r",stdin);

    // Given an array of positive integers you have to print the
    // number of subarrays whose XOR is less than K.
    int test, n, k, x;
    Trie T;
    scanf("%d", &test);
    while (test--)
    {
        scanf("%d%d", &n, &k);
        T.insert(0, 0, 20);
        int pre = 0;
        ll ans = 0;
        FOR(i, 0, n)
        {
            scanf("%d", &x);
            pre ^= x;
            // prnt(pre);
            ans += T.query(pre, k, 0, 20);
            T.insert(pre, 0, 20);
        }
        printf("%lld\n", ans);
        T.clear();
    }
    return 0;
}


9.20 Trie 2

const int MaxN = 100005;
int sz;

int nxt[MaxN][55];
int en[MaxN];

bool isSmall(char ch)
{
    return ch>='a' && ch<='z';
}

int getId(char ch)
{
    if(isSmall(ch)) return ch-'a';
    else return ch-'A'+26;
}

void insert (char *s, int l)
{
    int v = 0;

    for (int i = 0; i < l; ++i) {

        int c=getId(s[i]);

        if (nxt[v][c]==-1)
        {
            ms(nxt[sz],-1);
            nxt[v][c]=sz++;
            en[sz]=0;
            // created[sz] = true;
        }

        v = nxt[v][c];
    }
    ++en[v];
}

int search (char *tmp, int l) {

    int v = 0;

    for (int i = 0; i < l; ++i) {

        int c=getId(tmp[i]);

        if (nxt[v][c]==-1)
            return 0;

        v = nxt[v][c];
    }
    return en[v];
}

void init()
{
    sz=1;
    en[0]=0;
    ms(nxt[0],-1);
}


9.21 Z Algorithm

#include <stdio.h>
#include <string.h>
#include <stdbool.h>

#define MAX 100010
#define min(a,b) ((a)<(b) ? (a):(b))
#define max(a,b) ((a)>(b) ? (a):(b))
#define clr(ar) memset(ar, 0, sizeof(ar))
#define read() freopen("lol.txt", "r", stdin)

char str[MAX];
int n, Z[MAX];

void ZFunction(){ /// Z[i] = lcp of the suffix starting from i with str
    int i, j, k, l, r, p;
    Z[0] = n, l = 0, r = 0;
    for (i = 1; i < n; i++){
        if (i > r){
            k = 0;
            while ((i + k) < n && str[i + k] == str[k]) k++;
            Z[i] = k;
            if (Z[i]) l = i, r = i + Z[i] - 1;
        }
        else{
            p = i - l;
            if (Z[p] < (r - i + 1)) Z[i] = Z[p];
            else{
                k = r + 1;
                while (k < n && str[k - i] == str[k]) k++;
                l = i, r = k - 1;
                Z[i] = (r - l + 1);
            }
        }
    }
}

/// Z[i] = lcp of the suffix starting from i with str
void ZFunction(char* str){ /// hash = 998923
    int i, l, r, x;

    l = 0, r = 0;
    for (i = 1; str[i]; i++){
        Z[i] = max(0, min(Z[i - l], r - i));
        while (str[i + Z[i]] && str[Z[i]] == str[i + Z[i]]) Z[i]++;
        if ((i + Z[i]) > r) l = i, r = i + Z[i];
    }
    Z[0] = i;
}

int main(){
scanf("%s", str);
n = strlen(str);
ZFunction();
return 0;
}
