USACO 2022 US Open Gold Problem 3: Balancing a Tree

Farmer John has conducted an extensive study of the evolution of different cow breeds. The result is a rooted tree with N (2≤N≤10^5) nodes labeled 1…N, each node corresponding to a cow breed. For each i∈[2,N], the parent of node i is node pi (1≤pi<i), meaning that breed i evolved from breed pi. A node j is called an ancestor of node i if j=pi or j is an ancestor of pi.

Every node i in the tree is associated with a breed having an integer number of spots si. The "imbalance" of the tree is defined to be the maximum of |si−sj| over all pairs of nodes (i,j) such that j is an ancestor of i.

Farmer John doesn't know the exact value of si for each breed, but he knows lower and upper bounds on these values. Your job is to assign an integer value of si∈[li,ri] (0≤li≤ri≤10^9) to each node such that the imbalance of the tree is minimized.

INPUT FORMAT (input arrives from the terminal / stdin):
The first line contains T (1≤T≤10), the number of independent test cases to be solved, and an integer B∈{0,1}.
Each test case starts with a line containing N, followed by N−1 integers p2,p3,…,pN.

The next N lines each contain two integers li and ri.

It is guaranteed that the sum of N over all test cases does not exceed 10^5.

OUTPUT FORMAT (print output to the terminal / stdout):
For each test case, output one or two lines, depending on the value of B.
The first line for each test case should contain the minimum imbalance.

If B=1, then print an additional line with N space-separated integers s1,s2,…,sN containing an assignment of spots that achieves the above imbalance. Any valid assignment will be accepted.

SAMPLE INPUT:
3 0
3
1 1
0 100
1 1
6 7
5
1 2 3 4
6 6
1 6
1 6
1 6
5 5
3
1 1
0 10
0 1
9 10
SAMPLE OUTPUT:
3
1
4
For the first test case, the minimum imbalance is 3. One way to achieve imbalance 3 is to set [s1,s2,s3]=[4,1,7].
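
As a quick sanity check of the definition, here is a minimal brute-force sketch (not part of the problem or of any official solution) that computes the imbalance of a fixed assignment by walking every node's ancestors; it reproduces the value 3 for the assignment [4,1,7] above. The helper name imbalance is an illustrative choice.

#include <bits/stdc++.h>
using namespace std;

// Brute-force imbalance of a fixed assignment s, straight from the definition:
// for every node, walk up through all of its ancestors. O(N^2), for checking only.
long long imbalance(const vector<int> &par, const vector<long long> &s) {
    int n = (int)s.size() - 1; // nodes are 1..n; par[i] is defined for i >= 2
    long long worst = 0;
    for (int i = 2; i <= n; ++i) {
        for (int j = par[i];; j = par[j]) {
            worst = max(worst, llabs(s[i] - s[j]));
            if (j == 1) break;
        }
    }
    return worst;
}

int main() {
    // First test case of the sample: p2 = p3 = 1, and [s1,s2,s3] = [4,1,7].
    vector<int> par = {0, 0, 1, 1};
    vector<long long> s = {0, 4, 1, 7};
    cout << imbalance(par, s) << "\n"; // prints 3
}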

SAMPLE INPUT:
3 1
3
1 1
0 100
1 1
6 7
5
1 2 3 4
6 6
1 6
1 6
1 6
5 5
3
1 1
0 10
0 1
9 10
SAMPLE OUTPUT:
3
3 1 6
1
6 5 5 5 5
4
5 1 9
This input is the same as the first one aside from the value of B. Another way to achieve imbalance 3 is to set [s1,s2,s3]=[3,1,6].

SCORING:
Test cases 3-4 satisfy li=ri for all i.
Test cases 5-6 satisfy pi=i−1 for all i.
Test cases 7-16 satisfy no additional constraints.
Within each subtask, the first half of the test cases will satisfy B=0, and the rest will satisfy B=1.

Problem credits: Andrew Wang

USACO 2022 US Open Gold Problem 2: Solution

(Analysis by Benjamin Qi)
First, discard all occurrences of ×1 since they don't affect the answer. Also, if a program contains an occurrence of ×0, discard the portion of the program before the last such occurrence.

Say that two instructions are of the same type if they are both ×d or both +s. When do two interleaved programs produce the same expression? It turns out that this happens if and only if one interleaved program can be transformed into the other by repeatedly swapping two adjacent instructions of the same type, where one of the instructions belongs to Bessie and the other belongs to Elsie.

Therefore, the number of distinct expressions is precisely the number of interleaved programs that contain no pair of adjacent instructions of the same type in which the first instruction belongs to Elsie and the second belongs to Bessie: every interleaved program that does contain such a pair can be uniquely transformed, via a series of swaps (while such a pair exists, swap the two instructions in it), into one that contains no such pair.
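
For intuition, a brute force over all interleavings makes this claim easy to check on tiny inputs. The sketch below is not part of the official solution: it reads just the two program strings, evaluates every interleaving as a vector of per-variable coefficients, and counts the distinct vectors. It assumes N is small enough that the coefficients fit in 64 bits and that the number of interleavings is manageable.

#include <bits/stdc++.h>
using namespace std;

// Enumerate every interleaving of programs A and B, evaluate the resulting
// expression as a vector of per-variable coefficients (the constant term is
// always 0), and count distinct vectors. Exponential; only for tiny inputs.
int main() {
    string A, B; // e.g. "12+" and "+02": a digit means "×d", '+' adds a fresh variable
    cin >> A >> B;
    int n = (int)A.size(), m = (int)B.size();
    set<vector<long long>> seen;
    // coef[v] = coefficient of the variable introduced by instruction v
    // (v < n: Bessie's v-th instruction; v >= n: Elsie's (v-n)-th instruction)
    function<void(int, int, vector<long long>)> rec = [&](int i, int j,
                                                          vector<long long> coef) {
        if (i == n && j == m) {
            seen.insert(coef);
            return;
        }
        auto apply = [&](char c, int varId, vector<long long> cf) {
            if (c == '+') cf[varId] += 1;
            else for (long long &x : cf) x *= (c - '0');
            return cf;
        };
        if (i < n) rec(i + 1, j, apply(A[i], i, coef));
        if (j < m) rec(i, j + 1, apply(B[j], n + j, coef));
    };
    rec(0, 0, vector<long long>(n + m, 0));
    cout << seen.size() << "\n";
}

Running it on the programs 12+ and +02 from the sample prints 3, matching the answer below.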

The full solution involves dynamic programming on a grid. In the code below, dp[i][j][k] is the number of ways to interleave the first i instructions of Bessie's program with the first j instructions of Elsie's program such that the last instruction in the interleaving belongs to Bessie if k=0 or Elsie if k=1. dp[i][j][k] is used to update both dp[i][j+1][1] (Elsie appends her (j+1)-st instruction to the interleaving) and dp[i+1][j][0] (Bessie appends her (i+1)-st instruction); the latter transition is skipped when the last instruction belongs to Elsie (k=1) and Elsie's j-th instruction has the same type as Bessie's (i+1)-st instruction.

The overall time complexity is proportional to the number of DP states, which is O(N^2).

#include <bits/stdc++.h>
using namespace std;

template <class T> using V = vector<T>;

const int MOD = 1e9 + 7;
void mod_add(int &a, int b) { a = (a + b) % MOD; }

void read(string &s) {
string _s;
cin >> _s;
for (char c : _s) {
if (c == '1') continue;
if (c == '0') s.clear();
if (c != '+') c = '2';
s += c;
}
}

int solve() {
int N;
cin >> N;
string A, B;
read(A);
read(B);
V<V<array<int, 2>>> dp((int)size(A) + 1,
V<array<int, 2>>((int)size(B) + 1));
int ans = 0;
auto upd = [&](int x, int y, int k, int v) {
if (x <= (int)size(A) && y <= (int)size(B))
mod_add(dp.at(x).at(y).at(k), v);
};
dp.at(0).at(0).at(0) = 1;
for (int i = 0; i <= (int)size(A); ++i) {
for (int j = 0; j <= (int)size(B); ++j) {
for (int k : {0, 1}) {
int v = dp.at(i).at(j).at(k);
if (v == 0) continue;
if (i == (int)size(A) && j == (int)size(B)) mod_add(ans, v);
else {
upd(i, j + 1, 1, v);
if (k == 0) upd(i + 1, j, 0, v);
else {
assert(j > 0);
if (i < (int)size(A) && B.at(j - 1) != A.at(i))
upd(i + 1, j, 0, v);
}
}
}
}
}
return ans;
}

int main() {
ios::sync_with_stdio(false);
cin.tie(nullptr);
int T;
cin >> T;
while (T--) cout << solve() << "\n";
}

USACO 2022 US Open Gold Problem 2: Pair Programming

A program consists of a sequence of instructions, each of which is of one of the following forms:

×d, where d is a digit in the range [0,9]
+s, where s is a string denoting the name of a variable. Within a program, all variable names must be distinct.
The result of executing a program is defined to be the expression that results after applying each instruction in order, starting with 0. For example, the result of executing the program [×3,+x,+y,×2,+z] is the expression (0×3+x+y)×2+z=2×x+2×y+z. Different programs, when executed, may produce the same expression; for example, executing [+w,×0,+y,+x,×2,+z,×1] would also result in the expression 2×x+2×y+z.

Bessie and Elsie each have programs of N (1≤N≤2000) instructions. They will interleave these programs to produce a new program of length 2N. Note that there are (2N)!/(N!×N!) ways to do this, but not all such programs, when executed, will produce distinct expressions.

Count the number of distinct expressions that may be produced by executing Bessie and Elsie's interleaved program, modulo 10^9+7.

Each input contains T (1≤T≤10) test cases that should be solved independently. It is guaranteed that the sum of N over all test cases does not exceed 2000.

INPUT FORMAT (input arrives from the terminal / stdin):
The first line of the input contains T, the number of test cases.
The first line of each test case contains N.

The second line of each test case contains Bessie's program, represented by a string of length N. Each character is either a digit d∈[0,9], representing an instruction of type 1, or the character +, representing an instruction of type 2.

The third line of each test case contains Elsie's program in the same format as Bessie's.

Within a test case, the variable names among all instructions are distinct. Note that their actual names are not provided, as they do not affect the answer.

OUTPUT FORMAT (print output to the terminal / stdout):
The number of distinct expressions that may be produced by executing Bessie and Elsie's interleaved programs, modulo 10^9+7.
SAMPLE INPUT:
4
1
0
1
3
12+
+02
3
0++
++9
4
5+++
+6+1
SAMPLE OUTPUT:
1
3
9
9
For the first test case, the two possible interleaved programs are [×1,×0] and [×0,×1]. These will both produce the expression 0 when executed.

For the second test case, executing an interleaving of [×1,×2,+x] and [+y,×0,×2] could produce one of the expressions 0, x, or 2×x.

SCORING:
Input 2 satisfies N≤6.
In inputs 3-5, the sum of all N is at most 100.
In inputs 6-8, the sum of all N is at most 500.
Inputs 9-16 satisfy no additional constraints.
Problem credits: Benjamin Qi

USACO 2022 US Open Gold Problem 1: Solution

(Analysis by Danny Mittal)
Visualize the cows and apples as being on a plane where position is the x-axis and time is the y-axis. For a given cow that arrives at time t to position x, the possible apples that that cow could catch are the ones landing at time t′ at position x′ that satisfy t′≥t and |x′−x|≤|t′−t|. This region in the plane is a sort of infinite V shape with 45 degree angles extending upwards from the point (x,t).

If we rotate the entire plane 45 degrees clockwise (transform each point (x,t) to (u,v)=((t+x)/√2,(t−x)/√2)), then those infinite V shapes become infinite squares, which means that the condition for a cow (u,v) to be able to catch an apple (u′,v′) is simply u≤u′ and v≤v′.

Let's use that transformation. We can simply ignore the √2 factor as it doesn't change the condition. For now, we will assume that n is 1, so each line of the input represents a single cow or apple.
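
Concretely, the transformation and the catch condition look like the following small sketch (the names rotate45 and canCatch are illustrative, not from the official code); it confirms that, in the first sample, a cow arriving at time 2 at position 4 can catch an apple landing at time 8 at position 10, but not one landing at time 5 at position 10.

#include <bits/stdc++.h>
using namespace std;

// Rotate 45 degrees and drop the common factor of sqrt(2):
// (t, x) -> (u, v) = (t + x, t - x).
struct Point { long long u, v; };

Point rotate45(long long t, long long x) { return {t + x, t - x}; }

// A cow catches an apple exactly when both rotated coordinates are dominated.
bool canCatch(Point cow, Point apple) {
    return cow.u <= apple.u && cow.v <= apple.v;
}

int main() {
    cout << boolalpha;
    cout << canCatch(rotate45(2, 4), rotate45(8, 10)) << "\n"; // true
    cout << canCatch(rotate45(2, 4), rotate45(5, 10)) << "\n"; // false
}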

We can take a greedy approach to this problem. Consider the apple a with smallest v, assuming that it can be caught by at least one cow, and out of all cows that could catch it, consider the cow c with largest u. There is no apple that c can catch that any other cow that can catch a cannot catch, because all of them have u at most the u of c, and since they can all catch a, all of them have a v not larger than the v of any apple that can be caught at all. Therefore, it is optimal to assign c to catch a.

Based on this greedy principle, we can devise an algorithm to solve this problem. First, sort all the cows and all the apples by v. Now, for each apple in increasing order of v, we will find the optimal cow to catch it. We will do this by maintaining a BST (binary search tree) of the cows that have v at most the v of the current apple, with the BST being sorted by u. For each apple a, we will first add in all the cows with v at most the v of a that haven't been added yet. Then, we will query our BSTs to find the cow in the BST with the largest u such that the u is still small enough to catch a. If there is such a cow, then we use it to catch a, meaning that we remove it from the BST and increment our answer.

This algorithm runs in O(NlgN) as we add and remove each cow to/from the BST at most once, and we query the BST once for each apple.

It remains to handle the fact that each line of the input can represent multiple cows or apples. To handle this, we modify the part where we find the optimal cow to catch each apple. We will instead repeatedly find the optimal group of cows for the current group of apples, and assign as many cows as possible to apples. At each step of this algorithm, either the group of cows will be exhausted, or the group of apples will be exhausted. Therefore, the amount of steps is N, and so this algorithm still runs in O(NlgN).

Java code:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.*;

public class AppleCatching {

public static void main(String[] args) throws IOException {
BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
int n = Integer.parseInt(in.readLine());
List<Stuff> cows = new ArrayList<>();
List<Stuff> apples = new ArrayList<>();
for (int j = 1; j <= n; j++) {
StringTokenizer tokenizer = new StringTokenizer(in.readLine());
int q = Integer.parseInt(tokenizer.nextToken());
int time = Integer.parseInt(tokenizer.nextToken());
int location = Integer.parseInt(tokenizer.nextToken());
int amount = Integer.parseInt(tokenizer.nextToken());
int y = time - location;
int x = time + location;
(q == 1 ? cows : apples).add(new Stuff(amount, y , x));
}
Collections.sort(cows, Comparator.comparingInt(cow -> cow.y));
Collections.sort(apples, Comparator.comparingInt(apple -> apple.y));
TreeSet<Stuff> treeSet = new TreeSet<>((cow1, cow2) -> {
if (cow1.x != cow2.x) {
return cow1.x - cow2.x;
} else {
return cow1.y - cow2.y;
}
});
int j = 0;
int answer = 0;
for (Stuff apple : apples) {
while (j < cows.size() && cows.get(j).y <= apple.y) {
treeSet.add(cows.get(j));
j++;
}
int amount = apple.amount;
while (amount > 0 && !treeSet.isEmpty() && treeSet.first().x <= apple.x) {
Stuff cow = treeSet.floor(new Stuff(0, 1000000000, apple.x));
treeSet.remove(cow);
int caught = Math.min(amount, cow.amount);
answer += caught;
amount -= caught;
if (caught < cow.amount) {
treeSet.add(new Stuff(cow.amount - caught, cow.y, cow.x));
}
}
}
System.out.println(answer);
}

static class Stuff {
final int amount;
final int y;
final int x;

Stuff(int amount, int y, int x) {
this.amount = amount;
this.y = y;
this.x = x;
}
}
}

USACO 2022 US Open Gold Problem 1: Apple Catching

It's raining apples! At certain points in time, some number of apples will hit the number line. At certain points in time, some of Farmer John's cows will arrive on the number line and start catching apples.

If an apple hits the number line without a cow to catch it, it is lost forever. If a cow and an apple arrive at the same time and place, the cow catches it. Each cow can travel one unit per second. Once a cow catches a single apple, she exits the number line.

If FJ's cows collaborate optimally, how many apples can they catch in total?

INPUT FORMAT (input arrives from the terminal / stdin):
The first line contains N (1≤N≤2⋅10^5), the number of times apples hit the number line or FJ's cows appear.
The next N lines each contain four integers qi, ti, xi, and ni (qi∈{1,2}, 0≤ti≤10^9, 0≤xi≤10^9, 1≤ni≤10^3).

If qi=1, this means that ni of FJ's cows arrive on the number line at time ti at location xi.
If qi=2, this means that ni apples will hit the number line at time ti at location xi.
It is guaranteed that all of the ordered pairs (ti,xi) are distinct.

OUTPUT FORMAT (print output to the terminal / stdout):
The maximum number of apples FJ's cows may collectively catch.
SAMPLE INPUT:
5
2 5 10 100
2 6 0 3
2 8 10 7
1 2 4 5
1 4 7 6
SAMPLE OUTPUT:
10
In this example, none of the 100 apples that land at time t=5 may be caught. Here is a way for 10 apples to be caught:

All six of FJ's cows that arrive at time t=4 catch one of the apples that land at time t=8.
One of FJ's cows that arrive at time t=2 catches one of the apples that land at time t=8.
Three of the remaining cows that arrive at time t=2 catch one of the apples that land at time t=6.
SAMPLE INPUT:
5
2 5 10 100
2 6 0 3
2 8 11 7
1 2 4 5
1 4 7 6
SAMPLE OUTPUT:
9
Here again, none of the apples that land at time t=5 may be caught. Furthermore, none of the cows that arrive at time t=2 may catch any of the apples that land at time t=8. Here is a way for 9 apples to be caught:

All six of FJ's cows that arrive at time t=4 catch one of the apples that land at time t=8.
Three of the remaining cows that arrive at time t=2 catch one of the apples that land at time t=6.
Problem credits: Benjamin Qi

USACO 2022 US Open Gold Problem 3: Solution

(Analysis by Danny Mittal)
We will refer to the string of Us and Ds as s.

Subtask 1: N≤500

We can do DP. For each triple (j,k,u), our DP says whether there exists a subsequence of p1,…,pk consistent with s1…sj that ends in the number u. Transitions are constant time, so the runtime is O(N^3).

Subtask 2: N≤5000

We can improve the DP from the previous subtask by noting that depending on whether sj+1 is U or D, it's optimal for the previous number in the subsequence to be as small or as large as possible. Therefore, our DP should find for each pair (j,k) both the smallest and the largest numbers such that there exists a subsequence of p1,…,pk consistent with s1…sj that ends in that number. Transitions are again constant time, so the runtime is O(N^2).
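
One way this O(N^2) DP could look is sketched below (a rough illustration of the description above, not the official code): the rows are rolled over j, and for each k we check whether p[k] can end a subsequence consistent with s1…sj using only the smallest and largest ending values stored for (j−1, k−1).

#include <bits/stdc++.h>
using namespace std;

// Sketch of the subtask 2 DP: for each prefix length j of s and prefix length k of p,
// keep the smallest (lo) and largest (hi) value on which a consistent subsequence of
// p[1..k] can end. Only two rows are stored at a time, so memory stays O(N).
int main() {
    int n;
    cin >> n;
    vector<int> p(n + 1);
    for (int i = 1; i <= n; ++i) cin >> p[i];
    string s;
    cin >> s; // s has length n-1; s[j-1] is the j-th letter

    const int INF = INT_MAX, NEG = INT_MIN;
    vector<int> prevLo(n + 1, INF), prevHi(n + 1, NEG); // row j-1
    for (int k = 1; k <= n; ++k) { // j = 0: any single element works
        prevLo[k] = min(prevLo[k - 1], p[k]);
        prevHi[k] = max(prevHi[k - 1], p[k]);
    }
    int answer = 0;
    for (int j = 1; j <= n - 1; ++j) {
        vector<int> curLo(n + 1, INF), curHi(n + 1, NEG);
        for (int k = 1; k <= n; ++k) {
            curLo[k] = curLo[k - 1];
            curHi[k] = curHi[k - 1];
            // can p[k] end a subsequence consistent with s[1..j]?
            bool ok = (s[j - 1] == 'U') ? (prevLo[k - 1] < p[k])
                                        : (prevHi[k - 1] > p[k]);
            if (ok) {
                curLo[k] = min(curLo[k], p[k]);
                curHi[k] = max(curHi[k], p[k]);
            }
        }
        if (curLo[n] == INF) break; // no subsequence matches s[1..j]
        answer = j;
        swap(prevLo, curLo);
        swap(prevHi, curHi);
    }
    cout << answer << "\n";
}

On the first sample (p=[1,5,3,4,2], s=UDUD) this prints 4, and on the second (s=UUDD) it prints 3.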

Subtask 3: s is Us followed by Ds

Apply a standard LIS algorithm to find, for each number, the length of the longest increasing subsequence ending at that number, as well as the length of the longest increasing subsequence ending at that number when p is processed in reverse (starting from the end of p). Now let the number of Us in s be j. If p has no increasing subsequence of length j+1, then we simply use the longest increasing subsequence that we found. Otherwise, find the first position in p at which an increasing subsequence of length j+1 ends, then use the longest reverse LIS that ends at or after that position to compute the answer.

The runtime is O(NlgN) using an appropriate LIS algorithm.

Subtask 4: No additional constraints

Intuitively, it makes sense for us to try to find the fastest (ending the earliest) subsequence of p consistent with each prefix of s, and build on that subsequence to find the fastest subsequence for the next prefix of s. Unfortunately, this idea doesn't even work for normal LIS (longest increasing subsequence), i.e. the case where s is UUUU..., because we can have a case like

p=[5,6,7,1,2,3,4]

where the fastest increasing subsequence of length 3 is [5,6,7], but the fastest one of length 4 is [1,2,3,4] which doesn't build on [5,6,7] at all.

However, it turns out that we can actually apply this idea when switching from contiguous segments of U to contiguous segments of D and vice versa. Specifically, suppose that sj is D, and the next x letters in s are U. If the fastest subsequence a of p consistent with s1…sj ends at index k, then consider finding the fastest LIS b of length x+1 in p considering only positions from k onwards. Say that b ends at position k′.

It's clear that the fastest subsequence of p consistent with s1…sj+x must end at or after position k′, because it is not possible to find a subsequence consistent with s1…sj and then append to it an LIS of length x+1 (starting from that subsequence's last element) that finishes before position k′.

However, we can actually use a and b to create a subsequence consistent with s1…sj+x that ends at position k′. If the last number aj in a and the first number b0 in b are the same, then we're done, because we can simply attach them at that number. Otherwise, note that it can't be that aj<b0, because otherwise we could add aj to the beginning of b and remove b's last element to get an LIS of length x+1 that ends earlier. Therefore, it must be that aj>b0. However, since sj is D, we can actually take a, remove aj, then add on b0, which is valid because aj−1>aj>b0, so the D is still satisfied, and now use b0 to glue together a and b.

Therefore, the end position of the fastest subsequence of p consistent with s1…sj+x is simply the end position of the fastest LIS of p considering only positions starting from the end position of the fastest subsequence consistent with s1…sj. All of this also applies when U and D are switched.

This means that our algorithm can work as follows. We divide s into its contiguous segments of U and D, and for each segment find the fastest LIS or LDS of the length of the segment +1 that only considers the part of p starting (inclusive) from the end of the previous LDS or LIS. We finish when we are unable to find an LIS or LDS of the appropriate length for some segment and simply use the longest one we were able to find for that last segment to compute the answer.

To find LISs and LDSs, we can simply apply one of the standard algorithms for finding LIS, but we have to be careful that the algorithm we use runs in time a function of the number of elements we consider, not a function of N, as there can potentially be many segments in s. An elegant choice given this constraint is the binary search tree algorithm for LIS, which is used in the Java code below.

Assuming our LIS algorithm runs in O(xlgx) where x is the number of elements we consider, the runtime of our overall algorithm is O(NlgN).

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.Comparator;
import java.util.StringTokenizer;
import java.util.TreeSet;

public class UpDownSubsequence {

public static void main(String[] args) throws IOException {
BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
int n = Integer.parseInt(in.readLine());
StringTokenizer tokenizer = new StringTokenizer(in.readLine());
char[] directions = in.readLine().toCharArray();
int k = 0;
int last = Integer.parseInt(tokenizer.nextToken());
while (tokenizer.hasMoreTokens() && k < directions.length) {
char direction = directions[k];
TreeSet<Integer> treeSet = new TreeSet<Integer>(direction == 'U' ? Comparator.naturalOrder() : Comparator.reverseOrder());
treeSet.add(last);
while (tokenizer.hasMoreTokens() && k < directions.length && directions[k] == direction) {
last = Integer.parseInt(tokenizer.nextToken());
Integer displaced = treeSet.higher(last);
if (displaced == null) {
k++;
} else {
treeSet.remove(displaced);
}
treeSet.add(last);
}
}
System.out.println(k);
}
}

USACO 2022 US Open Gold Problem 3: Up Down Subsequence

Farmer John's N cows (2≤N≤3⋅10^5), conveniently numbered 1…N as usual, have ordered themselves according to a permutation p1,p2,…,pN of 1…N. You are also given a string of length N−1 consisting of the letters U and D. Please find the maximum K≤N−1 such that there exists a subsequence a0,a1,…,aK of p such that for all 1≤j≤K, aj−1<aj if the jth letter in the string is U, and aj−1>aj if the jth letter in the string is D.

INPUT FORMAT (input arrives from the terminal / stdin):
The first line contains N.
The second line contains p1,p2,…,pN.

The last line contains the string.

OUTPUT FORMAT (print output to the terminal / stdout):
Write out the maximum possible value of K.
SAMPLE INPUT:
5
1 5 3 4 2
UDUD
SAMPLE OUTPUT:
4
We can choose [a0,a1,a2,a3,a4]=[p1,p2,p3,p4,p5]; the entire permutation is consistent with the string.

SAMPLE INPUT:
5
1 5 3 4 2
UUDD
SAMPLE OUTPUT:
3
We can choose [a0,a1,a2,a3]=[p1,p3,p4,p5].

SCORING:
Test cases 3-4 satisfy N≤500.
Test cases 5-8 satisfy N≤5000.
In test cases 9-12, the string consists of Us followed by Ds.
Test cases 13-22 satisfy no additional constraints.
Problem credits: Danny Mittal

USACO 2022 US Open Platinum Problem 2: Solution

(Analysis by Danny Mittal)
Subtask 1: N≤100, M≤200
We can solve this subtask by creating a new graph of pairs of nodes from the old graph, where each pair (a,b) represents a game state where one token is on a and one token is on b. We can perform a search where we repeatedly remove nodes that are winning for the brain.
In order to do this, we maintain for each pair (a,b) the amount of remaining pairs (c,b) such that a→c is an edge in the original graph, and a similar amount of remaining pairs (a,c). In the process of removing pairs, if one of these amounts becomes 0 for a certain pair (a,b), then the brain can choose that token in order to win, so we remove (a,b) as well, then decrement the amounts for the appropriate other pairs by looking at incoming edges to a and b.
At the end of this process, any remaining pairs represent game states from which the brain cannot win. We can then answer queries by simply checking whether they are a pair that was removed or not.
The bottleneck in this algorithm is the part after removing a pair when we decrement the other appropriate pairs' amounts. Each edge a→c is potentially considered as part of pairs (c,b) and (b,c) for all b, making the worst case runtime O(N) per edge and so O(NM) overall. This is far under the time limit, so less efficient variants of this solution could also have passed.
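A sketch of this pair-elimination search is below (an illustrative implementation of the idea just described, not the official code; the helper name brainWinningPairs and its interface are my own). It maintains, for each ordered pair, the number of hoof moves of the token on the first node that still lead to a pair not yet known to be brain-winning, and removes a pair as soon as either of its two counts reaches zero.

#include <bits/stdc++.h>
using namespace std;

// Subtask 1 sketch (N <= 100, M <= 200): eliminate brain-winning pairs with a queue.
// cnt[a][b] = number of moves of the token on a (in state {a,b}) that still lead to a
// state not yet known to be brain-winning. The brain wins {a,b} once either count hits 0.
// Returns win[a][b] = true if the brain wins when the tokens start on a and b.
vector<vector<char>> brainWinningPairs(int n, const vector<pair<int,int>> &edges) {
    vector<vector<int>> adj(n + 1), radj(n + 1);
    for (auto [a, b] : edges) { adj[a].push_back(b); radj[b].push_back(a); }

    vector<vector<int>> cnt(n + 1, vector<int>(n + 1, 0));
    vector<vector<char>> win(n + 1, vector<char>(n + 1, 0));
    for (int a = 1; a <= n; ++a)
        for (int b = 1; b <= n; ++b)
            if (a != b)
                for (int c : adj[a])
                    if (c != b) ++cnt[a][b];

    queue<pair<int,int>> q;
    auto tryMark = [&](int a, int b) {
        if (a == b || win[a][b]) return;
        if (cnt[a][b] == 0 || cnt[b][a] == 0) {
            win[a][b] = win[b][a] = 1;
            q.push({a, b});
        }
    };
    for (int a = 1; a <= n; ++a)
        for (int b = a + 1; b <= n; ++b) tryMark(a, b);

    while (!q.empty()) {
        auto [a, b] = q.front(); q.pop();
        // Any state {c,b} with an edge c -> a just lost one safe move for the token on c;
        // symmetrically for states {c,a} with an edge c -> b.
        for (int c : radj[a]) if (c != b && !win[c][b]) { --cnt[c][b]; tryMark(c, b); }
        for (int c : radj[b]) if (c != a && !win[c][a]) { --cnt[c][a]; tryMark(c, a); }
    }
    return win;
}

Queries are then answered by looking up win[x][y]. The initialization takes O(NM), and the removal phase looks at each edge O(N) times, matching the O(NM) bound above.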
Subtask 2: N≤5000
First note that if a node with a token on it has no outgoing edges, then the brain can win by simply choosing the token on that node to leave the hoof without any moves. This means that we can mark the nodes as such and then simply remove them from the graph. Furthermore, any nodes that now have no outgoing edges also represent a win for the brain, because the brain can repeatedly choose the token on those nodes, and eventually the token will reach a node with no outgoing edges. Therefore, we can repeatedly remove all nodes with no outgoing edges from the graph until all nodes remaining have at least one outgoing edge.
Now, consider a node a with only a single outgoing edge to a different node b. Any token on a can clearly be moved to b. This means that we don't need to really consider a as being separate from b; if the brain can force the hoof to lose by making it so that both tokens would end up at a, the brain can also just force the hoof to lose by making it so that both tokens would end up at b, by making each token move once more. Therefore, we can "merge" a into b, meaning that b inherits all of a's incoming edges. Then, just like before, some new nodes may now have only one outgoing edge because they previously had edges only to a and b, so we can merge those as well. At the end of this process, all nodes remaining in the graph will either have at least two outgoing edges, or a single edge to themself.
We now make the critical observation that in a graph where every node has at least two outgoing edges, if the tokens start at different nodes, then no matter which token the brain picks each time, the hoof always has a valid node to move it into that isn't the location of the other node, and so the hoof always wins. This extends to graphs that also have nodes with only a single edge to themselves, because the hoof can just move the token across that single edge back to the same node, since the other token is at a different node.
Therefore, after applying the above two reductions, we can answer a query for starting nodes a and b as follows:
If either a or b was removed from the graph in the first reduction, then the brain wins. Otherwise, if a and b became the same node after the merging process in the second reduction, the brain still wins. Otherwise, the hoof wins.
The two above reductions can be done fairly straightforwardly in O(N^2) by maintaining a set of incoming edges and a set of outgoing edges for each node, then using DFS or BFS. Each query is answered in constant time, for an overall runtime of O(N^2+Q).
Subtask 3: No additional constraints
The first reduction should actually already run in linear time if you use sets as suggested above. To improve the runtime of the second reduction, we can use small to large merging and union find. When we merge a into b, instead of simply adding all of a's incoming edges to b's, we add whichever set is smaller to whichever set is larger. This means that each edge is added to a set at most lgN times. We then use union find to keep track of which node each node has been merged into. Whenever we find a node a to merge into a node b, we need to make sure to instead merge the union find root of a into the union find root of b.
As each edge is merged at most lgN times, the overall runtime becomes O(MlgN+Q).
Java code:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.HashSet;
import java.util.Set;
import java.util.Stack;
import java.util.StringTokenizer;
public class HoofAndBrain {
static int[] union;
static int find(int u) {
if (union[u] != union[union[u]]) {
union[u] = find(union[u]);
}
return union[u];
}
public static void main(String[] args) throws IOException {
BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
StringTokenizer tokenizer = new StringTokenizer(in.readLine());
int n = Integer.parseInt(tokenizer.nextToken());
int m = Integer.parseInt(tokenizer.nextToken());
Set<Integer>[] adj = new Set[n + 1];
Set<Integer>[] rev = new Set[n + 1];
union = new int[n + 1];
for (int a = 1; a <= n; a++) {
adj[a] = new HashSet<>();
rev[a] = new HashSet<>();
union[a] = a;
}
for (; m > 0; m--) {
tokenizer = new StringTokenizer(in.readLine());
int a = Integer.parseInt(tokenizer.nextToken());
int b = Integer.parseInt(tokenizer.nextToken());
adj[a].add(b);
rev[b].add(a);
}
Stack<Integer> stack = new Stack<>();
for (int a = 1; a <= n; a++) {
if (adj[a].isEmpty()) {
stack.push(a);
}
}
while (!stack.isEmpty()) {
int a = stack.pop();
union[a] = 0;
for (int b : rev[a]) {
adj[b].remove(a);
if (adj[b].isEmpty()) {
stack.push(b);
}
}
}
for (int a = 1; a <= n; a++) {
if (adj[a].size() == 1) {
stack.push(a);
}
}
while (!stack.isEmpty()) {
int a = stack.pop();
int c = 0;
for (int b : adj[a]) {
c = b;
}
a = find(a);
c = find(c);
if (a != c) {
if (rev[a].size() > rev[c].size()) {
int temp = a;
a = c;
c = temp;
}
for (int b : rev[a]) {
adj[b].remove(a);
adj[b].add(c);
if (adj[b].size() == 1) {
rev[c].remove(b);
stack.push(b);
} else {
rev[c].add(b);
}
}
union[a] = c;
}
}
StringBuilder out = new StringBuilder();
for (int q = Integer.parseInt(in.readLine()); q > 0; q--) {
tokenizer = new StringTokenizer(in.readLine());
int a = Integer.parseInt(tokenizer.nextToken());
int b = Integer.parseInt(tokenizer.nextToken());
int u = find(a);
int v = find(b);
if (u == 0 || v == 0 || u == v) {
out.append('B');
} else {
out.append('H');
}
}
System.out.println(out);
}
}
Exercise: Solve the problem where the hoof wins by not having a valid move, and the brain wins by the game continuing indefinitely.

USACO 2022 US Open Platinum Problem 2: Hoof and Brain

Given a directed graph with N vertices and M edges (2≤N≤10^5, 1≤M≤2·10^5), Farmer John's cows like to play the following game with two players.
Place two tokens on distinct nodes in the graph. Each turn, one player, the brain, will choose a token that must be moved along an outgoing edge. The other player, the hoof, will choose which edge to move the token along. The two tokens can never be on the same node. If at some point the hoof can't make a valid move, the brain wins. If the game continues indefinitely, the hoof wins.
You are given Q queries (1≤Q≤10^5) indicating the starting nodes of the two tokens. For each query, output which player will win.
INPUT FORMAT (input arrives from the terminal / stdin):
The first line contains N and M.
The next M lines each contain two integers a and b, denoting an edge from a to b.
The graph does not contain self-loops or multiple edges.
The next line contains Q.
The final Q lines each contain two integers x and y satisfying 1≤x,y≤N and x≠y, indicating the starting nodes of the tokens.
OUTPUT FORMAT (print output to the terminal / stdout):
A string of length Q, where each character is B for the brain winning and H for the hoof winning.
Note: the time limit for this problem is 4s, twice the default.
SAMPLE INPUT:
9 10
1 2
2 3
3 4
4 7
3 5
1 6
6 8
8 9
9 6
7 2
4
1 5
1 2
1 6
2 4
SAMPLE OUTPUT:
BHHB
The brain can win the first game by selecting node 5; then the hoof has no valid move.
The brain can win the last game by selecting node 4 and then node 7; then the hoof has no valid move.
The hoof wins the other games.
SCORING:
Test cases 2-3 satisfy N≤100, M≤200.
Test cases 4-9 satisfy N≤5000.
Test cases 10-21 satisfy no additional constraints.
Problem credits: Danny Mittal

USACO 2022 US Open Platinum Problem 1: Solution

(Analysis by Benjamin Qi)
I will refer to contiguous sequences as intervals. Define the value of an interval to be the minimum possible final number it can be converted into.
Subtask 1: Similar to 248, we can apply dynamic programming on ranges. Specifically, if dp[i][j] denotes the value of interval i…j, then
dp[i][j] = Ai if i = j, and dp[i][j] = min over i≤k<j of ( max(dp[i][k], dp[k+1][j]) + 1 ) if i < j.
The time complexity is O(N^3).
#include <bits/stdc++.h>
using namespace std;
template <class T> using V = vector<T>;
#define all(x) begin(x), end(x)
template <class T> void ckmin(T &a, const T &b) { a = min(a, b); }
int main() {
ios::sync_with_stdio(false);
cin.tie(nullptr);
int N;
cin >> N;
V<int> A(N);
for (int &x : A) cin >> x;
V<V<int>> dp(N, V<int>(N));
for (int i = N - 1; i >= 0; --i) {
dp.at(i).at(i) = A.at(i);
for (int j = i + 1; j < N; ++j) {
dp[i][j] = INT_MAX;
for (int k = i; k < j; ++k) {
ckmin(dp.at(i).at(j),
max(dp.at(i).at(k), dp.at(k + 1).at(j)) + 1);
}
}
}
int64_t ans = 0;
for (int i = 0; i < N; ++i) {
for (int j = i; j < N; ++j) {
ans += dp.at(i).at(j);
}
}
cout << ans << "\n";
}
Subtask 2: We can optimize the solution above by more quickly finding the maximum k′ such that dp[i][k′]≤dp[k′+1][j]. Then we only need to consider k∈{k′,k′+1} when computing dp[i][j]. Using the observation that k′ does not decrease as j increases while i is held fixed leads to a solution in O(N^2):
#include <bits/stdc++.h>
using namespace std;
template <class T> using V = vector<T>;
#define all(x) begin(x), end(x)
template <class T> void ckmin(T &a, const T &b) { a = min(a, b); }
int main() {
ios::sync_with_stdio(false);
cin.tie(nullptr);
int N;
cin >> N;
V<int> A(N);
for (int &x : A) cin >> x;
V<V<int>> dp(N, V<int>(N));
for (int i = N - 1; i >= 0; --i) {
dp.at(i).at(i) = A.at(i);
int k = i - 1;
for (int j = i + 1; j < N; ++j) {
while (k + 1 < j && dp.at(i).at(k + 1) <= dp.at(k + 2).at(j)) ++k;
dp[i][j] = INT_MAX;
ckmin(dp[i][j], dp.at(k + 1).at(j));
ckmin(dp[i][j], dp.at(i).at(k + 1));
++dp[i][j];
}
}
int64_t ans = 0;
for (int i = 0; i < N; ++i) {
for (int j = i; j < N; ++j) {
ans += dp.at(i).at(j);
}
}
cout << ans << "\n";
}
Alternatively, finding k′ with binary search leads to a solution in O(N^2 log N).
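The binary search could look like the following sketch (computeDp is a hypothetical helper, not part of the official code; it assumes dp has already been filled for all shorter intervals). Since dp[i][k] is nondecreasing in k and dp[k+1][j] is nonincreasing in k, the predicate dp[i][k]≤dp[k+1][j] flips from true to false at most once, and the optimum uses either the last split point where it holds or the split immediately after it, exactly as in the O(N^2) code above.

#include <bits/stdc++.h>
using namespace std;

// Compute dp[i][j] for i < j by binary searching for k' = the largest split point
// with dp[i][k'] <= dp[k'+1][j], then trying only the splits k' and k'+1.
int computeDp(const vector<vector<int>> &dp, int i, int j) {
    int lo = i, hi = j - 1, best = i - 1; // best = i-1 means the predicate never holds
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        if (dp[i][mid] <= dp[mid + 1][j]) { best = mid; lo = mid + 1; }
        else hi = mid - 1;
    }
    int result = INT_MAX;
    if (best >= i) result = min(result, dp[best + 1][j]);         // split at k'
    if (best + 1 <= j - 1) result = min(result, dp[i][best + 1]); // split at k'+1
    return result + 1;
}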
Subtask 3: Similar to 262144, we can use binary lifting.
For each of v=1,2,3,…, I count the number of intervals with value at least v. The answer is the sum of this quantity over all v.
#include <bits/stdc++.h>
using namespace std;
template <class T> using V = vector<T>;
#define all(x) begin(x), end(x)
int main() {
ios::sync_with_stdio(false);
cin.tie(nullptr);
int N;
cin >> N;
V<int> A(N);
for (int &x : A) cin >> x;
V<V<int>> with_val;
for (int i = 0; i < N; ++i) {
while (size(with_val) <= A[i]) with_val.emplace_back();
with_val.at(A[i]).push_back(i);
}
V<int> nex(N + 1);
iota(all(nex), 0);
int64_t ans = 0;
for (int v = 1;; ++v) {
if (nex[0] == N) {
cout << ans << "\n";
exit(0);
}
// add all intervals with value >= v
for (int i = 0; i <= N; ++i) ans += N - nex[i];
for (int i = 0; i <= N; ++i) nex[i] = nex[nex[i]];
if (v < size(with_val)) {
for (int i : with_val.at(v)) {
nex[i] = i + 1;
}
}
}
}
Subtask 4: For each i from 1 to N in increasing order, consider the values of all intervals with right endpoint i. Note that the value v of each such interval must satisfy v∈[Ai, Ai+⌈log2 i⌉] due to A being sorted. Thus, it suffices to be able to compute for each v the minimum l such that dp[l][i]≤v. To do this, we maintain a partition of 1…i into contiguous subsequences such that every contiguous subsequence has value at most Ai and is leftwards-maximal (extending any subsequence one element to the left would cause its value to exceed Ai). When transitioning from i−1 to i, we merge every two consecutive contiguous subsequences Ai−Ai−1 times and then add the contiguous subsequence [i,i] to the end of the partition.
#include <bits/stdc++.h>
using namespace std;
template <class T> using V = vector<T>;
#define all(x) begin(x), end(x)
int main() {
ios::sync_with_stdio(false);
cin.tie(nullptr);
int N;
cin >> N;
vector<int> A(N);
for (int &x : A) cin >> x;
assert(is_sorted(all(A)));
// left endpoints of each partition interval in decreasing order
deque<int> left_ends;
int64_t ans = 0;
for (int i = 0; i < N; ++i) {
if (i) {
for (int v = A[i - 1]; v < A[i]; ++v) {
if (size(left_ends) == 1) break;
// merge every two consecutive intervals in partition
deque<int> n_left_ends;
for (int j = 0; j < (int)size(left_ends); ++j) {
if ((j & 1) || j + 1 == (int)size(left_ends)) {
n_left_ends.push_back(left_ends[j]);
}
}
swap(left_ends, n_left_ends);
}
}
left_ends.push_front(i); // add [i,i] to partition
int L = i + 1;
for (int v = A[i]; L; ++v) {
int next_L = left_ends.at(
min((int)size(left_ends) - 1, (1 << (v - A[i])) - 1));
ans += (int64_t)(L - next_L) * v;
L = next_L;
}
}
cout << ans << "\n";
}
Full Credit: Call an interval relevant if it is not possible to extend it to the left or to the right without increasing its value.
Claim: The number of relevant intervals is O(NlogN).
Proof: See the end of the analysis.
We'll compute the same quantities as in subtask 3, but this time, we'll transition from v−1 to v in time proportional to the number of relevant intervals with value v−1 plus the number of relevant intervals with value v; this gives us a solution in O(NlogN).
For a fixed value of v, say that an interval [l,r) is a segment if it contains no value greater than v, and min(Al−1,Ar)>v. Say that an interval is maximal with respect to v if it has value at most v and extending it to the left or right would cause its value to exceed v. Note that a maximal interval [l,r) must be relevant, and it must either have value equal to v or be a segment.
My code follows. ivals[i] stores all maximal intervals contained within the segment with left endpoint i. The following steps are used to transition from v−1 to v:
Apply halve on all segments containing more than one maximal interval (the left endpoints of every such segment are stored by active). Before, all intervals within the segment were maximal with respect to v−1. After, all intervals within the segment are maximal with respect to v.
Add a segment and a maximal interval of the form [i,i+1) for each i satisfying Ai=v, and then merge adjacent segments.
#include <bits/stdc++.h>
using namespace std;
template <class T> using V = vector<T>;
#define all(x) begin(x), end(x)
int main() {
ios::sync_with_stdio(false);
cin.tie(nullptr);
int N;
cin >> N;
V<int> A(N);
V<V<int>> with_A;
for (int i = 0; i < N; ++i) {
cin >> A[i];
while ((int)size(with_A) <= A[i]) with_A.emplace_back();
with_A[A[i]].push_back(i);
}
// sum(l ... r)
auto sum_arith = [&](int64_t l, int64_t r) {
return (r + l) * (r - l + 1) / 2;
};
// total number of intervals covered by list of maximal intervals
auto contrib = [&](const list<pair<int, int>> &L) {
int64_t ret = 0;
for (auto it = begin(L);; ++it) {
auto [x, y] = *it;
if (next(it) == end(L)) {
ret += sum_arith(0, y - x);
break;
} else {
int next_x = next(it)->first;
ret += int64_t(next_x - x) * y - sum_arith(x, next_x - 1);
}
}
return ret;
};
int64_t num_at_least = (int64_t)N * (N + 1) / 2;
auto halve = [&](list<pair<int, int>> &L) {
if (size(L) <= 1) return;
num_at_least += contrib(L);
int max_so_far = -1;
list<pair<int, int>> n_L;
auto it = begin(L);
for (auto [x, y] : L) {
while (next(it) != end(L) && next(it)->first <= y) ++it;
if (it->second > max_so_far) {
n_L.push_back({x, max_so_far = it->second});
}
}
swap(L, n_L);
num_at_least -= contrib(L);
};
// doubly linked list to maintain segments
V<int> pre(N + 1);
iota(all(pre), 0);
V<int> nex = pre;
int64_t ans = 0;
V<int> active; // active segments
// maximal intervals within each segment
V<list<pair<int, int>>> ivals(N + 1);
for (int v = 1; num_at_least; ++v) {
ans += num_at_least; // # intervals with value >= v
V<int> n_active;
for (int l : active) {
halve(ivals[l]);
if (size(ivals[l]) > 1) n_active.push_back(l);
}
if (v < (int)size(with_A)) {
for (int i : with_A[v]) {
int l = pre[i], r = nex[i + 1];
bool should_add = size(ivals[l]) <= 1;
pre[i] = nex[i + 1] = -1;
nex[l] = r, pre[r] = l;
ivals[l].push_back({i, i + 1});
--num_at_least;
ivals[l].splice(end(ivals[l]), ivals[i + 1]);
should_add &= size(ivals[l]) > 1;
if (should_add) n_active.push_back(l);
}
}
swap(active, n_active);
}
cout << ans << "\n";
}
Proof of Claim: Let f(N) denote the maximum possible number of relevant subarrays for a sequence of size N. We can show that f(N)≤O(logN!)≤O(NlogN). This upper bound can be attained when all elements of the input sequence are equal.
The idea is to consider a Cartesian tree of a. Specifically, suppose that one of the maximum elements of a is located at position p (1≤p≤N). Then
f(N) ≤ f(p−1) + f(N−p) + #(relevant intervals containing p).
WLOG we may assume 2p≤N.
Lemma:
#(relevant intervals containing p) ≤ O(p log(N/p))
Proof of Lemma: We can in fact show that
#(relevant intervals containing p with value ap+k) ≤ min(p, 2^k) for all 0≤k≤⌊log2N⌋.
The p comes from all relevant intervals with a fixed value having distinct left endpoints, and the 2^k comes from the fact that to generate a relevant interval of value ap+k+1 containing p, you must start with a relevant interval of value ap+k and choose to extend it either to the right or to the left.
To finish, observe that the summation ∑_{k=0}^{⌊log2N⌋} #(relevant intervals containing p with value ap+k) is dominated by the terms satisfying log2p ≤ k ≤ ⌊log2N⌋. ■
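Spelling out that last step (a routine expansion, not part of the original analysis):
∑_{k=0}^{⌊log2N⌋} min(p,2^k) ≤ ∑_{k<log2p} 2^k + ∑_{log2p≤k≤⌊log2N⌋} p ≤ 2p + p(log2(N/p)+1) = O(p log(N/p)), where the last equality uses the assumption 2p≤N made above.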
Since O(p log(N/p)) ≤ O(log (N choose p)) ≤ O(log(N!/((p−1)!(N−p)!))), we have
f(N) ≤ f(p−1) + f(N−p) + #(relevant intervals containing p) ≤ O(log((p−1)!)) + O(log((N−p)!)) + O(log(N!) − log((p−1)!) − log((N−p)!)) ≤ O(log(N!)).
Here is an alternative approach by Danny Mittal that uses both the idea from the subtask 4 solution and the Cartesian tree. He repeatedly finds the index mid of the rightmost maximum amid, recursively processes a1…amid−1 and amid+1…aN, and then accounts for the intervals that contain position mid, for an overall runtime of O(NlogN).
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.*;
public class Revisited262144Array {
static int n;
static int[] xs;
static int[] left;
static int[] right;
static int[] forward;
static int[] reverse;
static long answer = 0;
public static int reduceForward(int start, int length, int lgFactor) {
if (lgFactor == 0) {
return length;
}
int factor = 1 << Math.min(lgFactor, 20);
int j = start;
for (int k = start + factor - 1; k < start + length - 1; k += factor) {
forward[j++] = forward[k];
}
forward[j++] = forward[start + length - 1];
return j - start;
}
public static void reduceReverse(int start, int length, int lgFactor) {
if (lgFactor == 0) {
return;
}
int factor = 1 << Math.min(lgFactor, 20);
if (length > factor) {
int j = start + 1;
for (int k = start + 1 + ((length - factor - 1) % factor); k < start + length; k += factor) {
reverse[j++] = reverse[k];
}
}
}
public static int funStuff(int from, int mid, int to, int riseTo) {
if (from > to) {
return 0;
}
int leftLength = funStuff(from, left[mid], mid - 1, xs[mid]);
int rightLength = funStuff(mid + 1, right[mid], to, xs[mid]);
int leftStart = from;
int rightStart = mid + 1;
int last = mid - 1;
for (int j = 1; j <= rightLength + 1; j++) {
int frontier = j > 1 ? forward[rightStart + (j - 2)] : mid;
long weight = frontier - last;
last = frontier;
int lastInside = mid + 1;
int leftLast = leftLength == 0 ? mid : reverse[leftStart];
for (int d = 0; d <= 18 && lastInside > leftLast; d++) {
if (1 << d >= j) {
int frontierInside;
if (1 << d == j) {
frontierInside = mid;
} else if (1 << d <= j + leftLength) {
frontierInside = reverse[leftStart + leftLength + j - (1 << d)];
} else {
frontierInside = leftLast;
}
long weightInside = lastInside - frontierInside;
lastInside = frontierInside;
answer += weight * weightInside * ((long) (xs[mid] + d));
}
}
}
forward[leftStart + leftLength] = mid;
System.arraycopy(forward, rightStart, forward, leftStart + leftLength + 1, rightLength);
reverse[leftStart + leftLength] = mid;
System.arraycopy(reverse, rightStart, reverse, leftStart + leftLength + 1, rightLength);
int length = reduceForward(leftStart, leftLength + 1 + rightLength, riseTo - xs[mid]);
reduceReverse(leftStart, leftLength + 1 + rightLength, riseTo - xs[mid]);
return length;
}
public static void main(String[] args) throws IOException {
BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
n = Integer.parseInt(in.readLine());
StringTokenizer tokenizer = new StringTokenizer(in.readLine());
xs = new int[n];
for (int j = 0; j < n; j++) {
xs[j] = Integer.parseInt(tokenizer.nextToken());
}
left = new int[n];
right = new int[n];
ArrayDeque<Integer> stack = new ArrayDeque<>();
for (int j = 0; j < n; j++) {
while (!stack.isEmpty() && xs[stack.peek()] <= xs[j]) {
left[j] = stack.pop();
}
if (!stack.isEmpty()) {
right[stack.peek()] = j;
}
stack.push(j);
}
while (stack.size() > 1) {
stack.pop();
}
forward = new int[n];
reverse = new int[n];
funStuff(0, stack.pop(), n - 1, Integer.MAX_VALUE);
System.out.println(answer);
}
}