Division Ladders Answers

Let clog denote the common logarithm (base 10).

BRONZE:

  1. For each of the 14 spaces, we place a + or -, giving 2^14 possibilities. Since clog(2^14) is about 4, we can safely try all possibilities (usually the spirit of bronze). The simplest way to do this is recursion: a function calls itself once per space, tracking its depth with an internal counter, and yields a result once all 14 signs have been placed. For each of the roughly 1.6e4 results generated this way, we add 1 to an answer variable if the result does not exceed X. At the end, we just do ans = X - ans.
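
     A minimal sketch of the recursive enumeration, assuming (hypothetically) that the expression is 15 numbers a[0..14] joined by the 14 chosen signs and that X is the given bound; the exact evaluation and final adjustment depend on the actual statement:

        #include <iostream>
        using namespace std;

        long long a[15], X, cnt = 0;

        // Choose + or - for each of the 14 gaps, carrying the running value.
        void rec(int gap, long long value) {
            if (gap == 14) {              // all 14 signs placed
                if (value <= X) cnt++;    // count results that do not exceed X
                return;
            }
            rec(gap + 1, value + a[gap + 1]);  // place a '+'
            rec(gap + 1, value - a[gap + 1]);  // place a '-'
        }

        int main() {
            cin >> X;
            for (int i = 0; i < 15; i++) cin >> a[i];
            rec(0, a[0]);                 // 2^14 leaves in total
            cout << X - cnt << '\n';      // final adjustment as described above
            return 0;
        }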

  2. Since there are 8! permutations in the worst case and clog(8!) is between 4 and 5, we again try all possibilities. We can generate the permutations recursively (or use next_permutation), storing each one in a vector and scanning across at the end to locate the original string's index. Or we can optimize memory by keeping a lexicographically sorted copy of the original string and running next_permutation on it repeatedly, incrementing a counter until we obtain the original string; each call and comparison costs O(n), which makes the clog of the total work almost 6, still feasible.
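
     A sketch of the memory-light variant, assuming the answer is the 0-based index of the original string among its lexicographically sorted permutations (adjust to 1-based if the statement demands it):

        #include <algorithm>
        #include <iostream>
        #include <string>
        using namespace std;

        int main() {
            string s;                     // original string, at most 8 characters here
            cin >> s;
            string t = s;
            sort(t.begin(), t.end());     // lexicographically smallest permutation
            long long idx = 0;            // index of s among all permutations
            while (t != s) {
                next_permutation(t.begin(), t.end());
                idx++;
            }
            cout << idx << '\n';
            return 0;
        }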

  3. This is an ad-hoc problem rather than raw brute force, and substantially harder; you can expect to see ad-hocs on bronze once in a while. A tight upper bound on the answer is given by the bubble sort algorithm. In particular, the "thinnest" permutation range would be a simple swap of two adjacent elements, resolved in 2! - 1 = 1 step. So how can we sort an array if only swaps of adjacent elements are allowed? Astute contestants will immediately see the reduction to an inversion counting problem, but for the sake of elaboration, we can go further. Define an inversion as a pair of elements (a,b) such that a > b but a precedes b in the array. Several claims follow:
     - If no two adjacent elements form an inversion, the array is sorted: every adjacent pair is then in increasing order, so the whole array is monotonically increasing.
     - A single adjacent swap reduces the number of inversions by at most 1: swapping adjacent elements a and b does not change their order relative to any other element; only the pair (a,b) itself flips, which removes an inversion exactly when (a,b) was an inversion pair.
     - An array with i inversions requires at least i adjacent swaps: a sorted array has no inversions, and by the previous claim each swap removes at most one.
     - i adjacent swaps always suffice: if no adjacent pair forms an inversion, the array is already sorted by the first claim; otherwise some adjacent pair is an inversion, and swapping it reduces the count to i-1. Inductively, the array is sorted after exactly i adjacent swaps.
     Ultimately, the problem reduces to counting inversions, feasible in O(n^2) time on bronze.
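
     A sketch of the quadratic inversion count, assuming the input is an array a of n elements:

        #include <iostream>
        #include <vector>
        using namespace std;

        int main() {
            int n;
            cin >> n;
            vector<int> a(n);
            for (int &x : a) cin >> x;
            long long inversions = 0;
            // Count pairs (i, j) with i < j and a[i] > a[j]: each is one inversion.
            for (int i = 0; i < n; i++)
                for (int j = i + 1; j < n; j++)
                    if (a[i] > a[j]) inversions++;
            cout << inversions << '\n';   // minimum number of adjacent swaps
            return 0;
        }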

SILVER:

  1. The trick here is to invert traditional prefix sums, producing "difference arrays". Formally, if P(A) is the prefix sum array of A, then A is the difference array of P(A). Denote the difference array of A by P^-1(A) -- an inverse function. In the context of this problem, it is much simpler to fill out P^-1(A) and then prefix sum it to get the desired A. To fill out P^-1(A), when A is painted with a coat from a left index to a right index, simply add 1 at the left index of P^-1(A) and subtract 1 just AFTER the right index of P^-1(A). Taking the prefix sum of P^-1(A) then sweeps this 1 across from left to right, adding a coat of paint over exactly that range. In the world of the difference array, each coat is done in O(1). Following this, we can handle all the coats in O(Q) total, with a single linear pass at the very end to convert from the world of P^-1(A) to the desired A.
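
     A sketch under the assumption that positions are numbered 1..n and each of the Q coats covers an inclusive range [l, r]:

        #include <iostream>
        #include <vector>
        using namespace std;

        int main() {
            int n, q;
            cin >> n >> q;                      // positions 1..n, q coats of paint
            vector<long long> diff(n + 2, 0);   // the difference array P^-1(A)
            for (int i = 0; i < q; i++) {
                int l, r;
                cin >> l >> r;                  // paint the range [l, r]
                diff[l] += 1;                   // +1 at the left index
                diff[r + 1] -= 1;               // -1 just AFTER the right index
            }
            // One prefix-sum pass converts P^-1(A) back into the desired A.
            vector<long long> a(n + 1, 0);
            for (int i = 1; i <= n; i++) a[i] = a[i - 1] + diff[i];
            for (int i = 1; i <= n; i++) cout << a[i] << " \n"[i == n];
            return 0;
        }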

  2. This problem seems challenging on its own, so we compare it with its one-dimensional analog. The solution remains the same except for the production of the difference array. We apply the principle of inclusion-exclusion geometrically to arrive at 2D prefix sums, which are then inverted for the 2D difference array: a rectangle update touches only its four corners (+1 at the top-left corner and just past the bottom-right corner, -1 at the other two), and the rest remains unchanged.
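
     A sketch of the 2D version, assuming an n x m grid and Q inclusive rectangle updates given by their corners (r1, c1) and (r2, c2):

        #include <iostream>
        #include <vector>
        using namespace std;

        int main() {
            int n, m, q;
            cin >> n >> m >> q;                 // n x m grid, q rectangle updates
            vector<vector<long long>> diff(n + 2, vector<long long>(m + 2, 0));
            for (int i = 0; i < q; i++) {
                int r1, c1, r2, c2;
                cin >> r1 >> c1 >> r2 >> c2;    // add 1 over [r1..r2] x [c1..c2]
                diff[r1][c1] += 1;
                diff[r1][c2 + 1] -= 1;
                diff[r2 + 1][c1] -= 1;
                diff[r2 + 1][c2 + 1] += 1;      // inclusion-exclusion at the four corners
            }
            // A 2D prefix sum recovers the grid A from its 2D difference array.
            vector<vector<long long>> a(n + 1, vector<long long>(m + 1, 0));
            for (int r = 1; r <= n; r++)
                for (int c = 1; c <= m; c++)
                    a[r][c] = diff[r][c] + a[r - 1][c] + a[r][c - 1] - a[r - 1][c - 1];
            for (int r = 1; r <= n; r++)
                for (int c = 1; c <= m; c++)
                    cout << a[r][c] << " \n"[c == m];
            return 0;
        }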

GOLD:

  1. The word "minimum" suggests DP, shortest paths, or a greedy algorithm. In this case, it is most feasible to take the shortest paths route, where the path goes from X to Y. Build an implicit graph whose nodes are values and whose edges are the transition operations, then BFS from X until Y is reached. The weighted case is similar, with BFS changed to Dijkstra (or Bellman-Ford/SPFA for negative weights).
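
     A sketch of the BFS over the implicit graph; the transition operations used here (+1 and *2) and the cap on explored values are placeholders, since both depend on the actual statement:

        #include <iostream>
        #include <queue>
        #include <unordered_map>
        using namespace std;

        int main() {
            long long X, Y;
            cin >> X >> Y;
            unordered_map<long long, int> dist;   // value -> fewest operations from X
            queue<long long> bfs;
            dist[X] = 0;
            bfs.push(X);
            while (!bfs.empty()) {
                long long v = bfs.front();
                bfs.pop();
                if (v == Y) break;
                long long candidates[] = {v + 1, v * 2};   // placeholder transition operations
                for (long long nxt : candidates) {
                    if (nxt <= 2 * Y && !dist.count(nxt)) {  // crude cap to keep the implicit graph finite
                        dist[nxt] = dist[v] + 1;
                        bfs.push(nxt);
                    }
                }
            }
            cout << (dist.count(Y) ? dist[Y] : -1) << '\n';  // -1 if unreachable with these placeholders
            return 0;
        }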

  2. This is a use case of a monotonic deque structure, where the largest relative ordering is the answer.
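
     Since the statement isn't reproduced here, the sketch below only illustrates the monotonic deque itself on the classic sliding-window maximum task; the bookkeeping for this particular problem will differ:

        #include <deque>
        #include <iostream>
        #include <vector>
        using namespace std;

        int main() {
            int n, k;                      // n values, window length k
            cin >> n >> k;
            vector<long long> a(n);
            for (auto &x : a) cin >> x;
            deque<int> dq;                 // indices; their values decrease from front to back
            for (int i = 0; i < n; i++) {
                while (!dq.empty() && a[dq.back()] <= a[i]) dq.pop_back();   // keep the deque monotonic
                dq.push_back(i);
                if (dq.front() <= i - k) dq.pop_front();                     // drop indices outside the window
                if (i >= k - 1) cout << a[dq.front()] << " \n"[i == n - 1];  // current window maximum
            }
            return 0;
        }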

  3. The word "minimum" again suggests DP, shortest paths, or greedy. Here there is optimal substructure (imagine building the string and examining the last character at each step), so we proceed with DP (convince yourself why greedy fails). The rest is handled via casework.

  4. We have to incrementally connect a graph and report connected components. Since rerunning DFS after every change is inefficient, we are left with DSU, processing the queries offline in sorted order. This amortizes to approximately linear time (the inverse Ackermann factor is effectively constant).
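
     A sketch of the offline DSU, under the hypothetical setup that weighted edges are added in sorted order and the component count is reported after each addition; the exact query handling depends on the actual statement:

        #include <algorithm>
        #include <array>
        #include <iostream>
        #include <numeric>
        #include <vector>
        using namespace std;

        struct DSU {
            vector<int> parent, sz;
            DSU(int n) : parent(n), sz(n, 1) { iota(parent.begin(), parent.end(), 0); }
            int find(int v) { return parent[v] == v ? v : parent[v] = find(parent[v]); }  // path compression
            bool unite(int a, int b) {
                a = find(a), b = find(b);
                if (a == b) return false;
                if (sz[a] < sz[b]) swap(a, b);   // union by size
                parent[b] = a;
                sz[a] += sz[b];
                return true;
            }
        };

        int main() {
            int n, m;
            cin >> n >> m;                         // n nodes (1..n), m weighted edges
            vector<array<int, 3>> edges(m);        // stored as {weight, u, v} so sorting orders by weight
            for (auto &e : edges) cin >> e[1] >> e[2] >> e[0];
            sort(edges.begin(), edges.end());      // process offline in sorted order
            DSU dsu(n + 1);
            int components = n;
            for (auto &e : edges) {
                if (dsu.unite(e[1], e[2])) components--;    // a merge joins two components
                cout << e[0] << ": " << components << '\n'; // components after all edges up to this weight
            }
            return 0;
        }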
