Codeforces Problem Notes: Starting from 2019.4.26

This post records my notes on some of the problems I solved on Codeforces.

D. Fibonacci Sums: We need to use a so-called "canonical" way to represent the decomposition of a number into Fibonacci numbers (the Zeckendorf representation), and do DP on that representation. "You can imagine Fibonacci coding in the following way: the i-th bit of the number corresponds to the i-th Fibonacci number. For example, 16 = 13 + 3 will be written as 100100. Any positive integer can be represented in this code, such that no two neighbouring 1-bits are present."
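A minimal sketch of the greedy decomposition this representation is built on; `zeckendorf` is a name of my own, not from the problem:

```python
def zeckendorf(n):
    """Greedily decompose n into a sum of non-adjacent Fibonacci numbers."""
    fibs = [1, 2]
    while fibs[-1] + fibs[-2] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    parts = []
    for f in reversed(fibs):        # always take the largest fitting Fibonacci
        if f <= n:
            n -= f
            parts.append(f)
    return parts
```

The greedy choice guarantees no two consecutive Fibonacci numbers appear, which is exactly the "no two neighbouring 1-bits" property quoted above.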

C3. Brain Network (hard): Maintaining the diameter of a tree online. "Keep a pair of vertices (u, v)" that realizes the diameter, and compare dist(u, w), dist(v, w), dist(u, v) when a new vertex w is added. Pretty nice property of the diameter!
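A sketch of that update rule, with distances computed by a naive BFS (the contest solution would use an LCA structure for O(log n) distances); all names here are my own:

```python
from collections import deque

def dist(adj, a, b):
    """Plain BFS distance between a and b in the tree adj (dict: v -> list)."""
    if a == b:
        return 0
    q, seen = deque([(a, 0)]), {a}
    while q:
        v, d = q.popleft()
        for u in adj[v]:
            if u == b:
                return d + 1
            if u not in seen:
                seen.add(u)
                q.append((u, d + 1))

def add_leaf(adj, diam, parent, w):
    """Attach w under parent; diam = (u, v, length) of the current diameter.
    The new diameter is the best of (u, v), (u, w), (v, w)."""
    adj[parent].append(w)
    adj[w] = [parent]
    u, v, d = diam
    du, dv = dist(adj, u, w), dist(adj, v, w)
    if du >= dv and du > d:
        return (u, w, du)
    if dv > d:
        return (v, w, dv)
    return diam
```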

C. Tree Generator™: This problem needs us to exploit the properties of the tree diameter/LCA in bracket sequences, which is pretty similar to a solution of the unweighted version of QTREE4 on SPOJ. The two solutions use different interesting observations to convert it into something solvable with a segment tree.
1.   The LCA of two nodes is essentially the shallowest point between those two nodes in the bracket sequence.
2.   Or you can take the substring between the two nodes, for example, "] [ [ ] [ ] ] ] [ ] [ [", and remove all of the matched parentheses in it. It becomes "] ] [ [", which is essentially the path from one node to the other ("]" means going up, and "[" means going down).
For either of those two ways, we can use a segment tree to maintain the information in the bracket sequence.
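A sketch of observation 2, cancelling the matched brackets of a substring (in the real solution a segment tree maintains these counts per node under modifications):

```python
def path_profile(s):
    """Cancel matched brackets in a substring of the Euler-tour bracket
    sequence; what remains is ']' * up followed by '[' * down, i.e. the
    number of up-steps and down-steps on the tree path."""
    up = down = 0
    for c in s:
        if c == '[':
            down += 1
        else:                 # a ']' first cancels a pending '[', else goes up
            if down:
                down -= 1
            else:
                up += 1
    return up, down
```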

C. Fractal Detector: While rolling hash yields a valid solution to this problem, the one used in the tutorial is more elegant. A fractal is a natural place to reduce the structure into copies of its own sub-structure. We can use the doubling technique (as in suffix array or sparse table construction) together with dynamic programming to solve it.
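The doubling technique in its simplest form is a sparse table, where each level is built by combining two halves of the level below; this is an illustration of the idea only (the actual problem doubles square sizes in 2D):

```python
def build_sparse(a):
    """table[k][i] holds min(a[i : i + 2**k]); level k doubles level k-1."""
    n = len(a)
    table = [a[:]]
    k = 1
    while (1 << k) <= n:
        prev = table[k - 1]
        half = 1 << (k - 1)
        table.append([min(prev[i], prev[i + half])
                      for i in range(n - (1 << k) + 1)])
        k += 1
    return table

def query_min(table, l, r):
    """Minimum of a[l..r] (inclusive) via two overlapping power-of-two blocks."""
    k = (r - l + 1).bit_length() - 1
    return min(table[k][l], table[k][r - (1 << k) + 1])
```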

E. Special Segments of Permutation: While the problem is solvable with standard divide and conquer, we can just do a smart brute force: enumerate every maximum and iterate over the smaller side. The complexity analysis is pretty similar to that of DSU on tree (small to large): "Every element will be processed no more than log n times because, if we process it in a segment of size m, the smaller part of it contains no more than m/2 elements (which we will process later), and the smaller part of that segment contains no more than m/4 elements, and so on."
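A sketch of the smaller-side enumeration, with the range maximum found naively for brevity (a sparse table would make that O(1)); names are my own:

```python
def count_special(p):
    """Count segments [l, r], l < r, of a permutation p with
    p[l] + p[r] == max(p[l..r])."""
    pos = {v: i for i, v in enumerate(p)}   # value -> index (p is a permutation)

    def solve(l, r):
        if l >= r:
            return 0
        m = max(range(l, r + 1), key=lambda i: p[i])  # sparse table in practice
        cnt = 0
        if m - l <= r - m:                  # walk only the smaller half
            for i in range(l, m + 1):
                j = pos.get(p[m] - p[i], -1)
                if m <= j <= r and i < j:
                    cnt += 1
        else:
            for j in range(m, r + 1):
                i = pos.get(p[m] - p[j], -1)
                if l <= i <= m and i < j:
                    cnt += 1
        return cnt + solve(l, m - 1) + solve(m + 1, r)

    return solve(0, len(p) - 1)
```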

E. Nuclear Fusion: Bitmask DP here. The DP dp[mask] can be defined as "the prefix of elements we have made so far", which gives an O(3^n + n * 2^n) solution. If we additionally record how much we have accumulated toward the next element, we get an O(n * 2^n) solution.
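Since I don't restate the exact problem, here is a generic instance of the same trick (my own example, assuming positive integers): checking whether a multiset can be split into consecutive groups with given sums. Because the prefix sums determine how much of the next group is filled, a plain reachability array over masks replaces the O(3^n) submask iteration:

```python
from bisect import bisect_right

def can_partition(a, targets):
    """Can the multiset a be split into consecutive groups summing to
    targets[0], targets[1], ...?  O(n * 2^n) instead of O(3^n)."""
    n = len(a)
    pref = [0]
    for t in targets:
        pref.append(pref[-1] + t)
    if sum(a) != pref[-1]:
        return False
    msum = [0] * (1 << n)                 # msum[mask] = sum of chosen elements
    for mask in range(1, 1 << n):
        low = mask & -mask
        msum[mask] = msum[mask ^ low] + a[low.bit_length() - 1]
    reach = [False] * (1 << n)
    reach[0] = True
    for mask in range(1 << n):
        if not reach[mask]:
            continue
        s = msum[mask]
        if s == pref[-1]:
            continue
        bound = pref[bisect_right(pref, s)]   # end of the group being filled
        for i in range(n):
            # an element may never straddle a group boundary
            if not mask >> i & 1 and s + a[i] <= bound:
                reach[mask | 1 << i] = True
    return reach[(1 << n) - 1]
```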

C. Sasha and Array: Classic problem. There are two ways of solving this problem (both using lazy propagation in the segment tree):
1.   Using the explicit formula for the n-th Fibonacci number, the update is just a range multiplication. In this way, we need fast exponentiation.
2.   There is another way: recall that we can get the n-th Fibonacci number by matrix multiplication, so we can keep a 1*2 matrix in each node and use matrix exponentiation to maintain it.
Both ways have a complexity of O(n log^2 n).
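A sketch of the building block for way 2, computing Fibonacci numbers by exponentiating the 2x2 transition matrix (in the actual solution, such matrix products become the lazy tags of the segment tree):

```python
MOD = 10**9 + 7

def mat_mul(a, b):
    """2x2 matrix product modulo MOD."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) % MOD
             for j in range(2)] for i in range(2)]

def mat_pow(m, e):
    """Binary exponentiation of a 2x2 matrix."""
    r = [[1, 0], [0, 1]]
    while e:
        if e & 1:
            r = mat_mul(r, m)
        m = mat_mul(m, m)
        e >>= 1
    return r

def fib(n):
    # [[1,1],[1,0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]]
    return mat_pow([[1, 1], [1, 0]], n)[0][1]
```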

A. Lengthening Sticks: The most important observation is that at most one of the three triangle inequalities can be violated at the same time. That means we can count all the possibilities and subtract the violating ones.
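A sketch of the count-and-subtract solution (names are my own; checked by hand on small cases): the total number of nonnegative (da, db, dc) with da+db+dc <= l is C(l+3, 3), and for each choice of "long" side we count the assignments where it is at least the sum of the other two:

```python
def count_ways(a, b, c, l):
    """Count (da, db, dc) with da+db+dc <= l making a+da, b+db, c+dc a
    non-degenerate triangle: total minus violating assignments."""
    total = (l + 1) * (l + 2) * (l + 3) // 6   # solutions of da+db+dc <= l
    bad = 0
    for x, y, z in ((a, b, c), (b, a, c), (c, a, b)):
        # x (plus dx) is at least as long as the other two sides combined
        for dx in range(l + 1):
            m = min(l - dx, x + dx - y - z)    # budget left for dy + dz
            if m >= 0:
                bad += (m + 1) * (m + 2) // 2  # pairs (dy, dz) with dy+dz <= m
    return total - bad
```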

D. Vanya and Treasure: The key part is to figure out how we can do efficient updates from one level to the next. While the official solution uses a "square root" optimization, we can also exploit the properties of the Manhattan distance by (splitting into 4 cases and) using a 2D segment tree, or just collecting the best value for each row.
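The row-by-row idea in its cleanest form is a full Manhattan distance transform, done with separable 1D sweeps (this is a simplification of my own; the actual problem transfers DP values from one level of chests to the next with exactly such a pass):

```python
def manhattan_min(cost):
    """Given cost[i][j] (use float('inf') where there is no source), return
    out[i][j] = min over all (r, c) of cost[r][c] + |i - r| + |j - c|,
    via four 1D sweeps: the Manhattan metric is separable by axis."""
    h, w = len(cost), len(cost[0])
    d = [row[:] for row in cost]
    for i in range(h):                      # left/right sweeps within each row
        for j in range(1, w):
            d[i][j] = min(d[i][j], d[i][j - 1] + 1)
        for j in range(w - 2, -1, -1):
            d[i][j] = min(d[i][j], d[i][j + 1] + 1)
    for j in range(w):                      # up/down sweeps within each column
        for i in range(1, h):
            d[i][j] = min(d[i][j], d[i - 1][j] + 1)
        for i in range(h - 2, -1, -1):
            d[i][j] = min(d[i][j], d[i + 1][j] + 1)
    return d
```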

E. Blood Cousins Return: There are two different solutions: one with range queries in DFS order, another with DSU on tree. The DSU solution involves keeping (depth, set of strings) pairs in each vertex. That can be done with a deque of sets, since then we can easily prepend the new depth level when we move into a new vertex.
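A simplified sketch of the small-to-large merging, using a plain dict keyed by absolute depth instead of the deque trick; all names are my own and queries ask for the number of distinct names at a given absolute depth inside a subtree:

```python
def distinct_at_depth(parent, names, queries):
    """For each query (v, d): distinct names among vertices of v's subtree
    at absolute depth d, by small-to-large merging of depth -> set maps."""
    n = len(parent)
    children = [[] for _ in range(n)]
    root = 0
    for v, p in enumerate(parent):
        if p >= 0:
            children[p].append(v)
        else:
            root = v
    depth = [0] * n
    order, stack = [], [root]
    while stack:                          # iterative preorder; reversed = children first
        v = stack.pop()
        order.append(v)
        for c in children[v]:
            depth[c] = depth[v] + 1
            stack.append(c)
    per_vertex_queries = {}
    for v, d in queries:
        per_vertex_queries.setdefault(v, []).append(d)
    maps, ans = {}, {}
    for v in reversed(order):
        m = {depth[v]: {names[v]}}
        for c in children[v]:
            cm = maps.pop(c)
            if len(cm) > len(m):          # always merge smaller map into larger
                m, cm = cm, m
            for d, s in cm.items():
                if d in m and len(m[d]) < len(s):
                    m[d], s = s, m[d]     # and smaller set into larger
                m.setdefault(d, set()).update(s)
        maps[v] = m
        for d in per_vertex_queries.get(v, []):
            ans[(v, d)] = len(m.get(d, ()))
    return ans
```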

E. Selling Souvenirs: For souvenirs of the same weight, we always take the most expensive ones, so the value as a function of the number of pieces taken is convex. However, it is noteworthy that if we ternary search using the amount of space left as the input, the function for the weight-2 items is not convex in that variable (the amount of space has a step size of 1, while the weight-2 items have a step size of 2).
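A tiny demonstration of that caveat, with item values [5, 4] (an example of my own): the best value as a function of raw capacity plateaus on odd capacities, so its increments are not monotone and ternary search over capacity is unsafe.

```python
def best_value(values, capacity):
    """Best total value using only weight-2 items within the capacity."""
    pref = [0]
    for v in sorted(values, reverse=True):   # take the most expensive first
        pref.append(pref[-1] + v)
    return pref[min(capacity // 2, len(values))]

# For values [5, 4], capacities 0..4 give 0, 0, 5, 5, 9:
# the increments 0, 5, 0, 4 are not monotone.
```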

C. Nikita and stack: The key observation is that we should look at the operations backwards and compute a running "balance" over them. It turns out that the element on top of the stack is the first push (scanning from the back) at which the balance becomes positive.
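A sketch of that backward scan (in the full solution these balances live in a segment tree so that single operations can be toggled online):

```python
def top_after(ops):
    """ops: list of ('push', x) or ('pop',). Scan from the back keeping a
    balance (+1 for push, -1 for pop); the first push that makes the
    balance positive is the current top of the stack."""
    balance = 0
    for op in reversed(ops):
        if op[0] == 'pop':
            balance -= 1
        else:
            balance += 1
            if balance > 0:
                return op[1]
    return None     # the stack is empty
```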

E. Dima and Horses: Unusual solution, pretty similar to hill climbing: start from an arbitrary assignment and keep fixing violations. It turns out it will ultimately "converge" (i.e., become a valid answer).
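A sketch of the flipping procedure, assuming each horse has at most 3 enemies as in the problem; termination follows because each flip strictly decreases the number of same-party enemy pairs:

```python
def split_horses(n, enemies):
    """enemies: list of sets, each vertex having at most 3 enemies.
    Assign parties 0/1 so that every vertex has at most one enemy in
    its own party, by repeatedly flipping any violating vertex."""
    side = [0] * n
    changed = True
    while changed:
        changed = False
        for v in range(n):
            if sum(side[u] == side[v] for u in enemies[v]) >= 2:
                side[v] ^= 1         # flipping fixes v and can't loop forever
                changed = True
    return side
```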

D. Happy Tree Party: Basic observations: 1. the order of the divisions doesn't matter; 2. every divisor of at least 2 at least halves the number, so only O(log x) non-one divisors matter. We can then skip the "one"s by using a technique like union find.
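A simplified array version of the skip-the-ones idea (my own simplification: the real problem works on tree paths with updates, where the "next non-one" pointers would be maintained union-find style; here they are just precomputed):

```python
def answer_queries(a, queries):
    """For each (l, r, x): floor-divide x by a[l], ..., a[r] in order,
    hopping over runs of 1s with 'next non-one' pointers."""
    n = len(a)
    nxt = [n] * (n + 1)                  # nxt[i] = first index >= i with a > 1
    for i in range(n - 1, -1, -1):
        nxt[i] = i if a[i] > 1 else nxt[i + 1]
    out = []
    for l, r, x in queries:
        i = nxt[l]
        while i <= r and x > 0:          # each step at least halves x,
            x //= a[i]                   # so at most log2(x) divisions happen
            i = nxt[i + 1]
        out.append(x)
    return out
```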

I will continue to add notes as I solve more problems.

