Apply DP to linear problems such as House Robber, Boredom, and Consecutive Subsequence; each uses a 1D state array indexed by position or value.
Classic take/skip pattern: for each value, decide whether to take every instance of it or skip it entirely.
Extends Frog 1 with a variable jump length K; teaches handling multiple transitions per state.
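A minimal sketch of the variable-K recurrence, assuming the standard setup (stone heights `h`, cost `|h[i] - h[j]|` per jump; the function name is illustrative):

```python
def min_jump_cost(h, k):
    """dp[i] = cheapest way to reach stone i, trying all K predecessors."""
    n = len(h)
    INF = float("inf")
    dp = [INF] * n
    dp[0] = 0
    for i in range(1, n):
        # Unlike Frog 1's fixed two transitions, each state scans up to K of them.
        for j in range(max(0, i - k), i):
            dp[i] = min(dp[i], dp[j] + abs(h[i] - h[j]))
    return dp[-1]
```

The inner loop is what makes this a step up from Frog 1: the transition count per state grows from a constant to K, giving O(NK) overall.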
The classic take-or-skip problem: rob non-adjacent houses to maximize the total haul.
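The take/skip recurrence can be rolled into two variables, a sketch of the usual O(1)-space form:

```python
def rob(nums):
    # take = best total if we rob the current house
    # skip = best total if we don't
    take, skip = 0, 0
    for x in nums:
        take, skip = skip + x, max(take, skip)
    return max(take, skip)
```

Every later problem in this section reuses this decision shape; only the meaning of a "state" changes.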
DP with greedy construction; teaches state optimization on linear arrays.
Classic coin-change variant teaching maximization in 1D DP.
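One common maximizing variant (an assumption about which variant is meant): instead of minimizing coins to reach a sum, maximize the number of coins used. The skeleton is the same 1D table with `max` in place of `min` and `-inf` marking unreachable sums:

```python
def max_coins(coins, target):
    NEG = float("-inf")
    dp = [NEG] * (target + 1)  # dp[s] = max coins summing exactly to s
    dp[0] = 0
    for s in range(1, target + 1):
        for c in coins:
            if c <= s and dp[s - c] != NEG:
                dp[s] = max(dp[s], dp[s - c] + 1)
    return dp[target]
```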
Greedy-DP hybrid; teaches how left/right decisions reduce to a 1D DP.
Direct application of the House Robber pattern, with a frequency-counting twist.
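The frequency twist, sketched for Boredom: taking a value v removes v-1 and v+1, so after counting occurrences the problem is House Robber over the value line, where "house v" is worth `v * count[v]`:

```python
from collections import Counter

def boredom(a):
    cnt = Counter(a)
    take, skip = 0, 0
    # Walk values 1..max(a); adjacent values conflict like adjacent houses.
    for v in range(1, max(a) + 1):
        take, skip = skip + v * cnt[v], max(take, skip)
    return max(take, skip)
```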
A foundational DP problem combining index-based recursion with per-step take/skip decisions.
Finds the longest consecutive-value subsequence with DP and reconstructs it.
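A sketch of the value-keyed DP plus a reconstruction pass (the greedy earliest-occurrence rebuild is one possible approach, not necessarily the original's):

```python
def longest_consecutive_subseq(a):
    # best[v] = length of the longest run of consecutive values ending at v
    best = {}
    for x in a:
        best[x] = best.get(x - 1, 0) + 1
    end_val = max(best, key=best.get)
    length = best[end_val]
    # Reconstruct: greedily pick the first occurrence of each needed value.
    need = end_val - length + 1
    idx = []
    for i, x in enumerate(a):
        if x == need:
            idx.append(i)
            need += 1
    return length, idx
```

Because the DP guarantees such a run exists, the greedy forward scan always finds valid indices.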
1D DP on character arrays with constraint-based transitions.
Linear DP after sorting; teaches preprocessing as a form of state optimization.