Dynamic Programming: Tackling Problems in Programming

Richard Bellman introduced the concept of dynamic programming (or DP) in 1953, while working for the RAND Corporation. In fact, today, a dynamic programming equation is also known as a Bellman equation, named after the inventor of the concept.

While dynamic programming may seem complex and intimidating, the concept can be simplified and will help enormously in tackling an array of problems — called dynamic programming problems. A mathematical idea at heart, DP is an algorithmic approach that can actually make complicated concepts easy to solve and understand.

What is dynamic programming?

First, it’s important to understand that “dynamic” is a bit of a misnomer — there is nothing dynamic about the approach at all! In simplest terms, dynamic programming involves breaking down a more complex problem into smaller components, called sub-problems. Then, solutions to each of these sub-problems are saved in a data structure, array, or arrangement.

This streamlines the process: an entire class of problems with overlapping sub-problems can be solved efficiently, without repeating the procedure, because each sub-problem only needs to be calculated one time.

Dynamic programming algorithms apply to both mathematical models and the world of computer science. In addition to being divisible into sub-problems, the problem must also have an optimal substructure property. This means that an optimal solution to the overall problem can be constructed from optimal solutions to its sub-problems, by applying the algorithmic technique to the series of sub-problems and storing their solutions.

How does dynamic programming work?

The idea of dynamic programming is that you avoid duplicating your efforts to solve complex problems.

As such, there is the main problem, along with several other sub-problems that branch out from it. You must put them in an order that makes this process more feasible. We can think of it as a sort of tree, with a main problem at the roots and the sub-problems branching out from it. These branches become intertwined, portraying the idea that there is overlap in the approaches and solutions.

The concept of storing solutions to avoid repeating the process is called memoization. It means you can avoid re-running the approach for a sub-problem by saving its result the first time. Because the sub-problems overlap, each one is solved once and its stored answer reused everywhere it recurs, rather than recomputed at every occurrence.

In dynamic programming problems, we can apply a top-down approach or a bottom-up approach. The former involves a recursive strategy that utilizes memoization, caching results to avoid repetitive problem-solving. The latter involves tabulation: sub-problems are solved iteratively, filling in a table from the smallest cases upward, so that every result a given step needs has already been computed.

In other words, the top-down approach starts from the main problem and recurses down into sub-problems, caching each solution as it returns, while the bottom-up approach solves the smallest sub-problems first and builds from their stored solutions toward the larger, overarching problem.
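As a sketch of the two approaches, here is a hypothetical Python illustration (not from the article itself) on a classic overlapping sub-problem task: counting the monotone paths through a grid. The top-down version recurses and caches; the bottom-up version fills a table.

```python
from functools import lru_cache


@lru_cache(maxsize=None)
def paths_top_down(rows, cols):
    # Top-down: recurse from the full problem; lru_cache memoizes each result.
    if rows == 0 or cols == 0:
        return 1  # only one straight-line path along an edge
    return paths_top_down(rows - 1, cols) + paths_top_down(rows, cols - 1)


def paths_bottom_up(rows, cols):
    # Bottom-up: tabulate from the smallest sub-problems upward.
    table = [[1] * (cols + 1) for _ in range(rows + 1)]
    for r in range(1, rows + 1):
        for c in range(1, cols + 1):
            table[r][c] = table[r - 1][c] + table[r][c - 1]
    return table[rows][cols]


print(paths_top_down(3, 3), paths_bottom_up(3, 3))  # 20 20
```

Both versions compute each sub-problem exactly once; the difference is only the order in which sub-problems are visited.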

How to solve dynamic programming problems

One of the most common dynamic programming examples is that of the Fibonacci number sequence. First, let’s take a look at what this is:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55…

You have probably encountered the Fibonacci sequence before. The idea is that each subsequent number in a series is the sum of the preceding two. In other words:

0 + 1 = 1

1 + 1 = 2

1 + 2 = 3

2 + 3 = 5

3 + 5 = 8

5 + 8 = 13

8 + 13 = 21

And so on. The series continues in this vein.

There are several different formulae for approaching a Fibonacci series. By leveraging a dynamic programming algorithm, you can easily solve the problem.

This is the formula to use: Fib(n) = Fib(n-1) + Fib(n-2) for n > 1, with the base cases Fib(0) = 0 and Fib(1) = 1.
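The recurrence translates directly into a short bottom-up DP routine. A minimal sketch in Python:

```python
def fib(n):
    # Bottom-up dynamic programming for Fib(n) = Fib(n-1) + Fib(n-2),
    # with base cases Fib(0) = 0 and Fib(1) = 1. Each sub-problem is
    # computed exactly once, so the loop runs in O(n) time.
    if n <= 1:
        return n
    prev, curr = 0, 1  # Fib(0) and Fib(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr


print([fib(i) for i in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

Contrast this with the naive recursive version, which recomputes the same sub-problems exponentially many times; the DP version keeps only the two most recent results.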


Other approaches

Dynamic programming can apply to numerous scenarios, but there are alternative approaches to solving these types of problems.


Recursive algorithms are actually fertile ground for dynamic programming. That said, not all recursive problems can be solved via a dynamic programming approach. Let’s go back to the Fibonacci sequence problem, which we resolved with a dynamic programming approach. This works because there are overlapping sub-problems — but that is hardly always the case. If there are no sub-problems that overlap, then we must apply a different methodology.

The divide and conquer approach could be the answer here. This involves first dividing a problem into smaller components — sub-problems — before conquering: solving each sub-problem recursively. Ultimately, you combine the sub-problem results to formulate a cohesive answer. Memoization is not part of this process: because the sub-problems do not overlap, there are no repeated results worth storing and reusing.

When sub-problems do overlap, divide and conquer is less efficient than the dynamic programming approach, because it re-solves the same sub-problems repeatedly — but the technique is more widely applicable. Merge sort is a classic example of this programming methodology.

Greedy algorithms

Like the dynamic programming model, a greedy algorithm is an approach for optimization. And as with DP, there are problems for which no correct greedy solution exists.

The greedy approach is an algorithmic model that constructs a solution piece by piece. At each step, the most immediately advantageous option is chosen, in the hope that locally optimal choices will add up to a globally optimal solution. There is no consideration for whether the best choice in a given moment is the best one for the overall result. Programmers sometimes choose this model because it is very quick to run, although there may be consequences down the line.
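A hedged illustration of the greedy paradigm is making change with the fewest coins by always taking the largest coin that still fits (a hypothetical example, not from the article):

```python
def greedy_change(amount, coins):
    # Greedy: at each step, take the largest coin that does not
    # exceed the remaining amount, without reconsidering past picks.
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            result.append(coin)
    return result


print(greedy_change(63, [1, 5, 10, 25]))  # [25, 25, 10, 1, 1, 1]
```

For this coin system the greedy answer happens to be optimal, but it is not in general: with coins [1, 3, 4] and an amount of 6, greedy picks 4 + 1 + 1 (three coins) while the optimal answer is 3 + 3 (two coins) — exactly the kind of consequence down the line mentioned above, and a case where a dynamic programming solution would find the optimum.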

Is dynamic programming always the answer?

In short, no — dynamic programming is not always the answer. As we have underscored, there are some instances — those in which overlapping sub-problems simply do not exist — where the approach cannot be applied. A DP algorithm, in this type of instance, simply isn’t relevant.

That said, this tried and true methodology is an efficient, powerful technique for solving complex problems. It is widely used by expert software developers and computer programmers.

If you are in search of expert developers with deep knowledge of dynamic programming and other approaches to building innovative software, look no further than Nearsure. We are on hand to help you with your staff augmentation needs, no matter what area of IT or software development is your specialty.