From Certainty to Uncertainty: Foundations of Algorithmic Decision Making
Let us begin this journey into decision making in the comfort of deterministic models and gradually step into the uncertainty of stochastic ones. A deterministic model is one whose outcome is completely predictable: the same input always produces the same output, like the mathematical fact that 2 + 2 equals 4. A stochastic model, on the other hand, involves randomness: rolling a die, for example, can yield any value from one to six.
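To make the contrast concrete, here is a tiny, purely illustrative Python sketch (the function names `add` and `roll_die` are my own choices, not from any library):

```python
import random

def add(a, b):
    # Deterministic: the same inputs always produce the same output.
    return a + b

def roll_die():
    # Stochastic: each call can return any value from 1 to 6.
    return random.randint(1, 6)

print(add(2, 2))    # always 4
print(roll_die())   # unpredictable: 1, 2, 3, 4, 5, or 6
```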
The goal of this discussion is to build the foundations for understanding decision making under uncertainty. At the heart of this problem lies a simple loop: an agent observes its environment and then takes an action based on what it observes. This is known as the observe–act cycle.
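A rough sketch of this loop is below. The `Environment` and `Agent` classes are hypothetical placeholders I am using for illustration, not the API of any particular framework:

```python
import random

class Environment:
    def __init__(self):
        self.state = 0.0

    def observe(self):
        # Observations may be a noisy view of the true state.
        return self.state + random.gauss(0, 0.1)

    def step(self, action):
        # The effect of an action may itself be uncertain.
        self.state += action + random.gauss(0, 0.05)

class Agent:
    def act(self, observation):
        # Decide based on the latest observation: nudge the state toward 1.0.
        return 0.1 if observation < 1.0 else -0.1

env, agent = Environment(), Agent()
for _ in range(10):
    obs = env.observe()       # observe
    action = agent.act(obs)   # decide
    env.step(action)          # act
```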
Although this loop appears simple, each component (Environment, Agent, Observation, Action) is filled with uncertainty:
- The effects of our actions are uncertain
- The true state of the environment is uncertain
- The behavior of other agents is uncertain
- Even our model of the problem may be uncertain
Handling these uncertainties is central to the field of Artificial Intelligence. This foundational framework appears in real‑world systems such as aircraft collision avoidance, autonomous driving, breast cancer screening, financial consumption modeling, and portfolio allocation.
There are several approaches for building decision‑making agents:
- Explicit Programming: anticipating all scenarios and specifying exactly what the agent should do in each case.
- Supervised Learning: showing the algorithm many examples where the correct answers are already known, allowing it to learn patterns and make predictions on new data.
- Optimization: searching through many possible strategies to find the best one, often by running simulations and evaluating performance (see the sketch after this list).
- Planning: a form of optimization focused on deciding the sequence of steps needed to reach an optimal solution.
- Reinforcement Learning: where the agent learns its decision‑making strategy through direct interaction with the environment.
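The optimization approach is the easiest to sketch. In the illustrative Python example below (the toy task and the `simulate` and `optimize` functions are assumptions of mine, not taken from any specific system), a family of threshold strategies is evaluated by simulation, and the best-scoring one is kept:

```python
import random

def simulate(threshold, episodes=200, seed=0):
    # Toy task: accept a noisy observation when it exceeds the threshold; the
    # decision is correct when the true signal exceeds 0.5. Performance is
    # estimated by simulation because the observations are noisy.
    rng = random.Random(seed)
    correct = 0
    for _ in range(episodes):
        signal = rng.random()
        observation = signal + rng.gauss(0, 0.1)
        decision = observation > threshold
        correct += decision == (signal > 0.5)
    return correct / episodes

def optimize(num_candidates=100):
    # Optimization as search: try many candidate strategies (thresholds),
    # evaluate each by simulation, and keep the best performer found.
    best_threshold, best_score = None, -1.0
    for _ in range(num_candidates):
        candidate = random.random()
        score = simulate(candidate)
        if score > best_score:
            best_threshold, best_score = candidate, score
    return best_threshold, best_score

print(optimize())
```

Planning and reinforcement learning can be read as refinements of this same idea: planning searches over sequences of actions, and reinforcement learning improves the strategy from interaction rather than from a hand-built simulator.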
Going forward, we will focus on decision making through the lenses of Supervised Learning, Optimization, Planning, and Reinforcement Learning.