2025-07-01
Convex optimisation is an important field of applied mathematics. It involves finding the minimum of a given objective function subject to certain constraints. Optimisation has many applications in machine learning, computational physics, and finance. Examples include minimising risk for portfolio optimisation, minimising an error function to fit a machine learning model and minimising the energy of a system in computational physics.

Target Audience:
Prerequisites:
Note
The exact structure of this course is subject to change.
This course focuses on optimisation problems of the following form:
Find a minimiser \[ \mathop{\mathrm{arg\,min}}_{\mathbf{x}} \; f(\mathbf{x}) \] subject to \[ \begin{aligned} h(\mathbf{x}) &= 0 \\ g(\mathbf{x}) &\leq 0 \end{aligned} \]
The function \(f\) is called the objective function, and \(h\) and \(g\) are the constraints of the optimisation problem.
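To make the standard form concrete, here is a minimal sketch of how such a problem can be handed to a generic solver using `scipy.optimize.minimize`; the objective and constraints are arbitrary examples chosen purely for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# A small, made-up instance of the standard form:
#   minimise  f(x) = (x0 - 1)^2 + (x1 - 2)^2
#   s.t.      h(x) = x0 + x1 - 2  = 0
#             g(x) = x0 - x1     <= 0
f = lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2

constraints = [
    {"type": "eq", "fun": lambda x: x[0] + x[1] - 2},
    # scipy expects inequality constraints as fun(x) >= 0,
    # so g(x) <= 0 is passed in as -g(x) >= 0.
    {"type": "ineq", "fun": lambda x: -(x[0] - x[1])},
]

result = minimize(f, x0=np.zeros(2), constraints=constraints, method="SLSQP")
print(result.x)  # approximately [0.5, 1.5]
```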
A good optimisation algorithm should have the following properties:
Optimisation problems occur in all kinds of applications. Here are some examples:
A common problem in machine learning is classification. You are given a set of points belonging to different classes and need to find a decision boundary to separate them. One possible solution is to fit the decision boundary so that the margin is maximised. This is the approach used by so-called Support Vector Machines (SVM).

If \(\mathbf{w}\) is the normal vector of the decision boundary, \(b\) is the ‘bias’ parameter that determines its distance to the origin of the coordinate system, and \(t_n\) is the target value (+1 if the point \(\mathbf{x}_n\) belongs to the class and -1 if it does not), then we need to find:
\[ \mathop{\mathrm{arg\,max}}_{\mathbf{w}, b} \frac{1}{\lVert \mathbf{w} \rVert} \min_{n=1,\dotsc,N} \left\{ t_n \left( \mathbf{w}^\mathrm{T} \mathbf{x}_n + b \right) \right\} \]
Without loss of generality, we can rescale \(\mathbf{w}\) and \(b\) so that \(t_n (\mathbf{w}^\mathrm{T} \mathbf{x}_n + b) = 1\) for the points closest to the decision boundary. Maximising the margin \(1 / \lVert \mathbf{w} \rVert\) is then equivalent to minimising \(\lVert \mathbf{w} \rVert^2\), which leads to the following optimisation problem:
\[ \begin{aligned} \mathop{\mathrm{arg\,min}}_{\mathbf{w}, b} \; & \frac{1}{2} \lVert \mathbf{w} \rVert^2 \\ \text{s.t.}\quad & t_n (\mathbf{w}^\mathrm{T} \mathbf{x}_n + b) \geq 1, \quad n = 1, \dotsc, N \end{aligned} \]
This is an example of a quadratic programming problem (a quadratic objective subject to linear constraints), which is considerably harder to solve than an unconstrained problem.
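As a rough illustration, the hard-margin problem above can be solved directly with a general-purpose solver. The following sketch uses `scipy.optimize.minimize` with SLSQP on a made-up, linearly separable 2-D toy dataset; in practice one would use a dedicated QP or SVM solver:

```python
import numpy as np
from scipy.optimize import minimize

# Toy linearly separable data in 2-D (made up for this sketch).
X = np.array([[2.0, 2.0], [3.0, 3.0], [2.5, 3.5],        # class +1
              [-1.0, -1.0], [-2.0, -1.5], [-1.5, -2.5]]) # class -1
t = np.array([1, 1, 1, -1, -1, -1])

def objective(params):
    w = params[:2]
    return 0.5 * w @ w                 # (1/2) ||w||^2

def margin_constraints(params):
    w, b = params[:2], params[2]
    return t * (X @ w + b) - 1         # t_n (w^T x_n + b) - 1 >= 0 for all n

res = minimize(objective, x0=np.zeros(3), method="SLSQP",
               constraints={"type": "ineq", "fun": margin_constraints})
w, b = res.x[:2], res.x[2]
print("w =", w, "b =", b)
```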
In portfolio optimisation, the goal is to choose the portfolio weights \(\mathbf{x}\) so that the risk, measured by the variance of the portfolio return, is minimised. This can be expressed mathematically as follows: minimise the risk \[ \min_{\mathbf{x}} \mathbf{x}^\mathrm{T} \Sigma \mathbf{x} \]
subject to the constraints that the expected return exceeds a target value \(R_{\text{target}}\) and that the sum of all portfolio weights is 100%.
\[ \begin{aligned} \sum_{i=1}^N x_i &= 1 \\ \sum_{i=1}^N \mathbb{E}[R_i] x_i &\geq R_{\text{target}}. \end{aligned} \]
where \(x_i\) is the weight of asset \(i\) in the portfolio, \(\Sigma\) is the covariance matrix of the asset returns, and \(\mathbb{E}[R_i]\) is the expected return of asset \(i\).
As the objective function is a quadratic form with a positive semi-definite covariance matrix and all constraints are linear, this is a convex optimisation problem.
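Here is a minimal sketch of this problem with a generic solver, again using `scipy.optimize.minimize`; the expected returns, covariance matrix, and target return are invented purely for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative data for three assets (all numbers made up).
mu = np.array([0.08, 0.12, 0.10])        # expected returns E[R_i]
Sigma = np.array([[0.10, 0.02, 0.04],    # covariance matrix (positive definite)
                  [0.02, 0.08, 0.01],
                  [0.04, 0.01, 0.09]])
R_target = 0.10

risk = lambda x: x @ Sigma @ x           # objective: x^T Sigma x
constraints = [
    {"type": "eq",   "fun": lambda x: np.sum(x) - 1},      # weights sum to 100%
    {"type": "ineq", "fun": lambda x: mu @ x - R_target},  # expected return >= target
]

res = minimize(risk, x0=np.full(3, 1/3), constraints=constraints, method="SLSQP")
print("optimal weights:", res.x, "risk:", res.fun)
```

Note that, as in the formulation above, no non-negativity constraint is imposed on the weights, so short positions are allowed.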
The Poisson equation is one of the simplest partial differential equations. It describes, for example, the electric potential in a capacitor with a given charge density.
\[ \begin{aligned} - \Delta u &= f, && \text{in $\Omega$}, \\ u &= 0, && \text{on $\partial\Omega$} \end{aligned} \]
Here, \(\Omega := [a, b]^2 \subseteq \mathbb{R}^2\) is the domain of the problem, and \(u = 0\) is the (Dirichlet) boundary condition.
Multiplying both sides by a test function \(v\) that vanishes on \(\partial\Omega\), integrating over \(\Omega\), and applying Green’s identity gives us the weak formulation of the differential equation:
\[ \int_\Omega \nabla u \cdot \nabla v \, \mathrm{d}\mathbf{x} = \int_\Omega f v \,\mathrm{d}\mathbf{x} \]
The weak formulation plays an important role in the finite element method. It can also be obtained by setting the Gâteaux differential of an energy functional to zero:
\[ J[u] = \frac{1}{2} \int_\Omega \nabla u \cdot \nabla u \, \mathrm{d}\mathbf{x} - \int_\Omega f u \,\mathrm{d}\mathbf{x} \overset{\text{!}}{=} \min \]
This is a variational problem: the minimiser of the energy functional is also a (weak) solution of the Poisson equation.
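To illustrate the connection, the following sketch minimises a finite-difference discretisation of the energy functional for a one-dimensional analogue of the problem, \(-u'' = f\) on \((0, 1)\) with \(u(0) = u(1) = 0\); the grid size and solver choice are arbitrary:

```python
import numpy as np
from scipy.optimize import minimize

# Discretise J[u] = 1/2 * integral((u')^2) - integral(f*u) on (0, 1)
# with u(0) = u(1) = 0, using n interior grid points of spacing h.
n, h = 49, 1.0 / 50
x = np.linspace(h, 1 - h, n)
f = np.ones(n)                              # constant source term, f = 1

def J(u):
    up = np.concatenate(([0.0], u, [0.0]))  # pad with boundary values u = 0
    grad = np.diff(up) / h                  # forward differences approximate u'
    return 0.5 * h * np.sum(grad**2) - h * np.sum(f * u)

res = minimize(J, x0=np.zeros(n), method="L-BFGS-B")

# The exact solution of -u'' = 1 with these boundary conditions is x(1-x)/2,
# so the error should be tiny (limited only by the solver's tolerance).
print("max error:", np.max(np.abs(res.x - x * (1 - x) / 2)))
```

Minimising the discrete energy reproduces the usual finite-difference solution of the Poisson equation, which is exactly the point of the variational formulation.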