{ "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.12" } }, "nbformat": 4, "nbformat_minor": 4, "cells": [ { "cell_type": "markdown", "metadata": { "lang": "fr" }, "source": [ "## La méthode du gradient pour résoudre A x = b\n", "\n", "Le but de ce TP est de vous laisser avancer tout seul. Reprenez les cours et programmez la méthode du gradient\n", "pour résoudre le système matriciel $A {\\bf x} = {\\bf b}$ avec A symétrique et à diagonale dominante\n", "($a_{ii} > \\sum_{k \\ne i} |a_{ik}|$).\n", "\n", "* Commencez en 2D avec une matrice 2x2, vérifier que le résultat est bon et tracer la courbe de convergence\n", "* Passez en nxn (on montrera que cela marche avec une matrice 9x9)\n", "\n", "Il peut être intéressant de normaliser la matrice A pour éviter que les calculs explosent." 
] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [], "source": [ "def solve_with_gradiant(A, b):\n", " mu = 0.01\n" ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[49, 13],\n", " [13, 13]])" ] }, "execution_count": 34, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import numpy as np\n", "A = np.random.randint(10, size=(2,2))\n", "A = A + A.T\n", "A += np.diag(A.sum(axis=1))\n", "A" ] }, { "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([62, 26])" ] }, "execution_count": 35, "metadata": {}, "output_type": "execute_result" } ], "source": [ "b = A.sum(axis=1)\n", "b" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "solve_with_gradiant(A,b)" ] }, { "cell_type": "markdown", "metadata": { "lang": "fr" }, "source": [ "## Introduire de l'inertie\n", "\n", "Introduire de l'inertie dans la méthode du gradient. Que constate-t-on ?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": { "lang": "fr" }, "source": [ "## Valeur optimale de µ\n", "\n", "On note que deux directions de pente sucessives sont orthogonales si le point suivant est le minumum dans\n", "la direction donnée ($\\nabla J ({\\bf x}^k$)).\n", "\n", "$$\\nabla J ({\\bf x}^{k+1})^T \\; \\nabla J ({\\bf x}^k) = 0$$\n", "\n", "#### Démonstration \n", "\n", "\n", "
| \n", "On veut régler µ pour arriver au minimum de J lorsqu'on avance dans la direction $- \\nabla J({\\bf x}^{k})$.\n", "Cela veut dire que la dérivée partielle de $J({\\bf x}^{k+1})$ par rapport à µ doit être\n", "égale à 0 ou bien en faisant apparaitre µ dans l'équation :\n", "\n", "$$\n", "\\frac{\\partial J ({\\bf x}^k - µ \\; \\nabla J ({\\bf x}^k))}{\\partial µ} = 0\n", "$$\n", "\n", "En développant on a\n", "\n", "$$\n", "\\begin{align}\n", "\\frac{\\partial J ({\\bf x}^{k+1})}{\\partial {\\bf x}^{k+1}} \\; \n", "\\frac{\\partial {\\bf x}^{k+1}}{\\partial µ} & = 0 \\\\\n", "J'({\\bf x}^{k+1}) \\, . \\, (- \\nabla J ({\\bf x}^k)) & = 0 \\\\\n", "(A\\, {\\bf x}^{k+1} - b) \\, . \\, \\nabla J ({\\bf x}^k) & = 0 \\quad \\textrm{puisque A est symétrique}\\\\\n", "\\nabla J ({\\bf x}^{k+1}) \\, . \\, \\nabla J ({\\bf x}^k) & = 0 \\quad \\textrm{CQFD}\n", "\\end{align}\n", "$$\n", "\n", "\n", " | \n",
" \n",
" |
|---|
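The orthogonality condition derived in the last cell gives a closed-form optimal step: writing the residual $r_k = b - A{\bf x}^k = -\nabla J({\bf x}^k)$, minimising $J$ along $r_k$ yields $µ_k = (r_k^T r_k)/(r_k^T A\, r_k)$. Below is a minimal sketch of this optimal-step variant, to compare with the fixed-µ solver; the function name, tolerance, and iteration cap are illustrative choices, not part of the notebook, and it assumes A is symmetric positive definite (which the diagonally dominant construction above guarantees).

```python
import numpy as np

def solve_steepest_descent(A, b, tol=1e-10, max_iter=500):
    """Gradient method with the optimal step size at each iteration.

    The residual r = b - A x is minus the gradient of
    J(x) = 1/2 x.T A x - b.x; minimising J along r gives
    mu = (r.r) / (r.A r), so successive gradients are orthogonal.
    """
    x = np.zeros(len(b), dtype=float)
    for _ in range(max_iter):
        r = b - A @ x                # r = -grad J(x)
        rr = r @ r
        if np.sqrt(rr) < tol:        # stop once the residual is negligible
            break
        mu = rr / (r @ (A @ r))      # optimal step along the direction r
        x = x + mu * r
    return x

# Same construction as above: b = A @ ones, so the exact solution is [1, 1].
A = np.array([[49., 13.], [13., 13.]])
b = A.sum(axis=1)
print(solve_steepest_descent(A, b))
```

Because µ is recomputed from the current residual, no hand tuning (or normalization of A) is needed for convergence, unlike the fixed-step version.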