{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "lang": "fr"
   },
   "source": [
    "## La méthode du gradient pour résoudre A x = b\n",
    "\n",
    "Le but de ce TP est de vous laisser avancer tout seul. Reprenez les cours et programmez la méthode du gradient\n",
    "pour résoudre le système matriciel $A {\\bf x} = {\\bf b}$ avec A symétrique et à diagonale dominante\n",
    "($a_{ii} > \\sum_{k \\ne i} |a_{ik}|$).\n",
    "\n",
    "* Commencez en 2D avec une matrice 2x2, vérifier que le résultat est bon et tracer la courbe de convergence\n",
    "* Passez en nxn (on montrera que cela marche avec une matrice 9x9)\n",
    "\n",
    "Il peut être intéressant de normaliser la matrice A pour éviter que les calculs explosent."
   ]
  },
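  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For the n x n step later on, here is a possible sketch (the helper name `make_sdd_matrix` is ours, not part of the exercise): build a random symmetric matrix, boost its diagonal until it is strictly diagonally dominant, then normalize it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def make_sdd_matrix(n, seed=None):\n",
    "    rng = np.random.default_rng(seed)\n",
    "    A = rng.integers(0, 10, size=(n, n)).astype(float)\n",
    "    A = A + A.T                              # make A symmetric\n",
    "    A += np.diag(np.abs(A).sum(axis=1) + 1)  # strict diagonal dominance\n",
    "    return A / np.sum(np.abs(A))             # normalize to avoid blow-up\n",
    "\n",
    "make_sdd_matrix(9, seed=0)"
   ]
  },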
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[0.15, 0.15],\n",
       "       [0.15, 0.55]])"
      ]
     },
     "execution_count": 25,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import numpy as np\n",
    "A = np.random.randint(10, size=(2,2))\n",
    "A = A + A.T\n",
    "A += np.diag(A.sum(axis=1))\n",
    "A = A / np.sum(np.abs(A))\n",
    "A"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([0.3, 0.7])"
      ]
     },
     "execution_count": 26,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "b = A.sum(axis=1)\n",
    "b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {},
   "outputs": [],
   "source": [
    "def solve_with_gradiant(A, b):\n",
    "    mu = 0.01\n",
    "    xi = np.array(np.zeros(b.shape))\n",
    "    while True:\n",
    "        xn = xi - mu * (A @ xi - b)\n",
    "        if np.square(xn - xi).sum() < 1e-6 ** 2:\n",
    "            break\n",
    "        xi = xn\n",
    "    return xi"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([0.9990519 , 1.00031603])"
      ]
     },
     "execution_count": 37,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "solve_with_gradiant(A,b)"
   ]
  },
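  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A sketch of the requested convergence curve (the name `gradient_history` is ours): rerun the same fixed-step iteration while recording the residual norm at each step, then plot it on a log scale. It assumes matplotlib is available and reuses the sample 2x2 draw shown above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "def gradient_history(A, b, mu=0.01, tol=1e-6):\n",
    "    xi = np.zeros(b.shape)\n",
    "    hist = []\n",
    "    while True:\n",
    "        r = A @ xi - b                 # residual = gradient of J\n",
    "        hist.append(np.linalg.norm(r))\n",
    "        xn = xi - mu * r\n",
    "        if np.square(xn - xi).sum() < tol ** 2:\n",
    "            return xn, hist\n",
    "        xi = xn\n",
    "\n",
    "A2 = np.array([[0.15, 0.15], [0.15, 0.55]])  # sample draw shown above\n",
    "b2 = A2.sum(axis=1)                          # exact solution is (1, 1)\n",
    "x, hist = gradient_history(A2, b2)\n",
    "plt.semilogy(hist)\n",
    "plt.xlabel('iteration')\n",
    "plt.ylabel('residual norm')\n",
    "plt.title('Convergence of the fixed-step gradient')\n",
    "plt.show()"
   ]
  },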
  {
   "cell_type": "markdown",
   "metadata": {
    "lang": "fr"
   },
   "source": [
    "## Introduire de l'inertie\n",
    "\n",
    "Introduire de l'inertie dans la méthode du gradient. Que constate-t-on ?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([0.9990519 , 1.00031603])"
      ]
     },
     "execution_count": 39,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "def solve_with_gradiant_inertie(A, b):\n",
    "    mu = 0.01\n",
    "    i = 0.5\n",
    "    xi = np.array(np.zeros(b.shape))\n",
    "    while True:\n",
    "        xn = xi - mu * (A @ xi - b)\n",
    "        if np.square(xn - xi).sum() < 1e-6 ** 2:\n",
    "            break\n",
    "        xi = xn\n",
    "    return xi\n",
    "solve_with_gradiant_inertie(A, b)"
   ]
  },
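  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the observation concrete, a self-contained sketch (the helper `count_iters` is ours) that counts iterations with and without inertia on the sample system shown above; the momentum version should need noticeably fewer iterations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def count_iters(A, b, mu=0.01, inertie=0.0, tol=1e-6):\n",
    "    xi = np.zeros(b.shape)\n",
    "    v = np.zeros(b.shape)   # previous update, reused as momentum\n",
    "    n = 0\n",
    "    while True:\n",
    "        v = inertie * v - mu * (A @ xi - b)\n",
    "        xn = xi + v\n",
    "        n += 1\n",
    "        if np.square(xn - xi).sum() < tol ** 2:\n",
    "            return xn, n\n",
    "        xi = xn\n",
    "\n",
    "A2 = np.array([[0.15, 0.15], [0.15, 0.55]])  # sample draw shown above\n",
    "b2 = A2.sum(axis=1)\n",
    "_, n_plain = count_iters(A2, b2)\n",
    "_, n_inertie = count_iters(A2, b2, inertie=0.5)\n",
    "n_plain, n_inertie"
   ]
  },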
  {
   "cell_type": "markdown",
   "metadata": {
    "lang": "fr"
   },
   "source": [
    "## Valeur optimale de µ\n",
    "\n",
    "On note que deux directions de pente sucessives sont orthogonales si le point suivant est le minumum dans\n",
    "la direction donnée ($\\nabla J ({\\bf x}^k$)).\n",
    "\n",
    "$$\\nabla J ({\\bf x}^{k+1})^T \\; \\nabla J ({\\bf x}^k) = 0$$\n",
    "\n",
    "#### Démonstration \n",
    "\n",
    "\n",
    "<table><tr>\n",
    "    <th>\n",
    "On veut régler µ pour arriver au minimum de J lorsqu'on avance dans la direction $- \\nabla J({\\bf x}^{k})$.\n",
    "Cela veut dire que la dérivée partielle de $J({\\bf x}^{k+1})$ par rapport à µ doit être\n",
    "égale à 0 ou bien en faisant apparaitre µ dans l'équation :\n",
    "\n",
    "$$\n",
    "\\frac{\\partial J ({\\bf x}^k - µ \\; \\nabla J ({\\bf x}^k))}{\\partial µ} = 0\n",
    "$$\n",
    "\n",
    "En développant on a\n",
    "\n",
    "$$\n",
    "\\begin{align}\n",
    "\\frac{\\partial J ({\\bf x}^{k+1})}{\\partial {\\bf x}^{k+1}} \\; \n",
    "\\frac{\\partial {\\bf x}^{k+1}}{\\partial µ} & = 0   \\\\\n",
    "J'({\\bf x}^{k+1}) \\, . \\, (- \\nabla J ({\\bf x}^k)) & = 0 \\\\\n",
    "(A\\, {\\bf x}^{k+1}  - b) \\, . \\, \\nabla J ({\\bf x}^k) & = 0 \\quad \\textrm{puisque A est symétrique}\\\\\n",
    "\\nabla J ({\\bf x}^{k+1})  \\, . \\, \\nabla J ({\\bf x}^k) & = 0 \\quad \\textrm{CQFD}\n",
    "\\end{align}\n",
    "$$\n",
    "\n",
    "\n",
    "</th><th>\n",
    "<img src=\"images/gradient.png\" width = \"400px\"/>\n",
    "</th></tr></table>\n",
    "\n",
    "En utilisant cette propriété, évaluer la valeur optimale de µ pour atteindre le minimum dans la direction de\n",
    "$\\nabla J ({\\bf x}^k)$.\n",
    "\n",
    "Écrire le méthode du gradient avec le calcul du µ optimal à chaque itération pour résoudre $A {\\bf x} = {\\bf b}$."
   ]
  },
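  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, one possible derivation, writing $r^k = \\nabla J({\\bf x}^k) = A{\\bf x}^k - b$: expanding $\\nabla J ({\\bf x}^{k+1})^T \\, \\nabla J ({\\bf x}^k) = 0$ with ${\\bf x}^{k+1} = {\\bf x}^k - \\mu \\, r^k$ gives\n",
    "\n",
    "$$ (A({\\bf x}^k - \\mu \\, r^k) - b)^T r^k = (r^k - \\mu \\, A r^k)^T r^k = 0\n",
    "\\quad\\Longrightarrow\\quad \\mu = \\frac{(r^k)^T r^k}{(r^k)^T A \\, r^k} $$"
   ]
  },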
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([0.99999941, 0.99999941])"
      ]
     },
     "execution_count": 40,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "def solve_with_gradiant_inertie_opti(A, b):\n",
    "    mu = 0.01\n",
    "    i = 0.5\n",
    "    xi = np.array(np.zeros(b.shape))\n",
    "    while True:\n",
    "        J = A @ xi - b\n",
    "        mu = np.dot(J, J) / np.dot(A @ J, J)\n",
    "        xn = xi - mu * (A @ xi - b)\n",
    "        if np.square(xn - xi).sum() < 1e-6 ** 2:\n",
    "            break\n",
    "        xi = xn\n",
    "    return xi\n",
    "solve_with_gradiant_inertie_opti(A,b)"
   ]
  }
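  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check, a self-contained sketch (the name `steepest_descent` is ours) of the same optimal-step iteration, compared against NumPy's direct solver on the sample system shown above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def steepest_descent(A, b, tol=1e-6):\n",
    "    xi = np.zeros(b.shape)\n",
    "    while True:\n",
    "        r = A @ xi - b                        # gradient of J at xi\n",
    "        mu = np.dot(r, r) / np.dot(A @ r, r)  # optimal step along -r\n",
    "        xn = xi - mu * r\n",
    "        if np.square(xn - xi).sum() < tol ** 2:\n",
    "            return xn\n",
    "        xi = xn\n",
    "\n",
    "A2 = np.array([[0.15, 0.15], [0.15, 0.55]])  # sample draw shown above\n",
    "b2 = A2.sum(axis=1)\n",
    "np.allclose(steepest_descent(A2, b2), np.linalg.solve(A2, b2), atol=1e-4)"
   ]
  }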
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.13.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}