diff --git a/docs/algorithms/lu-solver.md b/docs/algorithms/lu-solver.md index 602c08fdfd..5ccf763072 100644 --- a/docs/algorithms/lu-solver.md +++ b/docs/algorithms/lu-solver.md @@ -207,13 +207,11 @@ $\boldsymbol{l}_k$ and $\boldsymbol{u}_k$ ($k < p$) denote sections of the matri [Gaussian elimination](https://en.wikipedia.org/wiki/Gaussian_elimination) using [LU decomposition](https://en.wikipedia.org/wiki/LU_decomposition) constructs the matrices -$$ \begin{align*} \mathbf{L}_p &= \begin{bmatrix} 1 && \boldsymbol{0}^T \\ m_p^{-1} \boldsymbol{q}_p && \mathbf{1}_p\end{bmatrix} \\ \mathbf{U}_p &= \begin{bmatrix} m_p && \boldsymbol{r}_p^T \\\boldsymbol{0} && \mathbf{1}_p\end{bmatrix} \\ \mathbf{M}_{p+1} &= \hat{\mathbf{M}}_p - m_p^{-1} \boldsymbol{q}_p \boldsymbol{r}_p^T \end{align*} -$$ where $\mathbf{1}$ is the matrix with ones on the diagonal and zeros off-diagonal, and $\mathbf{M}_{p+1}$ is the start of the next iteration. @@ -253,7 +251,6 @@ dimensions $\left(N-p-1\right) \times \left(N-p-1\right)$. Iterating this process yields the matrices -$$ \begin{align*} \mathbf{L} = \begin{bmatrix} 1 && 0 && \cdots && 0 \\ @@ -278,7 +275,6 @@ m_0 && \left(\boldsymbol{r}_0^T\right)_0 && \cdots && \left(\boldsymbol{r}_0^T\r 0 && \cdots && 0 && m_{N-1} \end{bmatrix} \end{align*} -$$ in which $\boldsymbol{l}_p$ is the first column of the lower triangle of $\mathbf{L}_p$ and $\boldsymbol{u}_p^T$ is the first row of the upper triangle of $\mathbf{U}_p$. @@ -376,7 +372,6 @@ confusion with the block-sparse matrix components. The matrices constructed with [LU decomposition](https://en.wikipedia.org/wiki/LU_decomposition) for [Gaussian elimination](https://en.wikipedia.org/wiki/Gaussian_elimination) are constructed accordingly. -$$ \begin{align*} \mathbf{L}_p &= \begin{bmatrix} \mathbf{1} && \mathbf{0}^T \\ @@ -388,7 +383,6 @@ $$ \end{bmatrix} \\ \mathbf{M}_{p+1} &= \hat{\mathbf{M}}_p - \widehat{\mathbf{m}_p^{-1}\mathbf{q}_p \mathbf{r}_p^T} \end{align*} -$$ Here, $\overrightarrow{\mathbf{m}_p^{-1}\mathbf{q}_p}$ is symbolic notation for the block-vector of solutions to the equation $\mathbf{m}_p x_{p;k} = \mathbf{q}_{p;k}$, where $k = 0..(p-1)$. @@ -396,7 +390,6 @@ Similarly, $\widehat{\mathbf{m}_p^{-1}\mathbf{q}_p \mathbf{r}_p^T}$ is symbolic solutions to the equation $\mathbf{m}_p x_{p;k,l} = \mathbf{q}_{p;k} \mathbf{r}_{p;l}^T$, where $k,l = 0..(p-1)$. That is: -$$ \begin{align*} \overrightarrow{\mathbf{m}_p^{-1}\mathbf{q}_p} &= \begin{bmatrix}\mathbf{m}_p^{-1}\mathbf{q}_{p;0} \\ @@ -409,7 +402,6 @@ $$ \mathbf{m}_p^{-1} \mathbf{q}_{p;0} \mathbf{r}_{p;0}^T && \cdots && \mathbf{m}_p^{-1} \mathbf{q}_{p;N-1} \mathbf{r}_{p;N-1}^T \end{bmatrix} \end{align*} -$$ Iteratively applying above factorization process yields $\mathbf{L}$ and $\mathbf{U}$, as well as $\mathbf{P}$ and $\mathbf{Q}$. @@ -435,7 +427,6 @@ $\mathbf{m}_p$ is a dense block that can be [LU factorized](#dense-lu-factorizat $\mathbf{m}_p = \mathbf{p}_p^{-1} \mathbf{l}_p \mathbf{u}_p \mathbf{q}_p^{-1}$. Partial Gaussian elimination constructs the following matrices. -$$ \begin{align*} \mathbf{L}_p &= \begin{bmatrix} \mathbf{l}_p && \mathbf{0}^T \\ @@ -448,7 +439,6 @@ $$ \mathbf{M}_{p+1} &= \hat{\mathbf{M}}_p - \widehat{\mathbf{q}_p\mathbf{q}_p\mathbf{u}_p^{-1}\mathbf{l}_p^{-1}\mathbf{p}_p\mathbf{r}_p^T} \end{align*} -$$ Note that the first column of $\mathbf{L}_p$ can be obtained by applying a right-solve procedure, instead of the regular left-solve procedure, as is the case for $\mathbf{U}_p$. 
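To make the elimination step above concrete, the following is a minimal NumPy sketch of dense LU factorization with partial pivoting. The function name, the dense interface and the permutation bookkeeping are assumptions of this illustration; the power-grid-model core uses its own block-aware C++ implementation.

```python
import numpy as np


def lu_factorize(m: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Dense LU factorization with partial pivoting, so that M[perm] == L @ U (sketch)."""
    lu = np.array(m, dtype=float)
    n = lu.shape[0]
    perm = np.arange(n)
    for p in range(n):
        # partial pivoting: move the largest remaining entry of column p onto the diagonal
        k = p + int(np.argmax(np.abs(lu[p:, p])))
        if k != p:
            lu[[p, k]] = lu[[k, p]]
            perm[[p, k]] = perm[[k, p]]
        # first column of L_p: q_p / m_p
        lu[p + 1:, p] /= lu[p, p]
        # Schur complement update: M_{p+1} = M_hat_p - m_p^{-1} q_p r_p^T
        lu[p + 1:, p + 1:] -= np.outer(lu[p + 1:, p], lu[p, p + 1:])
    # the unit lower triangle of lu is L, the upper triangle (including the diagonal) is U
    return lu, perm
```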
@@ -457,7 +447,6 @@ left-solve procedure, as is the case for $\mathbf{U}_p$. To illustrate the rationale, let's fully solve a matrix equation (without using blocks): -$$ \begin{align*} \begin{bmatrix} \mathbf{a} && \mathbf{b} \\ @@ -511,12 +500,10 @@ a_{11} && a_{12} && b_{11} && b_{12} \\ \frac{c_{22} - a_{12} \frac{c_{21}}{a_{11}}}{a_{22} - a_{12} \frac{a_{21}}{a_{11}}} \end{bmatrix} \end{align*} -$$ Using the following denotations, we can simplify the above as $\begin{bmatrix} \mathbf{l}_a \mathbf{u}_a && \mathbf{u}_b \\ \mathbf{l}_c && \mathbf{l}_d \mathbf{u}_d \end{bmatrix}$. -$$ \begin{align*} \mathbf{l}_a &= \begin{bmatrix} 1 && 0 \\ @@ -549,7 +536,6 @@ $$ \frac{c_{22} - a_{12} \frac{c_{21}}{a_{11}}}{a_{22} - a_{12} \frac{a_{21}}{a_{11}}} \end{bmatrix} \end{align*} -$$ Interestingly, the matrices $\mathbf{l}_c$, $\mathbf{u}_b$ and $\mathbf{l}_d\mathbf{u}_d$ can be obtained without doing full pivoting on the sub-block level: @@ -582,7 +568,6 @@ The following proves the equations $\mathbf{l}_c \mathbf{u}_a = \mathbf{c}$, $\m and $\mathbf{l}_d\mathbf{u}_d = \mathbf{d} - \mathbf{l}_c \mathbf{u}_b$ mentioned in the [previous section](#rationale-of-the-block-sparse-lu-factorization-process). -$$ \begin{align*} \mathbf{l}_c \mathbf{u}_a &= @@ -656,7 +641,6 @@ $$ \end{bmatrix} \\ &= \mathbf{l}_d\mathbf{u}_d \end{align*} -$$ We can see that $\mathbf{l}_c$ and $\mathbf{u}_b$ are affected by the in-block LU decomposition of the pivot block $\left(\mathbf{l}_a,\mathbf{u}_a\right)$, as well as the data in the respective blocks ($\mathbf{c}$ and $\mathbf{b}$) @@ -674,7 +658,6 @@ The structure of the block-sparse matrices is as follows. This can be graphically represented as -$$ \begin{align*} \mathbf{M} &\equiv \begin{bmatrix} \mathbf{M}_{0,0} && \cdots && \mathbf{M}_{0,N-1} \\ @@ -701,7 +684,6 @@ $$ \mathbf{M}_{N-1,N-1}\left[N_i-1,0\right] && \cdots && \mathbf{M}_{N-1,N-1}\left[N_i-1,N_j-1\right] \end{bmatrix} \end{bmatrix} \end{align*} -$$ Because of the sparse structure and the fact that all $\mathbf{M}_{i,j}$ have the same shape, it is much more efficient to store the blocks $\mathbf{M}_{i,j}$ in a vector $\mathbf{M}_{\tilde{k}}$ where $\tilde{k}$ is a reordered index from @@ -711,7 +693,6 @@ by the row-index $i$, and the inner vector containing the values of $j$ for whic All topologically relevant matrix elements, as well as [fill-ins](#pivot-operations), are included in this mapping. The following illustrates this mapping. -$$ \begin{align*} \begin{bmatrix} \mathbf{M}_{0,0} && && && \mathbf{M}_{0,3} \\ @@ -749,7 +730,6 @@ $$ [0 && && 2 && && 4 && && && 7 && && && && 10] \end{bmatrix} \end{align*} -$$ In the first equation, the upper row contains the present block entries and the bottom row their column indices per row to obtain a flattened representation of the matrix. @@ -875,17 +855,17 @@ as well as the well-known Let $\mathbf{M}$ be the matrix, $\left\|\mathbf{M}\right\|_{\infty ,\text{bwod}}$ the [block-wise off-diagonal infinite norm](#block-wise-off-diagonal-infinite-matrix-norm) of the matrix. -1. $\epsilon \gets \text{perturbation_threshold} * \left\|\mathbf{M}\right\|_{\text{bwod}}$. -2. If $|\text{pivot_element}| \lt \epsilon$, then: - 1. If $|\text{pivot_element}| = 0$, then: - 1. $\text{phase_shift} \gets 1$. +1. $\epsilon \gets \text{perturbation\_threshold} * \left\|\mathbf{M}\right\|_{\text{bwod}}$. +2. If $|\text{pivot\_element}| \lt \epsilon$, then: + 1. If $|\text{pivot\_element}| = 0$, then: + 1. $\text{phase\_shift} \gets 1$. 2. Proceed. 2. Else: - 1. 
$\text{phase_shift} \gets \text{pivot_element} / |\text{pivot_element}|$. + 1. $\text{phase\_shift} \gets \text{pivot\_element} / |\text{pivot\_element}|$. 2. Proceed. - 3. $\text{pivot_element} \gets \epsilon * \text{phase_shift}$. + 3. $\text{pivot\_element} \gets \epsilon * \text{phase\_shift}$. -$\text{phase_shift}$ ensures that the sign (if $\mathbf{M}$ is a real matrix) or complex phase (if $\mathbf{M}$ is a +$\text{phase\_shift}$ ensures that the sign (if $\mathbf{M}$ is a real matrix) or complex phase (if $\mathbf{M}$ is a complex matrix) of the pivot element is preserved. The positive real axis is used as a fallback when the pivot element is identically zero. @@ -915,7 +895,7 @@ Solving for $\boldsymbol{\Delta x}$ and substituting back into $\boldsymbol{x}_{i+1} = \boldsymbol{x}_i + \boldsymbol{\Delta x}$ provides the next best approximation $\boldsymbol{x}_{i+1}$ for $\boldsymbol{x}$. -A measure for the quality of the approximation is given by the $\text{backward_error}$ (see also +A measure for the quality of the approximation is given by the $\text{backward\_error}$ (see also [backward error formula](#backward-error-calculation)). Since the matrix $\mathbf{M}$ remains static during this process, the LU decomposition is valid throughout the process. @@ -928,11 +908,11 @@ algorithm is as follows: 1. Initialize: 1. Set the initial estimate: $\boldsymbol{x}_{\text{est}} = \boldsymbol{0}$. 2. Set the initial residual: $\boldsymbol{r} \gets \boldsymbol{b}$. - 3. Set the initial backward error: $\text{backward_error} = \infty$. + 3. Set the initial backward error: $\text{backward\_error} = \infty$. 4. Set the number of iterations to 0. 2. Iteratively refine; loop: 1. Check stop criteria: - 1. If $\text{backward_error} \leq \epsilon$, then: + 1. If $\text{backward\_error} \leq \epsilon$, then: 1. Convergence reached: stop the refinement process. 2. Else, if the number of iterations > maximum allowed amount of iterations, then: 1. Convergence not reached; iterative refinement not possible: raise a sparse matrix error. @@ -967,21 +947,19 @@ We use the following backward error calculation, inspired by [Li99](https://www.semanticscholar.org/paper/A-Scalable-Sparse-Direct-Solver-Using-Static-Li-Demmel/7ea1c3360826ad3996f387eeb6d70815e1eb3761), with a few modifications described [below](#improved-backward-error-calculation). -$$ \begin{align*} D_{\text{max}} &= \max_i\left\{ \left(\left|\mathbf{M}\right|\cdot\left|\boldsymbol{x}\right| + \left|\boldsymbol{b}\right|\right)_i \right\} \\ -\text{backward_error} &= \max_i \left\{ +\text{backward\_error} &= \max_i \left\{ \frac{\left|\boldsymbol{r}\right|_i}{ \max\left\{ \left(\left|\mathbf{M}\right|\cdot\left|\boldsymbol{x}\right| + \left|\boldsymbol{b}\right|\right)_i, - \epsilon_{\text{backward_error}} D_{\text{max}} + \epsilon_{\text{backward\_error}} D_{\text{max}} \right\} } \right\} \end{align*} -$$ $\epsilon \in \left[0, 1\right]$ is a value that introduces a [cut-off value to improve stability](#improved-backward-error-calculation) of the algorithm and should ideally be small. @@ -1006,7 +984,7 @@ contains an early-out criterion for the [iterative refinement](#iterative-refine for diminishing backward error in consecutive iterations. It amounts to (in reverse order): -1. If $\text{backward_error} \gt \frac{1}{2}\text{last_backward_error}$, then: +1. If $\text{backward\_error} \gt \frac{1}{2}\text{last\_backward\_error}$, then: 1. Stop iterative refinement. 2. Else: 1. Go to next refinement iteration. 
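The refinement loop and the capped backward error can be condensed into a short NumPy sketch. The callable `solve_with_lu`, the dense-matrix interface and the default tolerances are assumptions of this illustration, not the solver's actual API.

```python
import numpy as np


def iteratively_refine(m, b, solve_with_lu, error_tolerance=1e-8,
                       eps_backward_error=1e-16, max_iterations=20):
    """Iterative refinement with the capped backward error described above (sketch)."""
    x_est = np.zeros_like(b)
    residual = np.array(b, copy=True)
    backward_error = np.inf
    abs_m, abs_b = np.abs(m), np.abs(b)
    iterations = 0
    while backward_error > error_tolerance:
        if iterations > max_iterations:
            raise RuntimeError("sparse matrix error: iterative refinement did not converge")
        delta_x = solve_with_lu(residual)  # re-use the existing LU decomposition of M
        x_est = x_est + delta_x
        residual = b - m @ x_est
        denominator = abs_m @ np.abs(x_est) + abs_b
        # cap the denominators from below by a fraction of their maximum D_max
        denominator = np.maximum(denominator, eps_backward_error * denominator.max())
        last_backward_error = backward_error
        backward_error = float(np.max(np.abs(residual) / denominator))
        iterations += 1
        if backward_error > 0.5 * last_backward_error:
            break  # diminishing returns: early out
    return x_est
```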
@@ -1033,9 +1011,8 @@ their sum is prone to rounding errors, which may be several orders larger than m uses the following backward error in the [iterative refinement algorithm](#iterative-refinement-of-lu-solver-solutions): -$$ \begin{align*} -\text{backward_error}_{\text{Li}} +\text{backward\_error}_{\text{Li}} &= \max_i \frac{ \left|\boldsymbol{r}_i\right| }{ @@ -1052,7 +1029,6 @@ $$ \left(\left|\mathbf{M}\right| \cdot \left|\boldsymbol{x}\right| + \left|\boldsymbol{b}\right|\right)_i } \end{align*} -$$ In this equation, the symbolic notation $\left|\mathbf{M}\right|$ and $\left|\boldsymbol{x}\right|$ are the matrix and vector with absolute values of the elements of $\mathbf{M}$ and $\boldsymbol{x}$ as elements, i.e., @@ -1065,21 +1041,19 @@ iterative refinement to fail. The power grid model therefore uses a modified version, in which the denominator is capped to a minimum value, determined by the maximum across all denominators: -$$ \begin{align*} D_{\text{max}} &= \max_i\left\{ \left(\left|\mathbf{M}\right|\cdot\left|\boldsymbol{x}\right| + \left|\boldsymbol{b}\right|\right)_i \right\} \\ -\text{backward_error} &= \max_i \left\{ +\text{backward\_error} &= \max_i \left\{ \frac{\left|\boldsymbol{r}\right|_i}{ \max\left\{ \left(\left|\mathbf{M}\right|\cdot\left|\boldsymbol{x}\right| + \left|\boldsymbol{b}\right|\right)_i, - \epsilon_{\text{backward_error}} D_{\text{max}} + \epsilon_{\text{backward\_error}} D_{\text{max}} \right\} } \right\} \end{align*} -$$ $\epsilon$ may be chosen. $\epsilon = 0$ means no cut-off, while $\epsilon = 1$ means that only the absolute values of the residuals are @@ -1122,24 +1096,24 @@ with dimensions $N_i\times N_j$. 1. $\text{norm} \gets 0$. 2. Loop over all block-rows: $i = 0..(N-1)$: - 1. $\text{row_norm} \gets 0$. + 1. $\text{row\_norm} \gets 0$. 2. Loop over all block-columns: $j = 0..(N-1)$ (beware of sparse structure): 1. If $i = j$, then: 1. Skip this block: continue with the next block-column. 2. Else, calculate the $L_{\infty}$ norm of the current block and add to the current row norm: 1. the current block: $\mathbf{M}_{i,j} \gets \mathbf{M}\left[i,j\right]$. - 2. $\text{block_norm} \gets 0$. + 2. $\text{block\_norm} \gets 0$. 3. Loop over all rows of the current block: $k = 0..(N_{i,j} - 1)$: - 1. $\text{block_row_norm} \gets 0$. + 1. $\text{block\_row\_norm} \gets 0$. 2. Loop over all columns of the current block: $l = 0..(N_{i,j} - 1)$: - 1. $\text{block_row_norm} \gets \text{block_row_norm} + \left\|\mathbf{M}_{i,j}\left[k,l\right]\right\|$. + 1. $\text{block\_row\_norm} \gets \text{block\_row\_norm} + \left\|\mathbf{M}_{i,j}\left[k,l\right]\right\|$. 3. Calculate the new block norm: set - $\text{block_norm} \gets \max\left\{\text{block_norm}, \text{block_row_norm}\right\}$. + $\text{block\_norm} \gets \max\left\{\text{block\_norm}, \text{block\_row\_norm}\right\}$. 4. Continue with the next row of the current block. - 4. $\text{row_norm} \gets \text{row_norm} + \text{block_norm}$. + 4. $\text{row\_norm} \gets \text{row\_norm} + \text{block\_norm}$. 5. Continue with the next block-column. 3. Calculate the new norm: set - $\text{norm} \gets \max\left\{\text{norm}, \text{row_norm}\right\}$. + $\text{norm} \gets \max\left\{\text{norm}, \text{row\_norm}\right\}$. 4. Continue with the next block-row. 
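A plain-Python rendering of this pseudocode, using a dictionary of dense blocks as a stand-in for the compressed block-sparse storage described earlier; the function name and the data layout are illustrative only.

```python
import numpy as np


def block_wise_off_diagonal_infinity_norm(blocks: dict[tuple[int, int], np.ndarray],
                                          n_block_rows: int) -> float:
    """Block-wise off-diagonal infinite norm, following the pseudocode above (sketch).

    `blocks` maps (block-row, block-column) to a dense block; missing keys are structural zeros.
    """
    norm = 0.0
    for i in range(n_block_rows):
        row_norm = 0.0
        for j in range(n_block_rows):
            if i == j or (i, j) not in blocks:
                continue  # skip diagonal blocks and structural zeros
            # L-infinity norm of the block: maximum absolute row sum
            block_norm = float(np.max(np.sum(np.abs(blocks[(i, j)]), axis=1)))
            row_norm += block_norm
        norm = max(norm, row_norm)
    return norm
```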
##### Illustration of the block-wise off-diagonal infinite matrix norm calculation diff --git a/docs/user_manual/calculations.md b/docs/user_manual/calculations.md index 0bb9ba6385..93b7034919 100644 --- a/docs/user_manual/calculations.md +++ b/docs/user_manual/calculations.md @@ -316,48 +316,46 @@ $$ Where: -$$ - \begin{eqnarray} - x = \begin{bmatrix} - \delta \\ - U - \end{bmatrix} = - \begin{bmatrix} - \delta_2 \\ - \vdots \\ - \delta_N \\ - U_2 \\ - \vdots \\ - U_N - \end{bmatrix} - \quad\text{and}\quad - y = \begin{bmatrix} - P \\ - Q - \end{bmatrix} = - \begin{bmatrix} - P_2 \\ - \vdots \\ - P_N \\ - Q_2 \\ - \vdots \\ - Q_N - \end{bmatrix} - \quad\text{and}\quad - f(x) = \begin{bmatrix} - P(x) \\ - Q(x) - \end{bmatrix} = - \begin{bmatrix} - P_{2}(x) \\ - \vdots \\ - P_{N}(x) \\ - Q_{2}(x) \\ - \vdots \\ - Q_{N}(x) - \end{bmatrix} - \end{eqnarray} -$$ +\begin{align*} + x = \begin{bmatrix} + \delta \\ + U + \end{bmatrix} = + \begin{bmatrix} + \delta_2 \\ + \vdots \\ + \delta_N \\ + U_2 \\ + \vdots \\ + U_N + \end{bmatrix} + \quad\text{and}\quad + y = \begin{bmatrix} + P \\ + Q + \end{bmatrix} = + \begin{bmatrix} + P_2 \\ + \vdots \\ + P_N \\ + Q_2 \\ + \vdots \\ + Q_N + \end{bmatrix} + \quad\text{and}\quad + f(x) = \begin{bmatrix} + P(x) \\ + Q(x) + \end{bmatrix} = + \begin{bmatrix} + P_{2}(x) \\ + \vdots \\ + P_{N}(x) \\ + Q_{2}(x) \\ + \vdots \\ + Q_{N}(x) + \end{bmatrix} +\end{align*} As can be seen in the equations above $\delta_1$ and $V_1$ are omitted, because they are known for the slack bus. In each iteration $i$ the following equation is solved: @@ -469,16 +467,14 @@ When it is approximated as a current at 1 p.u., the voltage error is only linear Weighted least squares (WLS) state estimation can be performed with power-grid-model. Given a grid with $N_b$ buses the state variable column vector is defined as below. -$$ - \begin{eqnarray} - \underline{U} = \begin{bmatrix} - \underline{U}_1 \\ - \underline{U}_2 \\ - \vdots \\ - \underline{U}_{N_{b}} - \end{bmatrix} - \end{eqnarray} -$$ +\begin{align*} + \underline{U} = \begin{bmatrix} + \underline{U}_1 \\ + \underline{U}_2 \\ + \vdots \\ + \underline{U}_{N_{b}} + \end{bmatrix} +\end{align*} Where $\underline{U}_i$ is the complex voltage phasor of the i-th bus. 
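Before moving on to the measurement model, the Newton-Raphson scheme above can be summarized in a bare-bones sketch; the callables `f` and `jacobian` (the power flow mismatch and Jacobian assembly) are assumed to exist and are not spelled out here.

```python
import numpy as np


def newton_raphson(x0, y, f, jacobian, tolerance=1e-8, max_iterations=20):
    """Generic Newton-Raphson iteration: solve J(x_i) dx = y - f(x_i), then x_{i+1} = x_i + dx."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iterations):
        mismatch = y - f(x)  # deviation between specified and calculated quantities
        if np.max(np.abs(mismatch)) < tolerance:
            return x
        delta_x = np.linalg.solve(jacobian(x), mismatch)
        x = x + delta_x  # next best approximation of the state
    raise RuntimeError("Newton-Raphson did not converge")
```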
@@ -493,37 +489,35 @@ $$ Where: -$$ - \begin{eqnarray} - \underline{x} = \begin{bmatrix} - \underline{x}_1 \\ - \underline{x}_2 \\ - \vdots \\ - \underline{x}_{N_{m}} - \end{bmatrix} = - f(\underline{U}) - \quad\text{and}\quad - \underline{z} = \begin{bmatrix} - \underline{z}_1 \\ - \underline{z}_2 \\ - \vdots \\ - \underline{z}_{N_{m}} - \end{bmatrix} - \quad\text{and}\quad - W = \Sigma^{-1} = \begin{bmatrix} - \sigma_1^2 & 0 & \cdots & 0 \\ - 0 & \sigma_2^2 & \cdots & 0 \\ - \vdots & \vdots & \ddots & \vdots \\ - 0 & 0 & \cdots & \sigma_{N_{m}}^2 - \end{bmatrix} ^{-1} = - \begin{bmatrix} - w_1 & 0 & \cdots & 0 \\ - 0 & w_2 & \cdots & 0 \\ - \vdots & \vdots & \ddots & \vdots \\ - 0 & 0 & \cdots & w_{N_{m}} - \end{bmatrix} - \end{eqnarray} -$$ +\begin{align*} + \underline{x} = \begin{bmatrix} + \underline{x}_1 \\ + \underline{x}_2 \\ + \vdots \\ + \underline{x}_{N_{m}} + \end{bmatrix} = + f(\underline{U}) + \quad\text{and}\quad + \underline{z} = \begin{bmatrix} + \underline{z}_1 \\ + \underline{z}_2 \\ + \vdots \\ + \underline{z}_{N_{m}} + \end{bmatrix} + \quad\text{and}\quad + W = \Sigma^{-1} = \begin{bmatrix} + \sigma_1^2 & 0 & \cdots & 0 \\ + 0 & \sigma_2^2 & \cdots & 0 \\ + \vdots & \vdots & \ddots & \vdots \\ + 0 & 0 & \cdots & \sigma_{N_{m}}^2 + \end{bmatrix} ^{-1} = + \begin{bmatrix} + w_1 & 0 & \cdots & 0 \\ + 0 & w_2 & \cdots & 0 \\ + \vdots & \vdots & \ddots & \vdots \\ + 0 & 0 & \cdots & w_{N_{m}} + \end{bmatrix} +\end{align*} Where $\underline{x}_i$ is the real value of the i-th measured quantity in complex form, $\underline{z}_i$ is the i-th measured value in complex form, $\sigma_i$ is the normalized standard deviation of the measurement error of the i-th @@ -616,10 +610,22 @@ See also [the full mathematical workout](https://github.com/PowerGridModel/power $$ \begin{eqnarray} - & \mathrm{Re}\left\{I\right\} = I \cos\theta \\ - & \mathrm{Im}\left\{I\right\} = I \sin\theta \\ + & \mathrm{Re}\left\{I\right\} = I \cos\theta + \end{eqnarray} +$$ +$$ + \begin{eqnarray} + & \mathrm{Im}\left\{I\right\} = I \sin\theta + \end{eqnarray} +$$ +$$ + \begin{eqnarray} & \text{Var}\left(\mathrm{Re}\left\{I\right\}\right) = - \sigma_i^2 \cos^2\theta + I^2 \sigma_{\theta}^2\sin^2\theta \\ + \sigma_i^2 \cos^2\theta + I^2 \sigma_{\theta}^2\sin^2\theta + \end{eqnarray} +$$ +$$ + \begin{eqnarray} & \text{Var}\left(\mathrm{Im}\left\{I\right\}\right) = \sigma_i^2 \sin^2\theta + I^2 \sigma_{\theta}^2\cos^2\theta \end{eqnarray} diff --git a/docs/user_manual/components.md b/docs/user_manual/components.md index 3cdb4b8580..d01fca3802 100644 --- a/docs/user_manual/components.md +++ b/docs/user_manual/components.md @@ -149,12 +149,10 @@ scenarios within a batch are not three-phase faults (i.e. `fault_type` is not `F `line` is described by a $\pi$ model, where -$$ - \begin{eqnarray} - & Z_{\text{series}} = r + \mathrm{j}x \\ - & Y_{\text{shunt}} = \frac{2 \pi fc}{\tan \delta +\mathrm{j}} - \end{eqnarray} -$$ +\begin{align*} + & Z_{\text{series}} = r + \mathrm{j}x \\ + & Y_{\text{shunt}} = \frac{2 \pi fc}{\tan \delta +\mathrm{j}} +\end{align*} ### Link @@ -236,23 +234,18 @@ for detailed explanation. 
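The variance propagation for the real and imaginary current components given above can be written out directly. The snippet below is a worked example of those four equations with scalar inputs; it is not part of the library API.

```python
import math


def current_phasor_variances(i: float, theta: float, sigma_i: float, sigma_theta: float):
    """Propagate magnitude/angle standard deviations to Var(Re{I}) and Var(Im{I})."""
    var_re = (sigma_i * math.cos(theta)) ** 2 + (i * sigma_theta * math.sin(theta)) ** 2
    var_im = (sigma_i * math.sin(theta)) ** 2 + (i * sigma_theta * math.cos(theta)) ** 2
    return var_re, var_im
```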
`transformer` is described by a $\pi$ model, where $Z_{\text{series}}$ can be computed as -$$ - \begin{eqnarray} - & |Z_{\text{series}}| = u_k*z_{\text{base,transformer}} \\ - & \mathrm{Re}(Z_{\text{series}}) = \frac{p_k}{s_n}*z_{\text{base,transformer}} \\ - & \mathrm{Im}(Z_{\text{series}}) = \sqrt{|Z_{\text{series}}|^2-\mathrm{Re}(Z_{\text{series}})^2} \\ - \end{eqnarray} -$$ - +\begin{align*} + & |Z_{\text{series}}| = u_k*z_{\text{base,transformer}} \\ + & \mathrm{Re}(Z_{\text{series}}) = \frac{p_k}{s_n}*z_{\text{base,transformer}} \\ + & \mathrm{Im}(Z_{\text{series}}) = \sqrt{|Z_{\text{series}}|^2-\mathrm{Re}(Z_{\text{series}})^2} \\ +\end{align*} and $Y_{\text{shunt}}$ can be computed as -$$ - \begin{eqnarray} - & |Y_{\text{shunt}}| = i_0*y_{\text{base,transformer}} \\ - & \mathrm{Re}(Y_{\text{shunt}}) = \frac{p_0}{s_n}*y_{\text{base,transformer}} \\ - & \mathrm{Im}(Y_{\text{shunt}}) = -\sqrt{|Y_{\text{shunt}}|^2-\mathrm{Re}(Y_{\text{shunt}})^2} \\ - \end{eqnarray} -$$ +\begin{align*} + & |Y_{\text{shunt}}| = i_0*y_{\text{base,transformer}} \\ + & \mathrm{Re}(Y_{\text{shunt}}) = \frac{p_0}{s_n}*y_{\text{base,transformer}} \\ + & \mathrm{Im}(Y_{\text{shunt}}) = -\sqrt{|Y_{\text{shunt}}|^2-\mathrm{Re}(Y_{\text{shunt}})^2} \\ +\end{align*} where $z_{\text{base,transformer}} = 1 / y_{\text{base,transformer}} = {u_{\text{2}}}^2 / s_{\text{n}}$. @@ -308,13 +301,11 @@ Asymmetric calculation is not supported for `generic_branch`. `generic_branch` is described by a PI model, where -$$ - \begin{eqnarray} - & Y_{\text{series}} = \frac{1}{r + \mathrm{j}x} \\ - & Y_{\text{shunt}} = g + \mathrm{j}b \\ - & N = k \cdot e^{\mathrm{j} \theta} \\ - \end{eqnarray} -$$ +\begin{align*} + & Y_{\text{series}} = \frac{1}{r + \mathrm{j}x} \\ + & Y_{\text{shunt}} = g + \mathrm{j}b \\ + & N = k \cdot e^{\mathrm{j} \theta} +\end{align*} ### Asym Line @@ -630,25 +621,21 @@ Its value can be computed using following equations: * for positive sequence, -$$ - \begin{eqnarray} - & z_{\text{source}} = \frac{s_{\text{base}}}{s_k} \\ - & x_1 = \frac{z_{\text{source}}}{\sqrt{1+ \left(\frac{r}{x}\right)^2}} \\ - & r_1 = x_1 \cdot \left(\frac{r}{x}\right) - \end{eqnarray} -$$ +\begin{align*} + & z_{\text{source}} = \frac{s_{\text{base}}}{s_k} \\ + & x_1 = \frac{z_{\text{source}}}{\sqrt{1+ \left(\frac{r}{x}\right)^2}} \\ + & r_1 = x_1 \cdot \left(\frac{r}{x}\right) +\end{align*} where $s_{\text{base}}$ is a constant value determined by the solver, and $\frac{r}{x}$ indicates `rx_ratio` as input. * for zero-sequence, -$$ - \begin{eqnarray} - & z_{\text{source,0}} = z_{\text{source}} \cdot \frac{z_0}{z_1} \\ - & x_0 = \frac{z_{\text{source,0}}}{\sqrt{1 + \left(\frac{r}{x}\right)^2}} \\ - & r_0 = x_0 \cdot \left(\frac{r}{x}\right) - \end{eqnarray} -$$ +\begin{align*} + & z_{\text{source,0}} = z_{\text{source}} \cdot \frac{z_0}{z_1} \\ + & x_0 = \frac{z_{\text{source,0}}}{\sqrt{1 + \left(\frac{r}{x}\right)^2}} \\ + & r_0 = x_0 \cdot \left(\frac{r}{x}\right) +\end{align*} ### Generic Load and Generator @@ -815,12 +802,10 @@ For other calculation types, sensor output is undefined. 
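As a worked instance of the transformer equations above, the sketch below computes the series impedance and shunt admittance from the rated quantities; the plain-float interface and parameter names are assumptions of this illustration.

```python
import math


def transformer_pi_parameters(u_k, p_k, i_0, p_0, s_n, u_2):
    """Series impedance and shunt admittance of the transformer pi-model above (sketch)."""
    z_base = u_2 ** 2 / s_n  # z_base,transformer = 1 / y_base,transformer
    y_base = 1.0 / z_base
    z_abs = u_k * z_base
    z_re = (p_k / s_n) * z_base
    z_series = complex(z_re, math.sqrt(z_abs ** 2 - z_re ** 2))
    y_abs = i_0 * y_base
    y_re = (p_0 / s_n) * y_base
    y_shunt = complex(y_re, -math.sqrt(y_abs ** 2 - y_re ** 2))
    return z_series, y_shunt
```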
`generic_voltage_sensor` is modeled by following equations: -$$ - \begin{eqnarray} - & u_{\text{residual}} = u_{\text{measured}} - u_{\text{state}} \\ - & \theta_{\text{residual}} = \theta_{\text{measured}} - \theta_{\text{state}} \pmod{2 \pi} - \end{eqnarray} -$$ +\begin{align*} + & u_{\text{residual}} = u_{\text{measured}} - u_{\text{state}} \\ + & \theta_{\text{residual}} = \theta_{\text{measured}} - \theta_{\text{state}} \pmod{2 \pi} +\end{align*} The $\pmod{2\pi}$ is handled such that $-\pi \lt \theta_{\text{angle},\text{residual}} \leq \pi$. @@ -930,12 +915,10 @@ For other calculation types, sensor output is undefined. `Generic Power Sensor` is modeled by following equations: -$$ - \begin{eqnarray} - & p_{\text{residual}} = p_{\text{measured}} - p_{\text{state}} \\ - & q_{\text{residual}} = q_{\text{measured}} - q_{\text{state}} - \end{eqnarray} -$$ +\begin{align*} + & p_{\text{residual}} = p_{\text{measured}} - p_{\text{state}} \\ + & q_{\text{residual}} = q_{\text{measured}} - q_{\text{state}} +\end{align*} ### Generic Current Sensor @@ -1036,7 +1019,7 @@ $$ Current sensors with `angle_measurement_type` equal to `AngleMeasurementType.local_angle` measure the phase shift between the voltage and the current phasor, i.e., -$\text{i_angle_measured} = \text{voltage_phase} - \text{current_phase}$. +$\text{i\_angle\_measured} = \text{voltage\_phase} - \text{current\_phase}$. As a result, the global current phasor depends on the local voltage phase offset and is obtained using the following formula. @@ -1051,14 +1034,12 @@ As a result, the local angle current sensors have a different sign convention fr ##### Residuals -$$ - \begin{eqnarray} - & i_{\text{residual}} - = i_{\text{measured}} - i_{\text{state}} && \\ - & i_{\text{angle},\text{residual}} - = i_{\text{angle},\text{measured}} - i_{\text{angle},\text{state}} \pmod{2 \pi} - \end{eqnarray} -$$ +\begin{align*} + & i_{\text{residual}} + = i_{\text{measured}} - i_{\text{state}} \\ + & i_{\text{angle},\text{residual}} + = i_{\text{angle},\text{measured}} - i_{\text{angle},\text{state}} \pmod{2 \pi} +\end{align*} The $\pmod{2\pi}$ is handled such that $-\pi \lt i_{\text{angle},\text{residual}} \leq \pi$. @@ -1221,15 +1202,13 @@ tap_side control_side part of grid where voltage is to The control voltage is the voltage at the node, compensated with the voltage drop corresponding to the specified line drop compensation. -$$ - \begin{eqnarray} - & Z_{\text{compensation}} = r_{\text{compensation}} + \mathrm{j} x_{\text{compensation}} \\ - & U_{\text{control}} = \left|\underline{U}_{\text{node}} - \underline{I}_{\text{transformer,out}} - \cdot \underline{Z}_{\text{compensation}}\right| - = \left|\underline{U}_{\text{node}} + \underline{I}_{\text{transformer}} - \cdot \underline{Z}_{\text{compensation}}\right| - \end{eqnarray} -$$ +\begin{align*} + & Z_{\text{compensation}} = r_{\text{compensation}} + \mathrm{j} x_{\text{compensation}} \\ + & U_{\text{control}} = \left|\underline{U}_{\text{node}} - \underline{I}_{\text{transformer,out}} + \cdot \underline{Z}_{\text{compensation}}\right| + = \left|\underline{U}_{\text{node}} + \underline{I}_{\text{transformer}} + \cdot \underline{Z}_{\text{compensation}}\right| +\end{align*} where $\underline{U}_{\text{node}}$ and $\underline{I}_{\text{transformer}}$ are the calculated voltage and current phasors at the control side and may be obtained from a regular power flow calculation. @@ -1301,13 +1280,11 @@ A `voltage_regulator` has no short circuit output. 
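The line drop compensation above amounts to a single complex-valued expression. The snippet below is an illustrative check, assuming the voltage and current phasors are available from a power flow result.

```python
def control_voltage(u_node: complex, i_transformer: complex,
                    r_compensation: float, x_compensation: float) -> float:
    """Control voltage with line drop compensation, following the formula above (sketch)."""
    z_compensation = complex(r_compensation, x_compensation)
    # U_control = |U_node + I_transformer * Z_compensation|
    return abs(u_node + i_transformer * z_compensation)
```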
The voltage regulator controls the generator to behave as a **PV node** in power flow calculations: -$$ - \begin{eqnarray} - & P_{\text{gen}} = P_{\text{specified}} \\ - & |U_{\text{node}}| = U_{\text{ref}} \\ - & Q_{\text{gen}} = \text{calculated to satisfy } U_{\text{ref}} - \end{eqnarray} -$$ +\begin{align*} + & P_{\text{gen}} = P_{\text{specified}} \\ + & |U_{\text{node}}| = U_{\text{ref}} \\ + & Q_{\text{gen}} = \text{calculated to satisfy } U_{\text{ref}} +\end{align*} When `q_min` and `q_max` are provided, the reactive power should be constrained: @@ -1318,13 +1295,11 @@ $$ When fully implemented, if the reactive power constraints are violated, the generator will operate at the limit and the node becomes a PQ node: -$$ - \begin{eqnarray} - & P_{\text{gen}} = P_{\text{specified}} \\ - & Q_{\text{gen}} = Q_{\text{min}} \text{ or } Q_{\text{max}} \\ - & |U_{\text{node}}| = \text{calculated from power flow} - \end{eqnarray} -$$ +\begin{align*} + & P_{\text{gen}} = P_{\text{specified}} \\ + & Q_{\text{gen}} = Q_{\text{min}} \text{ or } Q_{\text{max}} \\ + & |U_{\text{node}}| = \text{calculated from power flow} +\end{align*} In this case, `limit_violated` will indicate which limit was exceeded, and the actual voltage at the node may differ from `u_ref`.
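As a schematic illustration of the PV-to-PQ switching behavior described above (a sketch of the described logic, not the actual implementation), the reactive power clamping and the reported limit could look as follows.

```python
def enforce_q_limits(q_calculated: float, q_min: float, q_max: float):
    """Clamp the regulator's reactive power and report which limit, if any, was violated."""
    if q_calculated < q_min:
        return q_min, "q_min"  # lower limit violated: the node becomes a PQ node at Q_min
    if q_calculated > q_max:
        return q_max, "q_max"  # upper limit violated: the node becomes a PQ node at Q_max
    return q_calculated, None  # within limits: the node remains a PV node
```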