
Divergence theorem


In vector calculus, the divergence theorem, also known as Gauss's theorem or Ostrogradsky's theorem, is a theorem relating the flux of a vector field through a closed surface to the divergence of the field in the volume enclosed.

More precisely, the divergence theorem states that the surface integral of a vector field over a closed surface, which is called the "flux" through the surface, is equal to the volume integral of the divergence over the region enclosed by the surface. Intuitively, it states that "the sum of all sources of the field in a region (with sinks regarded as negative sources) gives the net flux out of the region".

The divergence theorem is an important result for the mathematics of physics and engineering, particularly in electrostatics and fluid dynamics. In these fields, it is usually applied in three dimensions. However, it generalizes to any number of dimensions. In one dimension, it is equivalent to the fundamental theorem of calculus. In two dimensions, it is equivalent to Green's theorem.

Explanation using liquid flow

Vector fields are often illustrated using the example of the velocity field of a fluid, such as a gas or liquid. A moving liquid has a velocity—a speed and a direction—at each point, which can be represented by a vector, so that the velocity of the liquid at any moment forms a vector field. Consider an imaginary closed surface S inside a body of liquid, enclosing a volume of liquid. The flux of liquid out of the volume at any time is equal to the volume rate of fluid crossing this surface, i.e., the surface integral of the velocity over the surface.

Since liquids are incompressible, the amount of liquid inside a closed volume is constant; if there are no sources or sinks inside the volume then the flux of liquid out of S is zero. If the liquid is moving, it may flow into the volume at some points on the surface S and out of the volume at other points, but the amounts flowing in and out at any moment are equal, so the net flux of liquid out of the volume is zero.

However if a source of liquid is inside the closed surface, such as a pipe through which liquid is introduced, the additional liquid will exert pressure on the surrounding liquid, causing an outward flow in all directions. This will cause a net outward flow through the surface S. The flux outward through S equals the volume rate of flow of fluid into S from the pipe. Similarly if there is a sink or drain inside S, such as a pipe which drains the liquid off, the external pressure of the liquid will cause a velocity throughout the liquid directed inward toward the location of the drain. The volume rate of flow of liquid inward through the surface S equals the rate of liquid removed by the sink.

If there are multiple sources and sinks of liquid inside S, the flux through the surface can be calculated by adding up the volume rate of liquid added by the sources and subtracting the rate of liquid drained off by the sinks. The volume rate of flow of liquid through a source or sink (with the flow through a sink given a negative sign) is equal to the divergence of the velocity field at the pipe mouth, so adding up (integrating) the divergence of the liquid throughout the volume enclosed by S equals the volume rate of flux through S. This is the divergence theorem.

The divergence theorem is employed in any conservation law which states that the total volume of all sinks and sources, that is the volume integral of the divergence, is equal to the net flow across the volume's boundary.

Mathematical statement

Suppose $V$ is a subset of $\mathbb{R}^n$ (in the case of $n = 3$, $V$ represents a volume in three-dimensional space) which is compact and has a piecewise smooth boundary $S$ (also indicated with $\partial V = S$). If $\mathbf{F}$ is a continuously differentiable vector field defined on a neighborhood of $V$, then:

$$\iiint_V (\nabla \cdot \mathbf{F})\,\mathrm{d}V = \oiint_S (\mathbf{F} \cdot \hat{\mathbf{n}})\,\mathrm{d}S.$$

The left side is a volume integral over the volume $V$, and the right side is the surface integral over the boundary of the volume $V$. The closed, measurable set $\partial V$ is oriented by outward-pointing normals, and $\hat{\mathbf{n}}$ is the outward-pointing unit normal at almost every point on the boundary $\partial V$. ($\mathrm{d}\mathbf{S}$ may be used as a shorthand for $\mathbf{n}\,\mathrm{d}S$.) In terms of the intuitive description above, the left-hand side of the equation represents the total of the sources in the volume $V$, and the right-hand side represents the total flow across the boundary $S$.
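
As a concrete illustration (not part of the article's statement), the identity can be checked symbolically for a simple field on the unit cube. The field, the region, and the use of Python with SymPy below are illustrative assumptions:

```python
# A minimal sketch (illustrative): checking the divergence theorem symbolically
# for F = (x^2, y^2, z^2) on the unit cube [0,1]^3 with SymPy.
import sympy as sp

x, y, z = sp.symbols('x y z')
F = sp.Matrix([x**2, y**2, z**2])          # the vector field
div_F = sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

# Volume integral of the divergence over the cube.
volume_integral = sp.integrate(div_F, (x, 0, 1), (y, 0, 1), (z, 0, 1))

# Surface integral: sum F·n over the six faces (outward normals ±e_i).
flux = 0
for i, var in enumerate((x, y, z)):
    others = [(v, 0, 1) for v in (x, y, z) if v != var]
    flux += sp.integrate(F[i].subs(var, 1), *others)   # face var = 1, n = +e_i
    flux -= sp.integrate(F[i].subs(var, 0), *others)   # face var = 0, n = -e_i

print(volume_integral, flux)   # both print 3
```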

Informal derivation

The divergence theorem follows from the fact that if a volume V is partitioned into separate parts, the flux out of the original volume is equal to the sum of the flux out of each component volume. This is true despite the fact that the new subvolumes have surfaces that were not part of the original volume's surface, because these surfaces are just partitions between two of the subvolumes and the flux through them just passes from one volume to the other and so cancels out when the flux out of the subvolumes is summed.

See the diagram. A closed, bounded volume V is divided into two volumes V1 and V2 by a surface S3 (green). The flux Φ(Vi) out of each component region Vi is equal to the sum of the flux through its two faces, so the sum of the flux out of the two parts is

$$\Phi(V_1) + \Phi(V_2) = \Phi_1 + \Phi_{31} + \Phi_2 + \Phi_{32}$$

where $\Phi_1$ and $\Phi_2$ are the flux out of surfaces $S_1$ and $S_2$, $\Phi_{31}$ is the flux through $S_3$ out of volume 1, and $\Phi_{32}$ is the flux through $S_3$ out of volume 2. The point is that surface $S_3$ is part of the surface of both volumes. The "outward" direction of the normal vector $\hat{\mathbf{n}}$ is opposite for each volume, so the flux out of one through $S_3$ is equal to the negative of the flux out of the other:

$$\Phi_{31} = \iint_{S_3} \mathbf{F} \cdot \hat{\mathbf{n}}\,\mathrm{d}S = -\iint_{S_3} \mathbf{F} \cdot (-\hat{\mathbf{n}})\,\mathrm{d}S = -\Phi_{32}$$

so these two fluxes cancel in the sum. Therefore

$$\Phi(V_1) + \Phi(V_2) = \Phi_1 + \Phi_2$$

Since the union of surfaces S1 and S2 is S

$$\Phi(V_1) + \Phi(V_2) = \Phi(V)$$

This principle applies to a volume divided into any number of parts, as shown in the diagram. Since the integral over each internal partition (green surfaces) appears with opposite signs in the flux of the two adjacent volumes, these contributions cancel out, and the only contribution to the flux is the integral over the external surfaces (grey). Since the external surfaces of all the component volumes together make up the original surface,

$$\Phi(V) = \sum_{V_i \subset V} \Phi(V_i)$$

The flux Φ out of each volume is the surface integral of the vector field F(x) over the surface

$$\iint_{S(V)} \mathbf{F} \cdot \hat{\mathbf{n}}\,\mathrm{d}S = \sum_{V_i \subset V} \iint_{S(V_i)} \mathbf{F} \cdot \hat{\mathbf{n}}\,\mathrm{d}S$$

The goal is to divide the original volume into infinitely many infinitesimal volumes. As the volume is divided into smaller and smaller parts, the surface integral on the right, the flux out of each subvolume, approaches zero because the surface area $S(V_i)$ approaches zero. However, from the definition of divergence, the ratio of flux to volume, $\frac{\Phi(V_i)}{|V_i|} = \frac{1}{|V_i|} \iint_{S(V_i)} \mathbf{F} \cdot \hat{\mathbf{n}}\,\mathrm{d}S$, the part in parentheses below, does not in general vanish but approaches the divergence $\operatorname{div} \mathbf{F}$ as the volume approaches zero.

$$\iint_{S(V)} \mathbf{F} \cdot \hat{\mathbf{n}}\,\mathrm{d}S = \sum_{V_i \subset V} \left( \frac{1}{|V_i|} \iint_{S(V_i)} \mathbf{F} \cdot \hat{\mathbf{n}}\,\mathrm{d}S \right) |V_i|$$

As long as the vector field F(x) has continuous derivatives, the sum above holds even in the limit when the volume is divided into infinitely small increments

$$\iint_{S(V)} \mathbf{F} \cdot \hat{\mathbf{n}}\,\mathrm{d}S = \lim_{|V_i| \to 0} \sum_{V_i \subset V} \left( \frac{1}{|V_i|} \iint_{S(V_i)} \mathbf{F} \cdot \hat{\mathbf{n}}\,\mathrm{d}S \right) |V_i|$$

As $|V_i|$ approaches zero volume, it becomes the infinitesimal $\mathrm{d}V$, the part in parentheses becomes the divergence, and the sum becomes a volume integral over $V$:

$$\iint_{S(V)} \mathbf{F} \cdot \hat{\mathbf{n}}\,\mathrm{d}S = \iiint_V \operatorname{div} \mathbf{F}\,\mathrm{d}V$$

Since this derivation is coordinate free, it shows that the divergence does not depend on the coordinates used.
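
The limit described above can also be observed directly. The sketch below is an illustrative assumption, not from the article: the field $\mathbf{F} = (x^3, y^3, z^3)$, the base point $p$, and the use of SymPy are arbitrary choices. It computes the exact flux out of a shrinking cube centred at $p$ and shows the flux-to-volume ratio approaching $\operatorname{div}\mathbf{F}(p)$:

```python
# Illustrative sketch: flux out of a shrinking cube, divided by its volume,
# approaches the divergence of F at the cube's centre.
import sympy as sp

x, y, z = sp.symbols('x y z')
F = sp.Matrix([x**3, y**3, z**3])                 # an arbitrary smooth field
div_F = sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)
p = {x: sp.Rational(3, 10), y: sp.Rational(2, 5), z: sp.Rational(1, 2)}

def flux_out_of_cube(h):
    """Exact outward flux of F through the boundary of the cube of side h centred at p."""
    total = 0
    for i, var in enumerate((x, y, z)):
        others = [(v, p[v] - h/2, p[v] + h/2) for v in (x, y, z) if v != var]
        total += sp.integrate(F[i].subs(var, p[var] + h/2), *others)   # outer face, n = +e_i
        total -= sp.integrate(F[i].subs(var, p[var] - h/2), *others)   # inner face, n = -e_i
    return total

for h in [sp.Rational(1, 2), sp.Rational(1, 10), sp.Rational(1, 100)]:
    print(h, float(flux_out_of_cube(h) / h**3))   # 1.6875, 1.5075, 1.500075, ...

print(float(div_F.subs(p)))                        # the limit: div F(p) = 1.5
```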

Proofs

For bounded open subsets of Euclidean space

We are going to prove the following theorem, stated here in its gradient form (the divergence theorem follows by applying it to each component of a vector field): suppose $\Omega \subset \mathbb{R}^n$ is a bounded open set whose boundary $\partial\Omega$ is $C^1$, and let $O$ be an open set containing $\overline{\Omega}$. If $u \in C^1(O)$ and $\nu$ is the outward unit normal vector field on $\partial\Omega$, then

$$\int_\Omega \nabla u\,dV = \int_{\partial\Omega} u\,\nu\,dS.$$

Proof of Theorem. (1) The first step is to reduce to the case where $u \in C_c^1(\mathbb{R}^n)$. Pick $\phi \in C_c^\infty(O)$ such that $\phi = 1$ on $\overline{\Omega}$. Note that $\phi u \in C_c^1(O) \subset C_c^1(\mathbb{R}^n)$ and $\phi u = u$ on $\overline{\Omega}$. Hence it suffices to prove the theorem for $\phi u$, so we may assume that $u \in C_c^1(\mathbb{R}^n)$.

(2) Let $x_0 \in \partial\Omega$ be arbitrary. The assumption that $\overline{\Omega}$ has $C^1$ boundary means that there is an open neighborhood $U$ of $x_0$ in $\mathbb{R}^n$ such that $\partial\Omega \cap U$ is the graph of a $C^1$ function with $\Omega \cap U$ lying on one side of this graph. More precisely, this means that after a translation and rotation of $\Omega$, there are $r > 0$ and $h > 0$ and a $C^1$ function $g : \mathbb{R}^{n-1} \to \mathbb{R}$, such that with the notation $x' = (x_1, \dots, x_{n-1})$, it holds that

$$U = \{ x \in \mathbb{R}^n : |x'| < r \text{ and } |x_n - g(x')| < h \}$$

and for $x \in U$,

$$\begin{aligned} x_n = g(x') &\implies x \in \partial\Omega, \\ -h < x_n - g(x') < 0 &\implies x \in \Omega, \\ 0 < x_n - g(x') < h &\implies x \notin \Omega. \end{aligned}$$

Since $\partial\Omega$ is compact, we can cover $\partial\Omega$ with finitely many neighborhoods $U_1, \dots, U_N$ of the above form. Note that $\{\Omega, U_1, \dots, U_N\}$ is an open cover of $\overline{\Omega} = \Omega \cup \partial\Omega$. By using a $C^\infty$ partition of unity subordinate to this cover, it suffices to prove the theorem in the case where either $u$ has compact support in $\Omega$ or $u$ has compact support in some $U_j$. If $u$ has compact support in $\Omega$, then for all $i \in \{1, \dots, n\}$,

$$\int_\Omega u_{x_i}\,dV = \int_{\mathbb{R}^n} u_{x_i}\,dV = \int_{\mathbb{R}^{n-1}} \int_{-\infty}^{\infty} u_{x_i}(x)\,dx_i\,dx' = 0$$

by the fundamental theorem of calculus, and $\int_{\partial\Omega} u \nu_i\,dS = 0$ since $u$ vanishes on a neighborhood of $\partial\Omega$. Thus the theorem holds for $u$ with compact support in $\Omega$. We have therefore reduced to the case where $u$ has compact support in some $U_j$.

(3) So assume $u$ has compact support in some $U_j$. The last step is to prove the theorem by direct computation. Change notation to $U = U_j$ and bring in the notation from (2) used to describe $U$. Note that this means that we have rotated and translated $\Omega$; this is a valid reduction since the theorem is invariant under rotations and translations of coordinates. Since $u(x) = 0$ for $|x'| \geq r$ and for $|x_n - g(x')| \geq h$, we have for each $i \in \{1, \dots, n\}$ that

$$\begin{aligned} \int_\Omega u_{x_i}\,dV &= \int_{|x'| < r} \int_{g(x')-h}^{g(x')} u_{x_i}(x', x_n)\,dx_n\,dx' \\ &= \int_{\mathbb{R}^{n-1}} \int_{-\infty}^{g(x')} u_{x_i}(x', x_n)\,dx_n\,dx'. \end{aligned}$$

For $i = n$ we have by the fundamental theorem of calculus that

$$\int_{\mathbb{R}^{n-1}} \int_{-\infty}^{g(x')} u_{x_n}(x', x_n)\,dx_n\,dx' = \int_{\mathbb{R}^{n-1}} u(x', g(x'))\,dx'.$$

Now fix $i \in \{1, \dots, n-1\}$. Note that

$$\int_{\mathbb{R}^{n-1}} \int_{-\infty}^{g(x')} u_{x_i}(x', x_n)\,dx_n\,dx' = \int_{\mathbb{R}^{n-1}} \int_{-\infty}^{0} u_{x_i}(x', g(x') + s)\,ds\,dx'.$$

Define $v : \mathbb{R}^n \to \mathbb{R}$ by $v(x', s) = u(x', g(x') + s)$. By the chain rule,

$$v_{x_i}(x', s) = u_{x_i}(x', g(x') + s) + u_{x_n}(x', g(x') + s)\, g_{x_i}(x').$$

But since $v$ has compact support, we can integrate out $dx_i$ first to deduce that

$$\int_{\mathbb{R}^{n-1}} \int_{-\infty}^{0} v_{x_i}(x', s)\,ds\,dx' = 0.$$

Thus

$$\begin{aligned} \int_{\mathbb{R}^{n-1}} \int_{-\infty}^{0} u_{x_i}(x', g(x') + s)\,ds\,dx' &= \int_{\mathbb{R}^{n-1}} \int_{-\infty}^{0} -u_{x_n}(x', g(x') + s)\, g_{x_i}(x')\,ds\,dx' \\ &= \int_{\mathbb{R}^{n-1}} -u(x', g(x'))\, g_{x_i}(x')\,dx'. \end{aligned}$$

In summary, with $\nabla u = (u_{x_1}, \dots, u_{x_n})$ we have

$$\int_\Omega \nabla u\,dV = \int_{\mathbb{R}^{n-1}} \int_{-\infty}^{g(x')} \nabla u\,dV = \int_{\mathbb{R}^{n-1}} u(x', g(x'))\, (-\nabla g(x'), 1)\,dx'.$$

Recall that the outward unit normal to the graph $\Gamma$ of $g$ at a point $(x', g(x')) \in \Gamma$ is

$$\nu(x', g(x')) = \frac{1}{\sqrt{1 + |\nabla g(x')|^2}} (-\nabla g(x'), 1)$$

and that the surface element $dS$ is given by $dS = \sqrt{1 + |\nabla g(x')|^2}\,dx'$. Thus

$$\int_\Omega \nabla u\,dV = \int_{\partial\Omega} u\,\nu\,dS.$$

This completes the proof.
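
As a quick sanity check of the identity just proved (illustrative only; the domain, the function $u$, and the use of SymPy are assumptions, not part of the proof), one can verify $\int_\Omega \nabla u\,dV = \int_{\partial\Omega} u\,\nu\,dS$ on the unit disk in $\mathbb{R}^2$:

```python
# Illustrative sketch: gradient form of the divergence theorem on the unit disk,
# with u(x, y) = x**2 + y.
import sympy as sp

x, y, r, t = sp.symbols('x y r t')
u = x**2 + y

# Left side: integrate each partial derivative over the disk in polar coordinates
# (area element r dr dt).
grad_u = [sp.diff(u, x), sp.diff(u, y)]
lhs = [sp.integrate(g.subs({x: r*sp.cos(t), y: r*sp.sin(t)}) * r,
                    (r, 0, 1), (t, 0, 2*sp.pi)) for g in grad_u]

# Right side: on the unit circle the outward normal is nu = (cos t, sin t) and dS = dt.
nu = [sp.cos(t), sp.sin(t)]
u_on_boundary = u.subs({x: sp.cos(t), y: sp.sin(t)})
rhs = [sp.integrate(u_on_boundary * n, (t, 0, 2*sp.pi)) for n in nu]

print(lhs, rhs)   # both give [0, pi]
```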

For compact Riemannian manifolds with boundary

We are going to prove the following: let $\overline{\Omega}$ be a compact Riemannian manifold with boundary and metric tensor $g$; let $\Omega$ denote its interior and $\partial\Omega$ its boundary. Let $(\cdot,\cdot)$ denote the $L^2(\Omega)$ inner product (of functions, and of vector fields via the metric), and $\langle\cdot,\cdot\rangle$ the pointwise inner product of vectors. Suppose $u$ is a $C^1$ function and $X$ a $C^1$ vector field on $\overline{\Omega}$, and let $N$ be the outward-pointing unit normal along $\partial\Omega$ with induced surface measure $dS$. Then

$$(\operatorname{grad} u, X) = (u, -\operatorname{div} X) + \int_{\partial\Omega} u\,\langle X, N\rangle\,dS.$$

Proof of Theorem. We use the Einstein summation convention. By using a partition of unity, we may assume that $u$ and $X$ have compact support in a coordinate patch $O \subset \overline{\Omega}$. First consider the case where the patch is disjoint from $\partial\Omega$. Then $O$ is identified with an open subset of $\mathbb{R}^n$ and integration by parts produces no boundary terms:

$$\begin{aligned} (\operatorname{grad} u, X) &= \int_O \langle \operatorname{grad} u, X \rangle \sqrt{g}\,dx \\ &= \int_O \partial_j u\, X^j \sqrt{g}\,dx \\ &= -\int_O u\,\partial_j(\sqrt{g}\,X^j)\,dx \\ &= -\int_O u\,\frac{1}{\sqrt{g}}\,\partial_j(\sqrt{g}\,X^j)\,\sqrt{g}\,dx \\ &= \left(u, -\frac{1}{\sqrt{g}}\,\partial_j(\sqrt{g}\,X^j)\right) \\ &= (u, -\operatorname{div} X). \end{aligned}$$

In the last equality we used the Voss–Weyl coordinate formula for the divergence, although the preceding identity could be used to define $-\operatorname{div}$ as the formal adjoint of $\operatorname{grad}$. Now suppose $O$ intersects $\partial\Omega$. Then $O$ is identified with an open set in $\mathbb{R}_+^n = \{x \in \mathbb{R}^n : x_n \geq 0\}$. We extend $u$ and $X$ by zero to $\mathbb{R}_+^n$ and perform integration by parts to obtain

$$\begin{aligned} (\operatorname{grad} u, X) &= \int_O \langle \operatorname{grad} u, X \rangle \sqrt{g}\,dx \\ &= \int_{\mathbb{R}_+^n} \partial_j u\, X^j \sqrt{g}\,dx \\ &= (u, -\operatorname{div} X) - \int_{\mathbb{R}^{n-1}} u(x', 0)\, X^n(x', 0)\, \sqrt{g(x', 0)}\,dx', \end{aligned}$$

where $dx' = dx_1 \dots dx_{n-1}$. By a variant of the straightening theorem for vector fields, we may choose $O$ so that $\frac{\partial}{\partial x_n}$ is the inward unit normal $-N$ at $\partial\Omega$. In this case $\sqrt{g(x', 0)}\,dx' = \sqrt{g_{\partial\Omega}(x')}\,dx' = dS$ is the volume element on $\partial\Omega$, and the above formula reads

$$(\operatorname{grad} u, X) = (u, -\operatorname{div} X) + \int_{\partial\Omega} u\,\langle X, N\rangle\,dS.$$

This completes the proof.
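
The Voss–Weyl formula invoked above can be checked in a familiar chart. The sketch below (illustrative; the spherical chart and the particular component functions are arbitrary assumptions) compares it with the textbook divergence formula in spherical coordinates:

```python
# Illustrative sketch: verifying the Voss-Weyl formula div X = (1/sqrt(g)) d_j(sqrt(g) X^j)
# in spherical coordinates against the standard divergence formula there.
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
sqrt_g = r**2 * sp.sin(th)                       # sqrt(det g) for the spherical metric

# Physical (orthonormal-frame) components of an arbitrary-looking field.
Fr, Fth, Fph = r*sp.cos(th), r**2*sp.sin(ph), sp.sin(th)*sp.cos(ph)

# Contravariant components X^j (physical component divided by the scale factor).
Xr, Xth, Xph = Fr, Fth/r, Fph/(r*sp.sin(th))

voss_weyl = (sp.diff(sqrt_g*Xr, r) + sp.diff(sqrt_g*Xth, th) + sp.diff(sqrt_g*Xph, ph)) / sqrt_g

textbook = (sp.diff(r**2*Fr, r)/r**2
            + sp.diff(sp.sin(th)*Fth, th)/(r*sp.sin(th))
            + sp.diff(Fph, ph)/(r*sp.sin(th)))

print(sp.simplify(voss_weyl - textbook))          # 0
```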

Corollaries

By replacing F in the divergence theorem with specific forms, other useful identities can be derived (cf. vector identities).

  • With $\mathbf{F} \rightarrow \mathbf{F} g$ for a scalar function $g$ and a vector field $\mathbf{F}$,
$$\iiint_V \left[ \mathbf{F} \cdot (\nabla g) + g (\nabla \cdot \mathbf{F}) \right] \mathrm{d}V = \oiint_S g\,\mathbf{F} \cdot \mathbf{n}\,\mathrm{d}S.$$
A special case of this is $\mathbf{F} = \nabla f$, in which case the theorem is the basis for Green's identities.
  • With $\mathbf{F} \rightarrow \mathbf{F} \times \mathbf{G}$ for two vector fields $\mathbf{F}$ and $\mathbf{G}$, where $\times$ denotes the cross product (a symbolic check of this identity is sketched after the list),
$$\iiint_V \nabla \cdot (\mathbf{F} \times \mathbf{G})\,\mathrm{d}V = \iiint_V \left[ \mathbf{G} \cdot (\nabla \times \mathbf{F}) - \mathbf{F} \cdot (\nabla \times \mathbf{G}) \right] \mathrm{d}V = \oiint_S (\mathbf{F} \times \mathbf{G}) \cdot \mathbf{n}\,\mathrm{d}S.$$
  • With $\mathbf{F} \rightarrow \mathbf{F} \cdot \mathbf{G}$ for two vector fields $\mathbf{F}$ and $\mathbf{G}$, where $\cdot$ denotes the dot product,
$$\iiint_V \nabla (\mathbf{F} \cdot \mathbf{G})\,\mathrm{d}V = \iiint_V \left[ (\nabla \mathbf{G}) \cdot \mathbf{F} + (\nabla \mathbf{F}) \cdot \mathbf{G} \right] \mathrm{d}V = \oiint_S (\mathbf{F} \cdot \mathbf{G})\,\mathbf{n}\,\mathrm{d}S.$$
  • With $\mathbf{F} \rightarrow f\mathbf{c}$ for a scalar function $f$ and vector field $\mathbf{c}$:
$$\iiint_V \mathbf{c} \cdot \nabla f\,\mathrm{d}V = \oiint_S (\mathbf{c} f) \cdot \mathbf{n}\,\mathrm{d}S - \iiint_V f (\nabla \cdot \mathbf{c})\,\mathrm{d}V.$$
The last term on the right vanishes for constant $\mathbf{c}$ or any divergence-free (solenoidal) vector field, e.g. incompressible flows without sources or sinks such as phase changes or chemical reactions. In particular, taking $\mathbf{c}$ to be constant:
$$\iiint_V \nabla f\,\mathrm{d}V = \oiint_S f\,\mathbf{n}\,\mathrm{d}S.$$
  • With $\mathbf{F} \rightarrow \mathbf{c} \times \mathbf{F}$ for a vector field $\mathbf{F}$ and constant vector $\mathbf{c}$:
$$\iiint_V \mathbf{c} \cdot (\nabla \times \mathbf{F})\,\mathrm{d}V = \oiint_S (\mathbf{F} \times \mathbf{c}) \cdot \mathbf{n}\,\mathrm{d}S.$$
By reordering the triple product on the right-hand side and taking the constant vector out of the integral,
$$\iiint_V (\nabla \times \mathbf{F})\,\mathrm{d}V \cdot \mathbf{c} = \oiint_S (\mathrm{d}\mathbf{S} \times \mathbf{F}) \cdot \mathbf{c}.$$
Hence,
$$\iiint_V (\nabla \times \mathbf{F})\,\mathrm{d}V = \oiint_S \mathbf{n} \times \mathbf{F}\,\mathrm{d}S.$$
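
As promised after the second item, here is a symbolic check of the pointwise identity $\nabla \cdot (\mathbf{F} \times \mathbf{G}) = \mathbf{G} \cdot (\nabla \times \mathbf{F}) - \mathbf{F} \cdot (\nabla \times \mathbf{G})$ underlying that corollary (illustrative only; the fields $\mathbf{F}$ and $\mathbf{G}$ and the use of SymPy's vector module are arbitrary assumptions):

```python
# Illustrative sketch: the vector identity behind the second corollary,
# checked symbolically for two sample fields.
from sympy.vector import CoordSys3D, divergence, curl
import sympy as sp

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

F = x*y*N.i + y*z*N.j + z*x*N.k
G = sp.sin(x)*N.i + sp.cos(y)*N.j + x*y*z*N.k

lhs = divergence(F.cross(G))
rhs = G.dot(curl(F)) - F.dot(curl(G))

print(sp.simplify(lhs - rhs))   # 0
```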

Example

Suppose we wish to evaluate

$$\oiint_S \mathbf{F} \cdot \mathbf{n}\,\mathrm{d}S,$$

where S is the unit sphere defined by

$$S = \left\{ (x, y, z) \in \mathbb{R}^3 \ : \ x^2 + y^2 + z^2 = 1 \right\},$$

and F is the vector field

$$\mathbf{F} = 2x\,\mathbf{i} + y^2\,\mathbf{j} + z^2\,\mathbf{k}.$$

The direct computation of this integral is quite difficult, but the divergence theorem simplifies it considerably, because it says that the integral is equal to:

$$\iiint_W (\nabla \cdot \mathbf{F})\,\mathrm{d}V = 2 \iiint_W (1 + y + z)\,\mathrm{d}V = 2 \iiint_W \mathrm{d}V + 2 \iiint_W y\,\mathrm{d}V + 2 \iiint_W z\,\mathrm{d}V,$$

where W is the unit ball:

$$W = \left\{ (x, y, z) \in \mathbb{R}^3 \ : \ x^2 + y^2 + z^2 \leq 1 \right\}.$$

Since the function y is positive in one hemisphere of W and negative in the other, in an equal and opposite way, its total integral over W is zero. The same is true for z:

$$\iiint_W y\,\mathrm{d}V = \iiint_W z\,\mathrm{d}V = 0.$$

Therefore,

$$\oiint_S \mathbf{F} \cdot \mathbf{n}\,\mathrm{d}S = 2 \iiint_W \mathrm{d}V = \frac{8\pi}{3},$$

because the unit ball W has volume 4π/3.
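
Both sides of this example can also be evaluated independently as a check, for instance symbolically in spherical coordinates (illustrative only; the parametrization and the use of SymPy are assumptions, not part of the article):

```python
# Illustrative sketch: evaluating both sides of the unit-sphere example with SymPy.
import sympy as sp

th, ph, r = sp.symbols('theta phi rho', nonnegative=True)

# Surface side: on the unit sphere n = (x, y, z) and dS = sin(theta) dtheta dphi.
x, y, z = sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)
F_dot_n = 2*x*x + y**2*y + z**2*z            # F.n = 2x^2 + y^3 + z^3
surface = sp.integrate(F_dot_n * sp.sin(th), (th, 0, sp.pi), (ph, 0, 2*sp.pi))

# Volume side: div F = 2 + 2y + 2z integrated over the unit ball in spherical coordinates.
xb, yb, zb = r*x, r*y, r*z
div_F = 2 + 2*yb + 2*zb
volume = sp.integrate(div_F * r**2 * sp.sin(th), (r, 0, 1), (th, 0, sp.pi), (ph, 0, 2*sp.pi))

print(surface, volume)   # both print 8*pi/3
```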

Applications

Differential and integral forms of physical laws

As a result of the divergence theorem, a host of physical laws can be written in both a differential form (where one quantity is the divergence of another) and an integral form (where the flux of one quantity through a closed surface is equal to another quantity). Three examples are Gauss's law (in electrostatics), Gauss's law for magnetism, and Gauss's law for gravity.

Continuity equations

Continuity equations offer more examples of laws with both differential and integral forms, related to each other by the divergence theorem. In fluid dynamics, electromagnetism, quantum mechanics, relativity theory, and a number of other fields, there are continuity equations that describe the conservation of mass, momentum, energy, probability, or other quantities. Generically, these equations state that the divergence of the flow of the conserved quantity is equal to the distribution of sources or sinks of that quantity. The divergence theorem states that any such continuity equation can be written in a differential form (in terms of a divergence) and an integral form (in terms of a flux).

Inverse-square laws

Any inverse-square law can instead be written in a Gauss's law-type form (with a differential and integral form, as described above). Two examples are Gauss's law (in electrostatics), which follows from the inverse-square Coulomb's law, and Gauss's law for gravity, which follows from the inverse-square Newton's law of universal gravitation. The derivation of the Gauss's law-type equation from the inverse-square formulation or vice versa is exactly the same in both cases; see either of those articles for details.

History

Joseph-Louis Lagrange introduced the notion of surface integrals in 1760 and again in more general terms in 1811, in the second edition of his Mécanique Analytique. Lagrange employed surface integrals in his work on fluid mechanics. He discovered the divergence theorem in 1762.

Carl Friedrich Gauss was also using surface integrals while working on the gravitational attraction of an elliptical spheroid in 1813, when he proved special cases of the divergence theorem. He proved additional special cases in 1833 and 1839. But it was Mikhail Ostrogradsky who gave the first proof of the general theorem, in 1826, as part of his investigation of heat flow. Special cases were proven by George Green in 1828 in An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism, Siméon Denis Poisson in 1824 in a paper on elasticity, and Frédéric Sarrus in 1828 in his work on floating bodies.

Worked examples

Example 1

To verify the planar variant of the divergence theorem for a region $R$:

$$R = \left\{ (x, y) \in \mathbb{R}^2 \ : \ x^2 + y^2 \leq 1 \right\},$$

and the vector field:

$$\mathbf{F}(x, y) = 2y\,\mathbf{i} + 5x\,\mathbf{j}.$$

The boundary of $R$ is the unit circle, $C$, that can be represented parametrically by:

$$x = \cos(s), \quad y = \sin(s)$$

such that $0 \leq s \leq 2\pi$, where $s$ is the arc length from the point $s = 0$ to the point $P$ on $C$. Then a vector equation of $C$ is

$$C(s) = \cos(s)\,\mathbf{i} + \sin(s)\,\mathbf{j}.$$

At a point $P$ on $C$:

$$P = (\cos(s), \, \sin(s)) \quad \Rightarrow \quad \mathbf{F} = 2\sin(s)\,\mathbf{i} + 5\cos(s)\,\mathbf{j}.$$

Therefore,

$$\begin{aligned} \oint_C \mathbf{F} \cdot \mathbf{n}\,\mathrm{d}s &= \int_0^{2\pi} (2\sin(s)\,\mathbf{i} + 5\cos(s)\,\mathbf{j}) \cdot (\cos(s)\,\mathbf{i} + \sin(s)\,\mathbf{j})\,\mathrm{d}s \\ &= \int_0^{2\pi} (2\sin(s)\cos(s) + 5\sin(s)\cos(s))\,\mathrm{d}s \\ &= 7 \int_0^{2\pi} \sin(s)\cos(s)\,\mathrm{d}s \\ &= 0. \end{aligned}$$

Because $M$, the $\mathbf{i}$-component of $\mathbf{F}$, equals $2y$, we have $\frac{\partial M}{\partial x} = 0$, and because $N$, the $\mathbf{j}$-component, equals $5x$, we have $\frac{\partial N}{\partial y} = 0$. Thus

$$\iint_R \nabla \cdot \mathbf{F}\,\mathrm{d}A = \iint_R \left( \frac{\partial M}{\partial x} + \frac{\partial N}{\partial y} \right) \mathrm{d}A = 0.$$
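
A short symbolic check of this example (illustrative; the use of SymPy is an assumption, not part of the article):

```python
# Illustrative sketch: repeating Example 1 with SymPy.
import sympy as sp

s = sp.symbols('s')
F = sp.Matrix([2*sp.sin(s), 5*sp.cos(s)])        # F at the boundary point (cos s, sin s)
n = sp.Matrix([sp.cos(s), sp.sin(s)])            # outward unit normal on the unit circle

boundary_flux = sp.integrate(F.dot(n), (s, 0, 2*sp.pi))
print(boundary_flux)                              # 0

x, y = sp.symbols('x y')
div_F = sp.diff(2*y, x) + sp.diff(5*x, y)         # dM/dx + dN/dy
print(div_F)                                      # 0, so the area integral is 0 as well
```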

Example 2

Suppose we want to evaluate the flux of the vector field $\mathbf{F} = 2x^2\,\mathbf{i} + 2y^2\,\mathbf{j} + 2z^2\,\mathbf{k}$ through the boundary of the region defined by the following inequalities:

$$0 \leq x \leq 3, \qquad -2 \leq y \leq 2, \qquad 0 \leq z \leq 2\pi.$$

By the divergence theorem,

$$\iiint_V (\nabla \cdot \mathbf{F})\,\mathrm{d}V = \oiint_S (\mathbf{F} \cdot \mathbf{n})\,\mathrm{d}S.$$

We now need to determine the divergence of $\mathbf{F}$. If $\mathbf{F}$ is a three-dimensional vector field, then its divergence is given by $\nabla \cdot \mathbf{F} = \left( \frac{\partial}{\partial x}\mathbf{i} + \frac{\partial}{\partial y}\mathbf{j} + \frac{\partial}{\partial z}\mathbf{k} \right) \cdot \mathbf{F}$.

Thus, we can set up the flux integral $I = \oiint_S \mathbf{F} \cdot \mathbf{n}\,\mathrm{d}S$ as follows:

$$\begin{aligned} I &= \iiint_V \nabla \cdot \mathbf{F}\,\mathrm{d}V \\ &= \iiint_V \left( \frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y} + \frac{\partial F_z}{\partial z} \right) \mathrm{d}V \\ &= \iiint_V (4x + 4y + 4z)\,\mathrm{d}V \\ &= \int_0^{2\pi} \int_{-2}^{2} \int_0^{3} (4x + 4y + 4z)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z \end{aligned}$$

Now that we have set up the integral, we can evaluate it.

$$\begin{aligned} \int_0^{2\pi} \int_{-2}^{2} \int_0^{3} (4x + 4y + 4z)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z &= \int_0^{2\pi} \int_{-2}^{2} (18 + 12y + 12z)\,\mathrm{d}y\,\mathrm{d}z \\ &= \int_0^{2\pi} 24(2z + 3)\,\mathrm{d}z \\ &= 48\pi(2\pi + 3) \end{aligned}$$
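
The same computation can be reproduced symbolically as a check (illustrative; SymPy is an assumption, not part of the article):

```python
# Illustrative sketch: checking the box-flux computation of Example 2 with SymPy.
import sympy as sp

x, y, z = sp.symbols('x y z')
div_F = 4*x + 4*y + 4*z                          # divergence of 2x^2 i + 2y^2 j + 2z^2 k

I = sp.integrate(div_F, (x, 0, 3), (y, -2, 2), (z, 0, 2*sp.pi))
print(sp.simplify(I - 48*sp.pi*(2*sp.pi + 3)))   # 0
```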

Generalizations

Multiple dimensions

One can use the generalised Stokes' theorem to equate the n-dimensional volume integral of the divergence of a vector field F over a region U to the (n − 1)-dimensional surface integral of F over the boundary of U:

$$\underbrace{\int \cdots \int_U}_{n} \nabla \cdot \mathbf{F}\,\mathrm{d}V = \underbrace{\oint \cdots \oint_{\partial U}}_{n-1} \mathbf{F} \cdot \mathbf{n}\,\mathrm{d}S$$

This equation is also known as the divergence theorem.

When n = 2, this is equivalent to Green's theorem.

When n = 1, it reduces to the fundamental theorem of calculus, part 2.

Tensor fields

Writing the theorem in Einstein notation:

$$\iiint_V \frac{\partial F_i}{\partial x_i}\,\mathrm{d}V = \oiint_S F_i n_i\,\mathrm{d}S$$

suggestively, replacing the vector field F with a rank-n tensor field T, this can be generalized to:

$$\iiint_V \frac{\partial T_{i_1 i_2 \cdots i_q \cdots i_n}}{\partial x_{i_q}}\,\mathrm{d}V = \oiint_S T_{i_1 i_2 \cdots i_q \cdots i_n} n_{i_q}\,\mathrm{d}S.$$

where on each side, tensor contraction occurs for at least one index. This form of the theorem is still three-dimensional; each index takes the values 1, 2, and 3. It can be generalized further to higher (or lower) dimensions (for example to 4-dimensional spacetime in general relativity).
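
As an illustrative sketch (not from the article), the rank-2 case can be checked componentwise on the unit cube, contracting over the second index; the tensor field $T$ below and the use of SymPy are arbitrary assumptions:

```python
# Illustrative sketch: the tensor form of the theorem for a rank-2 field T on the
# unit cube, checked row by row (contraction over the second index).
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)
T = sp.Matrix([[x*y,  z**2, y],
               [y*z,  x,    x*z],
               [x**2, y**2, z**3]])

for i in range(3):
    # Volume integral of the contracted derivative, sum_j dT_ij/dx_j.
    lhs = sp.integrate(sum(sp.diff(T[i, j], coords[j]) for j in range(3)),
                       (x, 0, 1), (y, 0, 1), (z, 0, 1))
    # Surface integral of T_ij n_j over the six faces of the cube.
    rhs = 0
    for j, var in enumerate(coords):
        others = [(v, 0, 1) for v in coords if v != var]
        rhs += sp.integrate(T[i, j].subs(var, 1), *others)   # face var = 1, n_j = +1
        rhs -= sp.integrate(T[i, j].subs(var, 0), *others)   # face var = 0, n_j = -1
    print(i, lhs, rhs)    # matching pairs: (1/2, 1/2), (1/2, 1/2), (3, 3)
```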

See also

  • Kelvin–Stokes theorem


External links

  • "Ostrogradski formula", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
  • Differential Operators and the Divergence Theorem at MathPages
  • The Divergence (Gauss) Theorem by Nick Bykov, Wolfram Demonstrations Project.
  • Weisstein, Eric W. "Divergence Theorem". MathWorld. – This article was originally based on the GFDL article from PlanetMath at https://web.archive.org/web/20021029094728/http://planetmath.org/encyclopedia/Divergence.html

Text is available under the CC BY-SA license. Source: Divergence theorem, Wikipedia (Historical).