No referance
:::: That is a good point. But a simple change from "''k'' times differentiable at the point '''a'''" to "''k'' times continuously differentiable at the point '''''a'''''" fixes that. Quietbritishjim (talk) 20:09, 4 December 2011 (UTC)
This is quite old now, but anyway... I disagree with ] above when he mentioned:
:''"The source you provided apparently even lacks a clear statement of the theorem. Moreover, what it does say is "Taylor's theorem" is clearly wrong for ''k''-times differentiable functions, or even smooth functions. This is not a good reference for the article."''
I have that book (Riley et al., 2010) and have checked the chapter (5) containing the theorem inside out. Here is the theorem and surrounding text, straight from the textbook (§5.7, pp. 161–162):
____________________ | |||
:"... The most general form of Taylor's theorem, for a function ''f''(''x''<sub>1</sub>, ''x''<sub>2</sub>, ... ''x<sub>n</sub>'') of ''n'' variables, is a simple extension of the above . Although it is not necessary to do so, we may think of the ''x<sub>i</sub>'' coordinates in ''n''-dimensional space and write the function as ''f''('''x'''), where '''x''' is a vector from the origin to (''x''<sub>1</sub>, ''x''<sub>2</sub>, ... ''x<sub>n</sub>''). Taylor's theorem then becomes: | |||
::<math>f(\mathbf{x}) = f(\mathbf{x}_0) + \sum_i \dfrac{\partial f}{\partial x_i}\Delta x_i + \dfrac{1}{2!}\sum_i\sum_j \dfrac{\partial^2 f}{\partial x_i \partial x_j}\Delta x_i\Delta x_j \cdots </math> | |||
:where <math>\Delta x_i = x_i - x_{i_0} </math> are evaluated at <math>\left(x_{1_0}, x_{2_0}, \cdots x_{n_0}\right) </math>. For completeness, we note that in this case the full Taylor series can be written in the form: | |||
::<math>f(\mathbf{x}) = \sum_{n=0}^\infty \dfrac{1}{n!} \left_{\mathbf{x}=\mathbf{x}_0}</math> | |||
:where <math>\nabla</math> is the vector differential operator del, to be discussed in chapter 10." | |||
____________________ | |||
so yes - it does provide a clear statement of the theorem (is this not Taylor's theorem? no?) in a way that can be used in practice, and no - it doesn't actually say "Taylor's theorem is clearly wrong for ''k''-times differentiable functions, or even smooth functions". That book is full-on, 2 inches thick, at the undergraduate level (1st year through 4th). I simply fail to see why it's "unreliable"...
It's quite hypocritical that you have a ] and yet you are unable to find a reference for the theorem yourself (else it would have been done by now - right?)... Given the importance of the theorem, it should be ''easy'' to find a reference for at least one "clear" statement of the theorem in any form (even if it were not identical to the version in the article) in a collection of famous treatises, am I wrong?
In any case, am I "clearly" wrong somewhere? If so, could anyone please point out my misunderstandings (if anyone has access to the book that would help, though it's obviously not essential). Thank you. =)
On a related note - I strongly agree with ]'s statement: | |||
:''"To state the obvious: if no sources state the theorem in exactly the same way as Misplaced Pages currently does, maybe Misplaced Pages shouldn't state it in that form?"'' | |||
I've seen many things reverted and deleted for lack of citations. Why are we including a form of the theorem which is not easily found in the literature, and leaving it to sit around with a "citation needed" template if it really is so hard to find a reference? The section should just be rewritten, but that hasn't happened; the situation just doesn't add up...
By no means will I add this citation, just adding my view... <span style="font-family:'TW Cen MT';">F=q(E+v^B)</span> 19:14, 25 May 2012 (UTC)
WikiProject Mathematics (B‑class, High‑priority)
Proof
Three expressions for R are available. Two are shown below
- It is a bit odd to say nothing about the 3rd. - Patrick 00:37 Mar 24, 2003 (UTC)
There ought to be a proof of this theorem.
Note that f(x) = e^(-1/x^2) (with f(0) = 0) is m times continuously differentiable everywhere for any m > 0, since its derivatives are made up from terms like x^(-n) e^(-1/x^2), all having limit 0 at 0. So all the derivatives vanish at 0. Taylor's theorem holds, but an infinite expansion similar to, say, that of e^x does not exist around 0. This could be worth mentioning. (A classic example of non-convergence of the Taylor series: the sequence of functions made up from the Taylor expansions without the remainder term does not always converge to the function.)
70.113.52.79 18:04, 14 November 2006 (UTC)
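To spell out the flatness claim above with a worked equation (the substitution t = 1/x is added here for illustration):
:<math>f'(x) = \frac{2}{x^3}e^{-1/x^2} \text{ for } x\neq 0, \qquad \lim_{x\to 0}\, x^{-n}e^{-1/x^2} = \lim_{t\to\pm\infty} t^{n}e^{-t^2} = 0,</math>
and by induction every derivative of f is a finite sum of terms of the form c·x^(-n) e^(-1/x^2), so every derivative has limit 0 at 0.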
Actually, the entry 'Taylor series' has a complete explanation. A link to that entry should suffice.
Sustik 20:47, 14 November 2006 (UTC)
Removed text
I removed the following text from the article, which was added on Feb 5, 2003 by an anonymous contributor.
- Proof:
- Assume that ''f''(''x'') is a function that can be expressed in terms of a polynomial (it does not have to appear to be one). The n-th derivative of that function will have a constant term as well as other terms. The "zeroth derivative" of the function (plain old ''f''(''x'')) has what we will call the "zeroth term" (term with the zeroth power of x) as its constant term. The first derivative will have as a constant the coefficient of the first term times the power of the first term, namely, 1. The second derivative will have as a constant the coefficient of the second term times the power of the second term times the power of the first term: coefficient * 2 * 1. The next will be: coefficient * 3 * 2 * 1. The general pattern is that the n-th derivative's constant term is equal to the n-th term's coefficient times n factorial. Since a polynomial, and by extension, one of its derivatives, equals its constant term at x=0, we can say:
:<math>f^{(n)}(0) = a_n \cdot n!, \quad\text{i.e.}\quad a_n = \frac{f^{(n)}(0)}{n!},</math>
:where ''a<sub>n</sub>'' is the coefficient of the n-th term.
- So we now have a formula for determining the coefficient of any term for the polynomial version of ''f''(''x''). If you put these together, you get a polynomial approximation for the function.
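A concrete instance of the pattern described above (an illustrative cubic, not part of the removed text):
:<math>p(x) = c_0 + c_1 x + c_2 x^2 + c_3 x^3, \qquad p(0) = c_0,\quad p'(0) = c_1,\quad p''(0) = 2!\,c_2,\quad p'''(0) = 3!\,c_3,</math>
so each coefficient is recovered as <math>c_n = p^{(n)}(0)/n!</math>.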
I am not sure what this is supposed to prove, but it appears to be meant as a proof of Taylor's theorem. In that case, it does not seem quite right to me; in particular, the assumption in the first sentence ("f is a function that can be expressed in terms of a polynomial") is rather vague and appears to be just what needs to be proven. Hence, I took the liberty of replacing the above text with a new proof. -- Jitse Niesen 12:50, 20 Feb 2004 (UTC)
- Jitse, your proof gives no basis for the fact that the remainder term gets smaller as n increases, nor any idea of the interval of convergence of the series. Therefore I will provide a different proof, taken from Complex Variables and Applications by Ruel V. Churchill of the University of Michigan. It involves complex analysis, however, and if you can provide a basis for the convergence of the series in your proof, I will put it back. Scythe33 22:03, 19 September 2005 (UTC)
- The theorem, as formulated in the article, does not claim that the remainder term gets smaller as n increases; in fact, this only happens if the function is analytic. The article on power series talks about the interval of convergence, and the article holomorphic functions are analytic proves that the series converges.
- In my opinion, the convergence of a Taylor series is an important point, which could be explained more fully in the article (as the overall organization should be improved), and you're very welcome to do so. Proving that the series converges does not seem necessary for the reasons I gave in the previous paragraph. -- Jitse Niesen (talk) 13:33, 27 September 2005 (UTC)
I appreciate the proof of Taylor's theorem in one variable, it is very good. The explanation is short and clear. Could we cite the original source on the main page? ("Complex Variables and Applications" by Ruel V. Churchill) Yoderj 20:17, 8 February 2006 (UTC)
- The proof was not taken from that book. It is a completely standard proof which will be in many text books. -- Jitse Niesen (talk) 20:48, 8 February 2006 (UTC)
What is ξ?
In the Lagrange form of the remainder term, is ξ meant to be any number between a and x or is the theorem supposed to state that there exists such a ξ? I would guess the latter (because the proof uses the Mean Value Theorem), but the article doesn't make it totally clear. Eric119 15:51, 23 Sep 2004 (UTC)
- Thanks for picking this up. I rewrote that part to clarify (hopefully). -- Jitse Niesen 16:00, 24 Sep 2004 (UTC)
A few suggestions
I have a few, more stylistic, concerns for the article. I think it should be noted that, to state the Cauchy form of the remainder term, the (n+1)-th derivative of f (the function in the hypothesis of the theorem) must be integrable. I have a similar concern for the proof of the theorem in one variable; if proving the integral version of the theorem, in the inductive step, we must assume that the theorem holds true for a function whose first n derivatives are continuous, and whose (n+1)-th derivative is integrable (this is the "n" case). Then to prove the theorem holds true for n+1, we assume that a new function f has n+1 derivatives -- all of which are continuous -- and that its (n+2)-th derivative is integrable. Since f's (n+1)-th derivative is continuous, it is integrable, and then we may apply the inductive hypothesis to write an expression for f, with the remainder term written as an integral of the (n+1)-th derivative. Then, using integration by parts, we can make a substitution to complete the induction. T.Tyrrell 05:13, 3 May 2006 (UTC)
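For reference, the integration by parts step being described is, in the usual formulation, the standard identity (written out here for comparison):
:<math>\int_a^x \frac{f^{(n+1)}(t)}{n!}(x-t)^n\,dt = \frac{f^{(n+1)}(a)}{(n+1)!}(x-a)^{n+1} + \int_a^x \frac{f^{(n+2)}(t)}{(n+1)!}(x-t)^{n+1}\,dt,</math>
which converts the integral remainder for n into the next Taylor term plus the integral remainder for n+1.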
Vector Exponents
What exactly is meant by <math>(x-a)^\alpha</math> in the multi-variable case? I didn't think you could take powers of vectors. Maybe I just don't understand the notation. At the very least it seems a little ambiguous.
- This is multi-index notation. I moved the link to this article closer to the formula; hopefully it's clearer now. -- Jitse Niesen (talk) 13:26, 5 October 2006 (UTC)
Lagrange error bound
I noticed that "Lagrange error bound" is redirected here but is not specificly mentioned. I suggest that someone make the relation somewhere.—Preceding unsigned comment added by 132.170.52.13 (talk • contribs)
minor point
Small point (and my experience is limited), but in the proof where xf'(x) is expanded, I think it is really expanding x(f'(x)), so the x should stay outside the integral symbol. So instead of what's given (see the second term of the second line):
maybe this would be clearer:
Phillipshowardhamilton 18:49, 4 February 2007 (UTC)
About the illustration
There seems to be a slight problem with the relative position of the Taylor polynomial and the exponential: for x>0, we have exp(x)>P_n(x), and the graphs show the opposite —The preceding unsigned comment was added by 86.201.165.111 (talk) 13:10, 14 April 2007 (UTC).
- Thanks for pointing out the problem. I asked its author, Enochlau (talk · contribs), to fix it and commented it out until he does so. JRSpriggs 08:28, 15 April 2007 (UTC)
- Fixed. enochlau (talk) 10:39, 15 April 2007 (UTC)
- Thank you for your quick response. JRSpriggs 11:36, 15 April 2007 (UTC)
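For the record, the inequality the IP pointed out follows directly from the Lagrange form of the remainder (expanding at 0):
:<math>e^x - P_n(x) = R_n(x) = \frac{e^{\xi}}{(n+1)!}\,x^{n+1} > 0 \quad\text{for } x > 0 \text{ and some } \xi\in(0,x),</math>
so the exponential must indeed lie above each of its Taylor polynomials for x > 0.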
Form of the Remainder
The Cauchy remainder has the form <math>R_n(x) = \frac{f^{(n+1)}(\xi)}{n!}(x-a)(x-\xi)^n</math> (http://mathworld.wolfram.com/CauchyRemainder.html)
The one that is described here as Cauchy is the integral form of the remainder. I think this needs to be corrected, please check.
71.132.107.150 02:15, 30 April 2007 (UTC) Andrey
- This article calls the integral form of the remainder "Cauchy" and the intermediate point form "Lagrange", while MathWorld has it the other way around. I checked two reference books, but neither book gives names to the forms. So I have not changed this article, yet. JRSpriggs 10:03, 30 April 2007 (UTC)
- The Lagrange form is correct. Source: Kline, Morris (1998) Calculus, Dover p. 639. (A much more definitive source than MathWorld, in my opinion.)
- The particular form of R given in (16) was derived in 1797 by Joseph-Louis Lagrange...
- He doesn't use what we are calling the Cauchy form, but I'll check Apostol's text later for confirmation. Silly rabbit 13:25, 30 April 2007 (UTC)
- Apostol agrees with Wikipedia as far as the Lagrange form of the remainder (Tom Apostol, Calculus Vol 1, p. 283.) However, his version of the Cauchy form is (p. 284):
:<math>R_n(x) = \frac{f^{(n+1)}(\xi)}{n!}(x-\xi)^n(x-a),</math>
- where ξ is some number between a and x (see article). So Andrey would appear to be correct on this point. That said, the integral form given is, in a certain sense, more primitive than either of the others since each follows by the mean value theorem for Riemann-Stieltjes integrals (which is not precisely the way Apostol does it, but it is equivalent).
- I propose that we call the integral form of the remainder the integral form of the remainder. The Cauchy form, as Andrey has correctly pointed out, shall be called the Cauchy form. The Lagrange form will stay as it is. Silly rabbit 22:57, 30 April 2007 (UTC)
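For side-by-side comparison, the three forms under discussion are (standard statements, with ξ<sub>L</sub>, ξ<sub>C</sub> between a and x):
:<math>R_n(x) = \frac{f^{(n+1)}(\xi_L)}{(n+1)!}(x-a)^{n+1} \quad\text{(Lagrange form)},</math>
:<math>R_n(x) = \frac{f^{(n+1)}(\xi_C)}{n!}(x-\xi_C)^n(x-a) \quad\text{(Cauchy form)},</math>
:<math>R_n(x) = \int_a^x \frac{f^{(n+1)}(t)}{n!}(x-t)^n\,dt \quad\text{(integral form)}.</math>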
Taylor's theorem approximation
I suggest the following approximation formula also be incorporated into the article:
where
and
--Zven 00:30, 14 July 2007 (UTC)
- It is already there except for the label which would add nothing. JRSpriggs 03:18, 14 July 2007 (UTC)
- It is more about clarity and the distinction between the Taylor series approximation and the form which includes the higher-order error terms. The article starts off by stating it as an approximation, and gives an example for the approximation of e to the nth term. Looking at other internet sources (and three textbooks)
- http://www.iwu.edu/~lstout/series/node3.html
- http://www.maths.abdn.ac.uk/~igc/tch/ma2001/notes/node46.html
- wolfram ← a poor definition although it does state Taylor's theorem (without the remainder term) was devised by Taylor in 1712 and published in 1715
- http://planetmath.org/encyclopedia/TaylorsTheorem.html
- http://www.math.hmc.edu/calculus/tutorials/taylors_thm/
- there are certainly two ways of specifying it. I suggest that the approximation theorem goes into the first sentence, then the article can go into details about the error term. --Zven 08:26, 14 July 2007 (UTC)
- Well, I am sorry to have to disagree with you. The article is clear as it stands, but your proposed language would make it unclear. You make it sound like f is being defined by the formula. It is not. JRSpriggs 03:46, 15 July 2007 (UTC)
- I just think that historically Taylor's theorem is an approximation, defined for example in def 3.1 of http://www.iwu.edu/~lstout/series/node3.html for functions with n+1 continuous derivatives. I was just suggesting that be included in the article somewhere --Zven 10:56, 15 July 2007 (UTC)
- Umm... have you read the Wikipedia article on Taylor's theorem? Silly rabbit 11:13, 15 July 2007 (UTC)
- They are just typos. I just thought the notation was common and could be applicable in this article. --Zven 19:20, 15 July 2007 (UTC)
- Actually, I think it might do some good. I notice that Taylor polynomial currently redirects here, and it is certainly worth talking about Taylor polynomials in more detail (even separately from their specific relationship to Taylor's theorem). I'm not sure how to handle this. Right now, I like the carefully formal approach of each version of the theorem. The article might, however, benefit from an Introduction organized around the imprecise notion that "a function can be approximated by a Taylor polynomial on small enough intervals." The formal details and statement can remain as they are, but for the casual reader at least there will be some meaningful (though not very useful) statements. I need some further input though. Silly rabbit 19:55, 15 July 2007 (UTC)
- I realise I didn't come across well, but that is exactly what I mean, there is no section on Taylor polynomials --Zven 07:51, 17 July 2007 (UTC)
- I think there is definitely some value in the idea. However, it may be better to put it in the article on Taylor series. After all, the Taylor polynomials are partial sums of the series; furthermore, the article on Taylor series is more low level. -- Jitse Niesen (talk) 08:07, 23 July 2007 (UTC)
- I have already been bounced from there after suggesting it on the talk page. It certainly is applicable to one of the two articles --Zven 01:51, 24 July 2007 (UTC)
(de-indenting) Sorry, I didn't know that. Looking at the two articles Taylor series and Taylor's theorem together, I still think that the Taylor polynomials fit better in the former article, for instance in the "Definition" section. The redirect Taylor polynomial should then be changed to point to Taylor series. Any objections? -- Jitse Niesen (talk) 12:27, 24 July 2007 (UTC)
Comments
To my mind this article suffers from a serious stylistic flaw: it is supposed to be about a theorem ("Taylor's Theorem"), but nowhere in the article is there a theorem stated: i.e., a crisp mathematical statement which clearly enunciates that certain hypotheses entail a certain conclusion. (This occurs despite the fact that all the content is here.) As a result, I am already confused by the first two introductory sentences of the article: exactly what is the statement that Taylor stated in 1712 but was anticipated by Gregory? Is it:
(i) For every n, there is a unique polynomial of degree at most n which matches the value of f and its first n derivatives at a, namely <math>P_n(x) = \sum_{j=0}^n \frac{f^{(j)}(a)}{j!}(x-a)^j</math>?
(ii) As above, plus: the difference between a function and its Taylor approximation is O((x-a)^(n+1)) as x -> a, provided f is n+1 times differentiable at a?
(iii) As above, plus: the remainder R_{n+1}(x) is of the form <math>\frac{f^{(n+1)}(\xi)}{(n+1)!}(x-a)^{n+1}</math> for some ξ between a and x?
I am aware that all of these statements are sometimes loosely called "Taylor's Theorem" by various people. But an encyclopedia article needs to be more precise, particularly when discussing the history of what was proved.
To me it would seem preferable if Taylor's theorem were said to be (iii) with the Lagrange form of the remainder. (E.g. Planetmath does this.) The alternate forms of the remainder should still appear as variants (ideally with some commentary on why there is more than one form and what the history is concerning this...)
In the same vein, I don't like the way the discussion begins with "A simple example of Taylor's theorem is..." An example of a theorem should be a particular case of the theorem, but one cannot tell from the discussion what the theorem is supposed to be: exactly what statement is meant by the squiggly equality of e^x and its Taylor polynomial? Plclark (talk) 10:22, 20 November 2007 (UTC)Plclark
What IS the theorem
Can someone add the statement of the theorem, which, unbelievably, is lacking.
- It is the statement in the article immediately following the text:
- "The precise statement of the theorem is as follows:".
- This is the second sentence of Section 1. --Lambiam 11:54, 5 March 2008 (UTC)
- I agree with the anonymous (it seems) person that posed the question. There is no statement of any theorem without saying something precise about the remainder function Rn; without it one can just define that function by the given equation. So every statement restricting the remainder leads to a different version of Taylor's theorem. Marc van Leeuwen (talk) 06:44, 15 March 2008 (UTC)
There is another difficulty I have with the given statement: it implicitly fixes x by putting it in the bounds of the interval given in its hypothesis; therefore the theorem states only something about one x at a time. While it is possible to do so, it seems to me contrary to the spirit of the theorem. One wishes to give an approximation to the function f on some interval, and show that the remainder R_n is a function constrained in some way. (This does not exclude fixing x in a proof of the theorem, and of course the statement about R_n(x) will refer to the specific value of x.) I would say the statement should go like "Let f be a function defined on some interval… then for all x in this interval…". Marc van Leeuwen (talk) 07:09, 15 March 2008 (UTC)
Response and proposal
Back when I put major work into this article, I had wanted to change it as well for much the same reason. However, I think the current version does have an advantage over most statements in that the point a around which one is doing the Taylor expansion need not be an interior point of the domain. Another advantage of the present version is that it emphasizes the role of the error term (rather than the precise domain of definition). This is, I believe, more how Taylor's theorem is invoked in its most common applications to numerical analysis. Ultimately, in my own mind the advantages and disadvantages of the approach in the article balanced, and so I didn't think it worthwhile to change it to something else. Now that there are more minds on the task, allow me to propose the following modification for discussion:
The precise statement of the theorem is as follows: If n ≥ 0 is an integer and f is a function which is n times continuously differentiable on the closed interval [a, b] and n + 1 times differentiable on the open interval (a, b), then for all x ∈ [a, b],
:<math>f(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(x-a)^n + R_n(x).</math>
Here, n! denotes the factorial of n, and Rn(x) is a remainder term, denoting the difference between the Taylor polynomial of degree n and the original function. The remainder term Rn(x) depends on x and is small if x is close enough to a. Several expressions are available for it.
Please include responses here. Cheers, silly rabbit (talk) 14:37, 17 March 2008 (UTC)
- Several unrelated issues:
- We should separate out the "simple example" to a little section on its own.
- If we choose not to make x an endpoint of the interval, is it perhaps possible also not to make the point around which the series is expanded (a above) necessarily an endpoint of the interval [a, b]? That would correspond even better to how the theorem is often used. Renaming the interval [u, v], I'd like to see u ≤ a ≤ v. We must avoid straying too much from published versions, though.
- The boxed formulation does not address the complaint that a precise statement of the theorem requires a precise statement about the remainder term. Perhaps we can do something along the following lines, given here in telegram style:
- The statement of the theorem involves a remainder term R denoting the difference. There are several versions of the theorem, which differ in the form of R. For a given remainder term R, the statement is as follows: ... . Here is a list of forms for R: ...
- It should be made clear that for fixed n and fixed interval, ξ in the various forms still also depends on x.
- --Lambiam 07:59, 18 March 2008 (UTC)
Lack of precision
Taylor's theorem
— In calculus, Taylor's theorem gives a sequence of increasingly better approximations of a differentiable function near a given point by polynomials (the Taylor polynomials of that function) whose coefficients depend only on the derivatives of the function at that point. The theorem is named after the mathematician Brook Taylor, who stated it in 1712, even though the result was first discovered 41 years earlier in 1671 by James Gregory.
Actually, this is not necessarily true for real functions, even for smooth functions. For some smooth functions the higher-order approximations can be worse than the lower ones. Lechatjaune (talk) 13:52, 17 March 2008 (UTC)
- Indeed! I have changed the lead sentence in response to your objection. I'm not entirely happy with the wording, so please make changes accordingly. Thanks, silly rabbit (talk) 14:38, 17 March 2008 (UTC)
True statement?
The remainder term Rn(x) depends on x and is small if x is close enough to a. Isn't exp(-1/x^2) when a=0 a famous counterexample? -- Randomblue
- No. For that function, all terms in the Taylor polynomial are zero, so the remainder term is just exp(-1/x^2) itself. This is a small number when x is close to 0. For instance, if x = 0.1 then exp(-1/x^2) = exp(-100) is very small indeed. -- Jitse Niesen (talk) 10:24, 16 April 2008 (UTC)
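A quick numerical check of this reply (a sketch in Python; the sample points are arbitrary):
<syntaxhighlight lang="python">
import math

# f(x) = exp(-1/x^2) with f(0) = 0: every Taylor coefficient at 0 vanishes,
# so the remainder term IS the function itself -- and it is tiny near 0.
def f(x):
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

for x in [0.5, 0.2, 0.1]:
    print(x, f(x))
# 0.5  0.018315638888734182
# 0.2  1.3887943864964021e-11
# 0.1  3.720075976020836e-44   (= exp(-100), "very small indeed")
</syntaxhighlight>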
analytic is analytic
At the end of Estimates of the remainder it says "This makes precise the idea that analytic functions are those which are equal to their Taylor series." Since analytic functions are defined as those which are (locally) equal to their (convergent) Taylor series, this seems to be saying that analytic functions are analytic. Unless I am missing something, this sentence should be removed (and maybe the preceding argument as well). Marc van Leeuwen (talk) 12:24, 10 May 2008 (UTC)
- The paragraph gives an important condition for the analyticity of an infinitely differentiable function, so my feeling is that it should stay. This condition is mentioned prominently, for instance, in the textbook by Ahlfors. Of course, to someone with a little experience, it can be deduced almost immediately from facts about convergent power series. But I think it should stay because it gives a condition for analyticity purely in terms of estimates of the derivative, which arises in a rather nice way out of Taylor's theorem. I am lukewarm about the sentence you are balking at, and always have been. I think we should solicit suggestions on how to change it, but I feel that the paragraph is naked without some kind of concluding remark to tie it off. silly rabbit (talk) 13:38, 10 May 2008 (UTC)
- Hmm… you made me actually think about what the preceding text is trying to say, and although I am not at all specialised in analysis I found it necessary to add some text to have it make proper sense; I hope this was the sense intended. Still I'm convinced that the conclusion (starting "In other words…") is not the conclusion to what precedes, but rather something opposite and fairly trivial. Informally, for me the preceding text says: if one forces all those thumbscrews (uniform bounds on the same interval on all derivatives, with constants that cooperate nicely) onto our unwilling C-∞ function (since such functions are hardly ever analytic), then it has to give up resistance and concede to being equal to (the limit of) its Taylor series. But the conclusion given only says that analytic functions do satisfy those restrictions, which is hardly surprising because they are already given by a convergent series to begin with. Marc van Leeuwen (talk) 12:26, 11 May 2008 (UTC)
- I have deleted the last sentence of the paragraph. I think I know why it was added in the first place, because the previous mention of analytic functions was somewhat imprecise and unsatisfactory. I will try to correct this problem as well. silly rabbit (talk) 12:36, 11 May 2008 (UTC)
- I've undone the first of your sequence of edits (more or less). No hurt intended, but in the previous part there was only one n for which the existence of a constant was required; just being infinitely differentiable does not imply that those constants also exist, so the sentence should say as much. Marc van Leeuwen (talk) 18:43, 13 May 2008 (UTC)
- Yes, being infinitely differentiable on a compact set does imply the existence of these constants. But now that I am mindful of your concern, I will try to restate this in a somewhat less awkward way. silly rabbit (talk) 19:54, 13 May 2008 (UTC)
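For concreteness, one standard version of the estimate under discussion (a sketch, assuming uniform bounds |f^(k)(x)| ≤ C·M^k for all k on |x − a| ≤ r): the Lagrange form of the remainder then gives
:<math>|R_k(x)| \le \frac{C\,(Mr)^{k+1}}{(k+1)!} \to 0 \quad\text{as } k\to\infty,</math>
so the Taylor polynomials converge to f uniformly on the interval, which is the analyticity conclusion being described.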
Limitations of Taylor's Theorem:
Taylor's theorem cannot be applied to a function at a point where the function is not continuous. For example, we cannot apply Taylor's theorem to the step function f(x) = 1 for x ≥ 0, −1 for x < 0 at x = 0.— Preceding unsigned comment added by 132.235.44.240 (talk • contribs) 17:06, 11 October 2010 (UTC)
G-Taylor?
- (The following discussion has been copied by me from Wikipedia talk:WikiProject Mathematics Paul August ☎ 20:16, 8 December 2010 (UTC))
An IP has repeatedly added a mention of a theorem called "G-Taylor", an apparent generalization of Taylor's theorem, to Taylor's theorem (e.g. ). I removed it once as, at first glance, it seemed probably not significant enough (yet?) to be included in that article. Anyone else have an opinion on this? Paul August ☎ 18:11, 6 December 2010 (UTC)
- The article is published in a vanity press, so it's not appropriate for Misplaced Pages, even if we ignore the (obvious) WP:COI issues. CRGreathouse (t | c) 18:48, 6 December 2010 (UTC)
- Even if it were published in a reliable journal, Taylor's theorem is at the level that it is well covered by textbooks. A Google scholar search for "Taylor's theorem" finds about 8670 matching articles, and even searching for those words in the article titles returns over 100 hits. So obviously there's so much material available that we can't cover every detail of what's known about Taylor's theorem here, only the significant aspects of the subject, and I would want to see some evidence of significance of this result (e.g. high citation counts or coverage in a textbook) rather than mere publication before we include it. Since it's not even reliably published yet, obviously it falls far below that standard. —David Eppstein (talk) 01:19, 9 December 2010 (UTC)
After CRGreathouse removed the mention of "G-Taylor", another editor (or the same one using a different IP) added back the mention; I removed it again (asking for a discussion on the talk page) and the original IP has restored it again (with no discussion). Do other editors wish to weigh in? Paul August ☎ 18:08, 7 December 2010 (UTC)
- I've filed an RPP, asking for temporary semi-protection, citing an edit war. () — Fly by Night (talk) 18:31, 7 December 2010 (UTC)
The page has been protected for a week. The protecting admin said that one week would give enough time for the problem to be resolved on the article's talk page. I suggest we carry on this discussion over on the article's talk page. That way, we'll have something in writing, attached to the article itself, in terms of consensus. — Fly by Night (talk) 13:52, 8 December 2010 (UTC)
- Fine. I will copy the above discussion to Talk:Taylor's theorem. Paul August ☎ 20:16, 8 December 2010 (UTC)
(End of copied text)
In addition to myself and CRGreathouse, Algebraist (as well as several IPs: 24.6.3.41, 71.230.169.109, 131.171.70.159, 76.94.218.197 -- one with the comment "2010 article is not notable") has removed the mention of this theorem from the article. My thinking was that, leaving aside the status of the publisher of the cited article being a "vanity press", on the face of it a 2010 publication is probably too new to be notable enough to warrant mention here. Paul August ☎ 20:16, 8 December 2010 (UTC)
- I don't think I agree with that reasoning; there are 2010 papers that I would add to articles. But the apparent COI is troubling (look at the other contributions) and the journal is not only non-peer-reviewed but actually a vanity press. That's at least three Wikipedia guidelines violated right there, yes? CRGreathouse (t | c) 18:06, 9 December 2010 (UTC)
- COI is simply a red flag, much like the fact that the article was only recently published; neither would rule out the result being significant enough to warrant a mention, but both raise suspicions: on the one hand, about the objectivity of the editor's judgement, on the other, about the lack of time for the result to achieve sufficient notability. I know nothing about the publisher -- from where are you getting that it is a "vanity press"? If so, that would make the source unreliable and preclude inclusion. In any case, David's point above about lack of "citation counts or coverage in a textbook" is telling. Paul August ☎ 18:52, 9 December 2010 (UTC)
- They're a print on demand publisher, a more respectable term for a vanity press. Also, the "fast review process" in the publisher's description of their journals is a big red flag for "not seriously peer reviewed". —David Eppstein (talk) 19:04, 9 December 2010 (UTC)
- Thanks David. Paul August ☎ 20:05, 9 December 2010 (UTC)
- Precisely. I would add that it appears that their "fast review process" is, in fact, no review at all, peer or otherwise -- but I suppose that isn't central to my point. CRGreathouse (t | c) 20:01, 9 December 2010 (UTC)
- By the way -- I have nothing against POD services, I just don't want to confuse them with peer-reviewed journals. CRGreathouse (t | c) 20:02, 9 December 2010 (UTC)
- One can follow the link which is still in the article to a statement of the theorem. It is merely Taylor's theorem applied to the composition of f with the inverse of g between the points g(a) and g(b). To say that Taylor's theorem and the Mean Value Theorem are special cases is misleading, as one most likely needs to use these theorems, or analogous results, to prove the "g-Taylor" theorem. Without any evidence that someone finds the identity useful, or that the result is somehow important, I'd say that this particular identity does not merit any mention in the article. 70.22.103.189 (talk) 13:11, 17 December 2010 (UTC)
- I agree that "G-Taylor" is out of place here. It is a simple consequence which might be useful in some contexts, but putting it on-par with a such a central principle in real analysis shows lack of perspective, and more importantly, only confuses the readers. Lapasotka (talk) 09:30, 6 April 2011 (UTC)
Assumptions and refinements
It is not mentioned anywhere what is really needed for the Taylor theorem (whatever it is) to be true. In this form this article is mostly about the ways of writing out the remainder under some additional regularity assumptions on the given function. How about starting with the simplest version, which really is only the iterated version of the definition of a derivative.
Theorem: If the function f is k≥1 times differentiable at a, then one has the Taylor expansion
:<math>f(x) = f(a) + f'(a)(x-a) + \cdots + \frac{f^{(k)}(a)}{k!}(x-a)^k + h_k(x)(x-a)^k,</math>
where <math>h_k</math> is some (not necessarily continuous) function with <math>\lim_{x\to a}h_k(x) = 0</math>.
After this one can make further assumptions (such as f is k+1 times continuously differentiable etc.) and explain the different ways of expressing the remainder. Lapasotka (talk) 01:58, 4 April 2011 (UTC)
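For what it's worth, the case k = 1 of the statement above is literally the definition of the derivative, which makes the "iterated definition" reading concrete:
:<math>f(x) = f(a) + f'(a)(x-a) + h_1(x)(x-a), \qquad h_1(x) = \frac{f(x)-f(a)}{x-a} - f'(a) \to 0 \text{ as } x\to a.</math>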
- Hear, hear! This article has long been in the lamentable state of being called a theorem without stating any clear theorem. I've always thought it should be called "Taylor's formula", because that seems to be the only thing everybody agrees it is. But the version you mention is certainly the one that should be called Taylor's theorem (although I'm somewhat doubtful whether Taylor ever stated anything like this). Marc van Leeuwen (talk) 07:43, 4 April 2011 (UTC)
Article too long and repetitive
Does anyone else think that this article is too long and repetitive? I am about to delete the "Motivation"-section, which does not serve any purpose in this form. Also the proofs are too long and should be cut down to maybe one-fifth. My general impression is that there is too much focus on Taylor's series, which has its own article, and too little focus on other aspects of Taylor-polynomials. For instance, nothing is mentioned about the concept of "k-th order differentiability" of multivariate functions and interchangeability of mixed partial derivatives. Lapasotka (talk) 21:23, 6 April 2011 (UTC)
- I don't think it is too long, but it is too repetitive.
- I like the idea of a motivation section first, but as it stands it doesn't seem to accomplish much. I see no reason to include the same equations twice, and I don't think that starting with an example Taylor expansion (of exp(x)) helps much, if at all. A short paragraph of words (without equations) motivating Taylor's theorem would be better. Something along the lines of: Taylor's series is a very useful way to approximate functions. Taylor's theorem shows when the Taylor series can be used and how to estimate the error, etc. It is useful in X-field of mathematics because of Y... Some of the last paragraph of the lead may fit better here. I know I am destroying this, I am not a mathematician. As a physicist this is the type of thing I would like to know: what does Taylor's theorem do to help us understand and use Taylor's series, plus why do mathematicians care about Taylor's theorem. (Feynman said that you don't know physics unless you can explain it to a 'barmaid'. Perhaps we can say that a mathematician doesn't know math unless she can explain it to a physicist. At times I certainly feel like a 'barmaid' reading the math articles.)
- A suggestion about the proofs is that you isolate them and put them in an automatically hidden proof box (that you click to expand). There is a template around here somewhere for it. I am not a big fan of such boxes, but someone went through a fair amount of trouble to type in the proofs, and there will definitely be people who will want and/or benefit from having the proofs available if and when they want them.
- I suspect that you are correct that it has too much focus on Taylor's series and duplicates too much of it. If you get rid of that material, though, I would appreciate it if you make tighter links between the two articles. It is too easy to confuse one with the other. So if you don't make it obvious enough (in all of the appropriate places) that there is another article called Taylor's series which covers X, Y, and Z, and that those will only be summarized here, then I guarantee that all your hard work cleaning up the article will soon be undone by someone who wants to add X, Y, and Z, and probably badly.
- ""k-th order differentiability" of multivariate functions and interchangeability of mixed partial derivatives" seems like something that should be discussed in detail in another article like multivariable calculus or partial derivatives. A summary and a link to the appropriate article would be greatly appreciated.
- Finally, anything that you can do to either write out the math shorthand in English words or eliminate it when not needed would be greatly appreciated. I think I know some of it, such as f: R -> C (is that a function of a real number that returns a complex number, or is it the other way around, or something else?) and the symbol for "a member of", but I have a hard time remembering the different symbols for the different types of 'numbers'. I know that R is real and R^n is a real space in n dimensions (or close enough I hope), but the others...
- Sorry for the length of this response, but I hope that it helps. TStein (talk) 22:37, 6 April 2011 (UTC)
Thank you for the very constructive response! It is especially useful to hear opinions of people working on slightly different fields. Could you come up with a good short example of Taylor's theorem (perhaps only in words) in intermediate level physics? That would make the motivation section more substantial. Another great topic for the motivation section would be the curvature of plane curves. There are also some examples from numerics, but they are probably too technical to be appreciated by the general audience possibly visiting this page. BTW, what I referred to regarding the interchangeability of mixed partial derivatives is the following. If you look at Taylor's theorem (without estimates) carefully, you might appreciate it as an iterated version (in the case n=m=1) of the following general definition.
- The derivative of a mapping F: R^n → R^m at a ∈ R^n is a linear mapping (an m×n matrix) A: R^n → R^m such that there exists a function h: R^n → R^m with
:<math>F(x) = F(a) + A(x-a) + h(x)\,|x-a| \quad\text{and}\quad \lim_{x\to a} h(x) = 0.</math>
In this sense Taylor's theorem is a truly fundamental result which explains what it means for a function to be "k-times differentiable". (In jargon, it characterizes the existence of higher order derivatives.) Of course these kinds of statements are shot down within milliseconds as POV inside the article, but at least the connection is too intimate (and perhaps not obvious enough) to be left out of the discussion. As for the mixed partial derivatives, they can always be interchanged if the function is k times differentiable in this iterated sense, because then they correspond to xy- and yx-terms in the Taylor polynomial, which are the same by the second part of Taylor's theorem. Lapasotka (talk) 09:46, 7 April 2011 (UTC)
- I cannot think of an example where Taylor's theorem is used in intermediate physics except implicitly where Taylor's series is used, and even then it is usually the first term that doesn't cancel out. Taylor's expansion is used extensively in geometrical optics using the small angle approximation that sin(theta) = theta and cos(theta) = 1. The cubic term in the sine is used for aberrations. Outside of optics the most frequent use is to show that two expressions are equivalent for small deviations from a number (usually zero). An example of this is showing that the relativistic expression for the kinetic energy agrees with the classical form of the kinetic energy for speeds that are 'small' compared to c. Another example in electrostatics is that for small enough lengths the electric field of a line charge reduces to that of a point charge. Another example of an expansion is using the generating function of the Legendre polynomials to expand the electric potential of an arbitrarily shaped charge distribution using multipole expansion.
- I don't think that any of these examples are directly related to Taylor's theorem. A physicist coming to this page will either do so because of an accident (not knowing that Taylor's series exists) or because they are interested in the finer details of why, how, and what are the limitations of the Taylor series.
- I am sorry, but I was unable to strictly follow the mathematical argument in your response; I think that is because mathematicians and physicists speak with different mathematical dialects. I know enough that I think I know what you are trying to say, though, and with a little time I could translate. I think the idea that the Taylor series can be used to define when something is differentiable is quite interesting. Although that leads to the question of whether expanding the function in other basis sets (like the orthogonal Legendre polynomials (over the range -1 to 1) or sines and cosines like the Fourier expansion) also leads to similar yet different definitions of differentiability. I know the Fourier series in particular can 'handle' a finite number of discontinuities. TStein (talk) 17:55, 7 April 2011 (UTC)
Kinetic energy in special relativity is a good example from the physics side, and maybe physical pendulum is another good one, since it is not too Star Trek. To be precise with the "characterization of higher differentiability" in the single variable case, what I mean is
- Theorem. A function f: R → R is k times differentiable at a ∈ R if and only if there exists a polynomial p of degree at most k such that
:<math>f(x) = p(x) + o\left(|x-a|^k\right) \quad\text{as } x\to a,</math>
- and then p is exactly the k-th order Taylor polynomial of f at a.
This is almost but not quite the traditional Taylor's theorem. It is interesting that you mentioned the Fourier transform in this context. You might want to take a look at the Bessel potential spaces in Sobolev spaces. Intuitively, they characterize differentiability of functions by the decay rate of the Fourier coefficients as the frequency tends to infinity. Lapasotka (talk) 19:10, 7 April 2011 (UTC)
- I'd just like to put in my point of view, which is that there is absolutely no relation between Taylor's theorem and Taylor series, and suggesting so is comforting the reader in a very easily acquired misunderstanding. Taylor's theorem is about quantifying to which extent the polynomial with n-th order contact to a function at a approximates that function near a; it trivially reduces to a statement about the possible growth near a of a function with n-th order contact to the zero function. The terms of the Taylor expansion are just smoke to distract from the essential point, the remainder term. Taylor series on the other hand are a way to organise the derivatives of an infinitely differentiable function at a into a (formal) power series. No regularity requirements on the function at all will cause the Taylor series to converge, or, if it does, to converge to the function (outside the point a); that is just plain false. The only thing one can say is that if any power series converges to the function (i.e., it is analytic, which is not a regularity condition for a function of a real variable), it must be its Taylor series; this is a consequence of a trivial computation of derivatives of convergent power series at a, not of Taylor's theorem. In particular Taylor's theorem cannot be construed to imply the quality of "approximation by Taylor series" outside a, except for analytic functions where that is a tautology (and the "approximation" actually an equality). That important examples of functions turn out to be analytic is somewhat of a mystery (given their sparseness among C-infinity functions), for which more or less philosophical explanations can be given, but this is outside the scope of Taylor's theorem, in any form. I've said this strongly to make the point clear; don't try to make this article say something it should not. This being said, I am quite sympathetic to the improvements that are being made to this article. Marc van Leeuwen (talk) 08:33, 8 April 2011 (UTC)
Thank you Marc van Leeuwen! In my second last edit I rewrote the section "Relationship to analyticity" before reading this comment. (I suppose we were typing at the same time.) I tried to drive your point home, though less eloquently :) Talking about "the Mystery", I suppose the reason is that "most important examples of functions" arise as solutions of differential equations with polynomial coefficients, and hence are analytic. Lapasotka (talk) 10:07, 8 April 2011 (UTC)
Proofs
I added a proof for the actual Taylor's theorem without any assumptions on differentiability outside the point a. It is kind of brute force, but the older and nicer proofs had another serious flaw -- they didn't prove the theorem. I am not very experienced with Wikipedia's markup language, so the current version has some typographical errors in addition to the mathematical ones. I will try to leave this page alone for a while and let other people do their share, but I will read the talk page. Lapasotka (talk) 21:35, 8 April 2011 (UTC)
Mean value form
There have been a large number of recent edits by HasnaaNJITWILL. Though clearly in good faith, they do not seem to improve the article very much as is. I don't think mentioning the mean value theorem in the lead as it is done now, even with a citation from a book, is understandable to readers. Also the section 1.2 "Another expression... " would seem more in place as a remark under "mean-value forms of the remainder" above. It bothers me that it refers to the n=0 form of Taylor's theorem, since there is no such case (it starts at once differentiable). Maybe the mean-value forms are valid for k=0, but the text does not say so. For now I won't undo the sequence of edits (which would be the easiest way to tidy up, but I don't want to undo all the first edits of the new User:HasnaaNJITWILL); however, these issues should be addressed rapidly. Marc van Leeuwen (talk) 07:31, 11 April 2011 (UTC)
- I strongly agree. I sent a message about the Show preview button to his user talk page and noticed that there were no welcoming messages at that point, and now I have added the standard welcoming message with helpful links after your message. I have put some considerable effort into cleaning up this page recently, but I will follow your example and let the edits by User:HasnaaNJITWILL stay there for a little while. I also have another argument against forcing the POV that "Taylor's theorem can be regarded as the generalization of the mean value theorem into higher order derivatives." Namely, while this is true in the one-dimensional case, the whole mean value theorem is blatantly false if the function is either multivariate or vector valued. (Use your geometric intuition -- this is not a "technical issue".) Saying such things in calculus books is bad pedagogy, since many calculus students also study and possibly apply multivariate calculus in their future work. Taking it to the extreme, these kinds of misconceptions might even cost lives, if they stay lurking in the backs of the heads of, say, engineers or nuclear physicists. Lapasotka (talk) 08:17, 11 April 2011 (UTC)
Multivariate case and organization of proofs
User talk:Sławomir Biały, there were a couple of reasons why I changed the proof for the multivariate case. One point is that the one I gave had a clear reference, but the more important one is that the old proof (which you restored) kind of pushed the crucial step under the rug by waving hands with multinomial coefficients, which, by the way, should be binomial coefficients. Also, the indexing of k in the statement of the theorem was meant to be the same as in the single variable case. Note that with k-th order differentiability at a point, the remainder really has the same powers of (x-a) as the highest order term in the Taylor polynomial. This holds for both the single variable case and the multivariate case. I think it is better to use a notation consistent with the versions of the theorem with integral etc. expressions for the remainder. As a final point, the organization of the proofs was not very satisfying before you moved all of them to the end, which was an improvement. However, I think the best thing to do is the following.
- Add a decent motivation section (not like the one that was there a week ago) and iterate with the integration by parts and fundamental theorem of calculus there to derive the correct form of the Taylor polynomial. This already proves Taylor's theorem with the remainder in the integral form. As far as I see, all the other proofs need an "educated guess" and an induction argument.
- Prove the full multivariate Taylor's theorem in detail in a subsection of the multivariate section.
- Remove the full proof of the single variable Taylor's theorem, since it is quite technical, and do only the mean value forms of the remainder in the single variable section, assuming the main theorem, which is proved later on in full generality. This would also be good for pointing out that the mean value versions are intrinsically single variable results, and do not really generalize.
I am really glad to see you taking interest in this page and I am eager to hear your comments. Let's finally fix this mess of an article. Lapasotka (talk) 19:38, 11 April 2011 (UTC)
- Our objective isn't really to give complete proofs of every result (Wikipedia:WikiProject Mathematics/Proofs). I think that it is better to give the proofs that have an important highlight for understanding the subject: thus the Cauchy mean value theorem is important for one proof, integration by parts is important for another, restricting to a line is important for yet another one. These are somehow "iconic" proofs in the subject, and so they are important for a complete encyclopedia article. Having a proof of the general version of Taylor's theorem currently stated in the article seems to me to be much less so. I would be happy to see it removed from the article, but I don't care very strongly about it.
- As for the specific point about the proof in several variables, it's true there is some hand-waving in using the multinomial coefficients, but generally we prefer short, possibly incomplete, proofs that convey the main idea, rather than insisting on proofs that hammer out every detail at the expense of concealing the main ideas. The main idea here is a simple one: apply the one variable version of Taylor's theorem on line segments, and I think that needs to be emphasized above other considerations. The details then follow by a routine turn of the crank. As I see it, that's the essence of the proof. Without a doubt, there are also sources for this proof (for instance, Hoermander, The analysis of linear partial differential operators, Vol. I, p. 12–13). A similar technique will also give the mean value form of the remainder. This seems to be emphasized by more elementary sources (e.g., Apostol).
- A decent motivation section is needed. What I think would also be helpful is a good example. I have taken some steps to change the emphasis of the existing example from estimating the value of e to approximating the function on a whole interval to a desired accuracy, since the statement of the theorem (and the estimates of the remainder) are more directly relevant to that case. Sławomir Biały (talk) 20:33, 11 April 2011 (UTC)
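As an illustration of the kind of example mentioned at the end of the comment above (a sketch in Python; the choice of f = exp on [0, 1] and the degree n = 4 are mine, not from the article):
<syntaxhighlight lang="python">
import math

# Degree-n Taylor polynomial of exp at a = 0.
def taylor_exp(x, n):
    return sum(x**k / math.factorial(k) for k in range(n + 1))

# Worst actual error on [0, 1] versus the Lagrange bound
# |R_n(x)| <= e * x^(n+1) / (n+1)!  (since |f^(n+1)| <= e on [0, 1]).
n = 4
worst = max(abs(math.exp(x) - taylor_exp(x, n)) for x in [i / 1000 for i in range(1001)])
bound = math.e / math.factorial(n + 1)
print(worst, bound)   # roughly 0.00995 versus 0.02265: the bound holds with room to spare
</syntaxhighlight>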
The new proof from the pseudodiff book I happened to have at hand also reeked of line segments from miles away, but never mind. Somehow I feel that the proof of the general multivariate case might serve many purposes, though. For example, it would illustrate the differentiability of partial derivatives at a point and the interchangeability of mixed derivatives, which is quite central for the understanding of the theorem itself.
What I meant by "mean value versions being one-dimensional results" is that all there is is already there in one dimension. They generalize to multivariate functions only by pulling back over paths. (Perhaps this technique should be mentioned explicitly?) Even then one evaluates the directed derivative along the path, which is somehow a bit of a turn-off. For the vector valued case (to be added some day) the mean value versions actually fail to be true, since the mean values for different components can occur at different points. The integral version is one-dimensional in a less severe sense, since it at least holds for the vector valued case. The general Taylor's theorem, however, has considerably more content in several variables. Lapasotka (talk) 21:45, 11 April 2011 (UTC)
- I agree that your proof ultimately relied on the same technique, but approached the matter from the other side as it were, which would make it much harder I think for someone not already familiar with the technique to guess where this function had come from. To an uninitiated reader, it will seem to have been pulled out of a hat. Sławomir Biały (talk) 22:04, 11 April 2011 (UTC)
Estimates of the remainder
I'm puzzled why the section "Estimates of the remainder" was removed. Originally, some of it was absorbed into the "Relation with analytic functions" section. But it seems to me quite important for numerical applications to have at least some discussion of how the remainder is estimated. This can be found, for instance, in just about any calculus textbook that discusses Taylor's theorem. I'm in the process of rewriting this, but I'd like to understand the reasons for its removal as well. Sławomir Biały (talk) 21:10, 11 April 2011 (UTC)
- I've put it back in, with a few changes. Obviously from a theoretical point of view, not much is gained by being able to obtain concrete numerical estimates of the remainder, and so from this perspective it is redundant with the various explicit forms of the remainder. But from a practical point of view, this can be useful sometimes, so it belongs there. Rather than removing it, it might be better to edit it until it is more satisfactory. Sławomir Biały (talk) 21:32, 11 April 2011 (UTC)
- Well, this estimate is a kind of trivial consequence of Lagrange's form of the remainder, and I was about to include it there. I have a problem with the nomenclature, though. Isn't the actual Cauchy's estimate stronger than this one and applicable only to analytic functions? Lapasotka (talk) 21:45, 11 April 2011 (UTC)
- Your last edit on the uniform estimate was a great improvement. Thanks. Lapasotka (talk) 21:55, 11 April 2011 (UTC)
- (ec) I left the dubious moniker out of the revised version. Yeah, it's a trivial consequence of the Lagrange form, but one that is used so often in basic applications that I think it needs to get special emphasis. Sławomir Biały (talk) 21:59, 11 April 2011 (UTC)
Also why was the asymptotic notation removed? That seemed to me to be a worthwhile addition. Sławomir Biały (talk) 21:14, 11 April 2011 (UTC)
- The big-O notation turned the estimate it tried to re-express into something weaker. Namely, the original statement is a uniform estimate in a neighborhood of a, and almost exactly the (possibly misnomed?) "Cauchy's Estimate", but one power weaker, since one less derivative was assumed. The little-o notation was in principle fine, but I don't see the point of writing it out because the line above it was precisely its definition. Those who know little-o see it immediately, and those who don't lose their focus. Lapasotka (talk) 21:45, 11 April 2011 (UTC)
- Allow me to interject a demand for explanation. It was me who introduced the big-O estimate while also increasing the exponent to k+1, and putting in the preceding little-o for comparison. The main point is that the big-O estimate (or its equivalent) with exponent k is silly because it is immediately implied by the little-o estimate (little-o always implies big-O, and yes, both involve a uniform estimate in an unspecified neighbourhood). So it seemed to me the whole point of refining the estimate is pushing little-o for k to big-O for k+1. Now I see the exponent is brought back to k for the latter estimate. To me this makes the "considerably stronger" estimate trivially implied by the estimate given before. It really looks extremely silly to me right now. What am I missing? Marc van Leeuwen (talk) 05:27, 12 April 2011 (UTC)
- The Big-O notation is a kind of uniform estimate, but in an unspecified neighborhood. In this context it reads Rk(x) = O(|x−a|^(k+1)) as x → a.
- A true statement, but weaker than the one it was supposed to re-express, since the introduction of Big-O loses the information on which neighborhood of a the estimate is valid. The actual statement, for comparison, is the uniform estimate on an explicitly given interval around a.
- The little-o in this context reads as Rk(x) = o(|x−a|^k) as x → a,
- which corresponds exactly to the limiting behavior of the remainder in the "crude version" of Taylor's theorem. Perhaps the little-o should be restored, as I said above. I hope the situation doesn't look silly after this explanation. Lapasotka (talk) 06:07, 12 April 2011 (UTC)
- I remain confused. There may be some difficulty for k=1, but that should not be the case to concentrate on, so assume k≥2. Now in order to be k times differentiable at a, our function needs to be k−1 times differentiable on an open interval containing a, since otherwise there are insufficient values available to even define a k-th derivative at a. So in particular our function and the remainder term are continuous on that open neighbourhood of a, and so is hk(x)=Rk(x)/(x−a)^k (the only potential discontinuity is at a, but that is controlled by the "crude version" of Taylor's theorem). Since continuous implies bounded on any closed subinterval of that open interval, one gets "there exists C such that, for all x in this closed interval, |Rk(x)| ≤ C|x−a|^k" for free. So the "considerably stronger estimate" is in fact weaker. The "open to closed-subinterval" consideration is not in vain, since a function can be C-infinity on the open interval, yet no constant C can exist (for any estimate at all) for the same interval (which shows the current statement in the article is actually false; your above formulation is even worse, since you say "for any r", but the function need not even be defined for large r). I think you should think about this. The above reasoning mainly serves to show that there is no point trying to push an asymptotic estimate to a concrete interval, since the "improved" form can either be obtained by increasing the constant under compactness considerations, or is simply out of reach (for open intervals). For me the general estimates should be asymptotic ones, and then the concrete forms (mean-value, integral) can of course remain as they are. Marc van Leeuwen (talk) 07:20, 12 April 2011 (UTC)
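A sketch of the compactness argument just given, under its stated hypotheses:
\[ h_k(x) = \frac{R_k(x)}{(x-a)^k} \ (x \neq a), \qquad h_k(a) = 0 , \]
is continuous on any closed subinterval [a−r, a+r] of the open interval (continuity at a is exactly the little-o statement), hence bounded there by some C, which yields |R_k(x)| \le C|x-a|^k on that subinterval for free.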
- There were some edits (not by myself) which might have obscured the original meaning. I rewrote the paragraph, re-inserted little-o and Big-O, and added a note that an estimate with a "better exponent" is coming up. Nevertheless, there is a difference between the current one and the one in the section "Estimates for the remainder", which was added by User:Sławomir Biały. The first one says that
- if f is k times continuously differentiable on the closed interval [a−r, a+r], then |Rk(x)| ≤ C|x−a|^k,
- and the second one says that
- if f is k+1 times continuously differentiable on the closed interval [a−r, a+r], then |Rk(x)| ≤ C|x−a|^(k+1).
- These estimates have different applications. The second one is of course more sophisticated, but sometimes one actually wants to estimate the remainder of the highest possible order. My response on the Talk page assumed some benevolent interpretation and left implicit the restriction on r so that f is k times continuously differentiable on the closed interval. Please read the current version and let me know if you still find it questionable. I am sorry about removing the asymptotic notation. At the moment it seemed like a good idea (see the discussion above). Lapasotka (talk) 09:43, 12 April 2011 (UTC)
- Now I'm even more at a loss as to where this is going. I'll not try to argue, but just mention two reasons why I find the current "considerable improvement" (which is the same as the "first one" just cited) silly (to say it politely): (1) the hypothesis of k times continuous differentiability is much too strong; mere continuity suffices to get from the little-o estimate to the existence of C; (2) the conclusion remains valid if one adds a random multiple of (x−a)^k to the remainder term, which trivially satisfies the same estimate; for instance one could make the Taylor polynomial one term shorter, incorporating its final term into the error term instead. It is beyond me what could be the use of such an estimate for Rk, or what you mean by "sometimes one actually wants to estimate the remainder of the highest possible order". Order k+1 is higher than order k, so that is an argument in favour of the "second one", it seems to me. Couldn't we just drop the vague (and unsourced) stuff about considerable improvement and go to the precise formulae right away? Marc van Leeuwen (talk) 14:38, 12 April 2011 (UTC)
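The second objection in a line of equations (a quick check, not a quotation from any source):
\[ |R_k(x)| \le C|x-a|^k \ \Longrightarrow \ |R_k(x) + \lambda (x-a)^k| \le (C + |\lambda|)\,|x-a|^k , \]
so an order-k estimate cannot distinguish the true remainder from the remainder plus an arbitrary multiple of (x-a)^k.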
- Something is definitely fishy, since the function h is automatically continuous (in a neighborhood) for k>1, possibly after modifying the value at a itself. The thread following this one is related. I won't have time to sort this out until much later today, though. Sławomir Biały (talk) 15:02, 12 April 2011 (UTC)
- I see. I was thinking primarily of the little-o, since this is a helpful mnemonic to someone with some familiarity with asymptotic notation, but who may not immediately make the connection. This is helpful to someone knowledgeable giving the article a quick scan, in my opinion. Perhaps the little-o should be restored and the big-O left out? Sławomir Biały (talk) 21:59, 11 April 2011 (UTC)
- That is a good idea. I am currently writing a proof for the general multivariate Taylor's theorem. (Too bad I don't know where to find it right now.) It should replace the single variable version and be placed in a "hidden box" or whatever the nice gadget for ugly content is called. BTW, do you have any good ideas for the Complex Analysis section? (I think Taylor's theorem is useless in complex analysis, which is full of way stronger tools, and that reduces my interest in elaborating on it.) How about vector-valued Taylor polynomials? I think the fact that mean value forms of the remainder fail there should be mentioned somewhere. Lapasotka (talk) 22:18, 11 April 2011 (UTC)
- I would strongly advise against putting a proof of the general multivariate theorem in. It seems to me that the article already pushes our guidelines on proofs too far. An encyclopedia article is not supposed to replace a textbook or monograph on a subject. However, having a reference to a general proof would be helpful I think. Sławomir Biały (talk) 22:43, 11 April 2011 (UTC)
On statement(s) of the theorem
Good job finding the reference! Too bad it doesn't mention the converse, which is easier, but very enlightening. Do you think we could state the converse in a "matter of fact" kind of way right after the theorem without running into trouble with OR? I am pretty sure there is a reference for a version with the converse. Is the converse mentioned in your reference at all? If it is, maybe we can even add it to the statement of the theorem itself and make a more precise citation? Lapasotka (talk) 10:47, 12 April 2011 (UTC)
- The converse wouldn't be true without added conditions on h, specifically that h needs to be k-1 times differentiable in a punctured neighborhood of a. I'd like to see a reference for that though. Sławomir Biały (talk) 10:52, 12 April 2011 (UTC)
Here is the proof of the "converse", taken from the proof you switched out for the l'Hôpital-based one:
- To prove the converse, assume that f has an expansion of the form f(x) = Pk+1(x) + hk+1(x)(x−a)^(k+1)
- for some (k+1)-th order polynomial Pk+1 and some function hk+1 which tends to zero as x tends to a. Using the (k−1)-th order Taylor expansion for f we see that
- for some function hk. Using the limits of hk and hk+1 as x approaches a we see that
- so hk is differentiable at a. Now we see from the (k-1)-th order Taylor expansion that f is also k+1 times differentiable at a.
Do you find a mistake here? Lapasotka (talk) 11:34, 12 April 2011 (UTC)
- There are obvious counterexamples. Just take h to be some discontinuous function in a neighborhood of a that is o(1) as x tends to a. Sławomir Biały (talk) 12:07, 12 April 2011 (UTC)
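One concrete instance of such a counterexample (a standard construction, assumed here for illustration):
\[ h(x) = \begin{cases} |x-a| & x \in \mathbb{Q} \\ 0 & x \notin \mathbb{Q} \end{cases} \qquad f(x) = P(x) + h(x)(x-a)^k , \]
here h(x) = o(1) as x \to a and h(a) = 0, yet f is discontinuous at every x \neq a, so for k \ge 2 it cannot be k times differentiable at a.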
So h→0 as x→a. Discontinuity is not a problem as long as one has h(a)=0 (in which case h is continuous at the point a). I hoped this assumption was obvious enough not to be stated explicitly. Note also that the claim is about differentiability at a point, not in some neighborhood around it. Do you have a more substantial counterexample, or can you find an error in the proof? Of course a reference would be the best. Lapasotka (talk) 12:57, 12 April 2011 (UTC)
There are lots of functions which are differentiable at a point and not continuous in any of its neighborhoods. For example, h(x) = x^2 for rational x and h(x) = 0 for irrational x,
at the point x=0. Lapasotka (talk) 13:04, 12 April 2011 (UTC)
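Verification for the example just given, in one line:
\[ \left| \frac{h(x) - h(0)}{x - 0} \right| \le |x| \xrightarrow[x \to 0]{} 0 , \]
so h'(0) = 0 exists, while h is discontinuous at every x \neq 0 (rationals and irrationals give values x^2 and 0 arbitrarily close together).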
Too bad there is no mention of the differential expansion of a function in Wikipedia. At least in my old calculus books (unfortunately not written in English) there was always a theorem of the form
A function f:R→R has a derivative at a∈R if and only if it has a differential expansion at a, that is, there exists a function h:R→R and a constant c such that f(x) = f(a) + c(x−a) + h(x)(x−a) with h(x) → 0 as x → a,
and if this is the case, then c=f'(a).
This should be easy enough for anyone editing this page to verify, and already contains the bit you are worried about. Taylor's theorem says the same thing for higher order derivatives. This is also good material for the Motivation section. Lapasotka (talk) 13:27, 12 April 2011 (UTC)
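A sketch of why the two formulations agree (the routine step, filled in for illustration): given c, set
\[ h(x) = \frac{f(x) - f(a)}{x - a} - c \ (x \neq a), \qquad h(a) = 0 , \]
so that f(x) = f(a) + c(x-a) + h(x)(x-a) always holds, and
\[ h(x) \to 0 \text{ as } x \to a \iff \frac{f(x)-f(a)}{x-a} \to c \iff f'(a) \text{ exists and equals } c . \]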
- There are no functions that are discontinuous in every neighborhood of a point, but twice differentiable at the point. The definition of higher differentiability at a point requires existence of lower-order derivatives in a neighborhood of the point. Sławomir Biały (talk) 14:19, 12 April 2011 (UTC)
Yes, you are right, and the missing part in the argument was to show that the lower order derivatives of f exist near a. The proof only shows that f is "kind of k times differentiable" at a, in the same way that g(x) = x^3 for rational x and g(x) = 0 for irrational x
is "kind of twice differentiable" at 0. In my head this g was "differentiable enough" at 0. This leads to a dilemma: If h from the example above is regarded as once differentiable at 0, why shouldn't g be regarded as twice differentiable at 0? Of course this discrepancy arises from the iterated definition of higher order differentiability. Perhaps there is a canonical way (other than Taylor expansion) of defining higher order differentiability differently, so that g becomes as twice differentiable as h is once differentiable at 0. My guess is that it can be achieved using a single limit with higher order differences. Does anyone know about such a calculus and about any references? If there are any, this matter should probably be mentioned in some subsection. Otherwise, for the purpose of this Wikipedia page, the case is closed for my part and we can just drop the converse statement. Lapasotka (talk) 20:37, 12 April 2011 (UTC)
- To expand on Sławomir Biały's comment: for the sake of differential calculus, we only define the derivative of a function at interior points of the domain (in fact, one can also consider a function defined on a closed interval and define the derivative at an endpoint; but that is in fact a right or left derivative).
- As a consequence, when we say that a function f, defined on some open interval I, admits the k-th derivative at a point x0 ∈ I, we are also implicitly saying that it has all derivatives of order less than k in a neighbourhood of x0. This is because f^(k)(x0) is by definition the derivative at x0 of the function f^(k−1), so that the latter needs to be defined in a neighbourhood of x0.
- If f admits the k-th derivative at x0, then it has a polynomial expansion of order k at x0, meaning that f(x) = P(x) + o(|x−x0|^k) as x → x0, where P is a polynomial of degree at most k. For k=1 this is equivalent to the definition of differentiability; but it is not as soon as k>1. For instance f defined by f(x) = x^8 for rational x and f(x) = 0 for irrational x has a polynomial expansion of order 7 at zero, though it is discontinuous at any x ≠ 0, and only differentiable once at 0, as in your example.
- That said, a nice converse of Taylor's theorem does exist in the setting of k-times continuously differentiable functions; it is indeed a characterization of the class C^k. Precisely: a function f : I → R (I an open interval) is of class C^k if and only if it has at any point a polynomial expansion of order k varying continuously with respect to the point x, meaning that (for all x ∈ I and all y ∈ I)
- f(y) = c0(x) + c1(x)(y−x) + ... + ck(x)(y−x)^k + h(x,y)(y−x)^k,
- where the cj are continuous functions on I, and h is a continuous function of the pair (x,y) such that h(x,x) = 0 for all x. This is due to Marcinkiewicz and Zygmund (1936). The proof is not difficult and quite elementary; we may include a sketch of it if you like. It generalizes to functions of several variables, and to mappings between Banach spaces too. --pma 21:59, 12 April 2011 (UTC)
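For the easy direction of this characterization, a sketch under the C^k hypothesis (notation as in the statement above): take c_j(x) = f^{(j)}(x)/j! and
\[ h(x,y) = \frac{1}{(y-x)^k}\Bigl( f(y) - \sum_{j=0}^{k} c_j(x)(y-x)^j \Bigr) \ (y \neq x), \qquad h(x,x) = 0 ; \]
the mean value form of the remainder bounds |h(x,y)| by \max_{\xi \text{ between } x,y} |f^{(k)}(\xi) - f^{(k)}(x)|/k!, so the (locally uniform) continuity of f^{(k)} makes h continuous up to the diagonal.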
- This reminds me. A good account of some of the issues to do with Taylor polynomials (I believe including the Marcinkiewicz–Zygmund theorem) can be found in Stromberg's "Introduction to classical real analysis", if memory serves. I don't have a copy, but I will see if I can get it out of the library in the next few days. Sławomir Biały (talk) 00:45, 13 April 2011 (UTC)
- Just a side remark. Doing polynomial expansions and the "kind of higher order differential at a point" reminds me of Don Knuth's advocacy of introducing calculus without doing limits, but with big-O estimates instead. It is described in a "Teach Calculus with Big O" commentary that appeared in the Notices of the AMS, and an extended version can be found on his website as preprint Q171. (Somewhat unfortunately it is just a plain TeX source file, so you need to do it yourself to get nice output.) It is nice reading, though I think the proposal has not been followed up much. Marc van Leeuwen (talk) 05:41, 13 April 2011 (UTC)
I support the proposal of User:PMajer to do the converse in C^k. This should be separate from the single variable section which, I suppose, is intended as very elementary. I think it would be appropriate to do it for maps R^n→R^m. A section on R^n→R^m would also be a nice target link for differential geometry oriented articles. Lapasotka (talk) 08:10, 13 April 2011 (UTC)
- The currently stated form of Taylor's theorem in C^k(I) is grossly erroneous. For instance for I=R, f(x)=|x| and a=1 it states that f is in C^k(I) for any k, because the function hk that is zero for non-negative arguments and equal to 2x/(1−x) for negative x is continuous everywhere. The main point that is missed is that it does not suffice to consider a single Taylor polynomial; one needs a family of Taylor polynomials at each point of the interval, of which each of the coefficients should vary continuously with the base point, as correctly formulated by pma above. I am unsure though whether this is the proper thing to state at that point of the article, since it involves rather more complicated notions than either the previous or the following statements. So I will remove the erroneous statement shortly, unless the situation is repaired in a satisfactory manner. I would also like to say to Lapasotka, with all due respect for your efforts to improve this article, that it might be time to stop adding things before having studied these matters in detail in the literature, and making sure you understand the various issues at stake. Maybe you should just leave the article for some time, and allow editing by people with a different perspective. Marc van Leeuwen (talk) 09:50, 14 April 2011 (UTC)
- I removed the statement. The result is not even true locally in a neighborhood of a, since hk is automatically continuous if f is k times differentiable. Moreover, hk will automatically be (k−1) times differentiable in a punctured neighborhood of a. Sławomir Biały (talk) 12:08, 14 April 2011 (UTC)
That was sheer sloppiness on my part. I was going to write down the correct formulation by pma. Do you have any objections to including the correct version? Lapasotka (talk) 13:27, 14 April 2011 (UTC)
- I think it's not really suitable for this point in the article. If it belongs here at all, it would be in a section at the bottom on generalizations and extensions. Sławomir Biały (talk) 13:48, 14 April 2011 (UTC)
Motivation
That is fine. How about changing the pictures in the motivation to show several approximations for sin(x)? It would be more vivid and the article already has so many instances of the exponential function that someone might wonder whether Taylor's theorem is good for anything else. Lapasotka (talk) 15:22, 14 April 2011 (UTC)
- Sounds good. Sławomir Biały (talk) 15:26, 14 April 2011 (UTC)
What was your rationale for discarding the formal explanation of how one arrives at Taylor expansions from the differential expansion? I included it in the motivation to make Taylor's theorem look like a very natural result, one which can easily be derived by hand with a routine computation. The current form kind of hides the very simple mechanism, as I see it, for no particular reason. Of course this is also a matter of taste. Lapasotka (talk) 16:33, 14 April 2011 (UTC)
- I don't think the motivation section should emphasize the method of proof. That's going to be a turn-off to most readers who just need the theorem for an application. Sławomir Biały (talk) 12:25, 15 April 2011 (UTC)
I think the latest edit by User:Marc van Leeuwen should be shortened a little bit. Now there is almost more demotivation than motivation. Or should we change the section name ;) ? Lapasotka (talk) 12:19, 15 April 2011 (UTC)
- It's probably important to emphasize the limitations of Taylor's theorem somewhere, but doing so before the theorem is even stated seems premature. Sławomir Biały (talk) 12:25, 15 April 2011 (UTC)
- I disagree (obviously). Though it is maybe a bit on the long side, I think it is crucially important to understand what Taylor's theorem sets out to do. Somewhere we all want to believe that Taylor polynomials approximate the function value outside the point a (I'm sure this is what Taylor wanted to believe) and only after having killed this hope as an impossible mirage can one start to appreciate what Taylor's theorem does have to say. Marc van Leeuwen (talk) 12:44, 15 April 2011 (UTC)
- It's a little strange to have a section asking the reader to note something said and not said in a theorem that hasn't even been stated yet. I can only imagine what a reader who has never encountered Taylor's theorem would make of such a paragraph. Furthermore, I think it is easier to understand what this is saying if it is informed by examples: say 1/(1+x^2) and e^{-1/x^2} for the two scenarios described in the paragraph. But there isn't enough space there to cover any examples in detail. I think it's better pedagogy to state the theorem, and then explain properly what the limitations are. Sławomir Biały (talk) 12:53, 15 April 2011 (UTC)
- Well, the motivation section is (and was) already saying things about Taylor's theorem even though the theorem hasn't been stated yet. Like "Taylor's theorem ensures that the quadratic approximation is... a better approximation than the linear approximation". It also says "Similarly, we get still better approximations to f if we use polynomials of higher degree"; although not literally attributed to Taylor's theorem, that is clearly the suggestion. And since that says twice "better approximation", which in a very natural sense of that term is not true, there is some reason to tone this down a bit. But the main point of the paragraph I added is not that Taylor's theorem does not say "better approximation", but that it cannot say such a thing. That makes sense even if no formulation is given yet, and alerts the reader to read closely what the theorem does say. Marc van Leeuwen (talk) 15:34, 15 April 2011 (UTC)
- Well, it may be appropriate to adjust the wording of the motivation, then. The intuition is that we expect higher order polynomials to give "better" approximations. That's kinda the point of a "motivation" section: to clarify the intuition behind the theorem. Taylor's theorem gives the precise sense in which that is true. It's not surprising that the naive intuition misses some details, but I don't see the value in rubbing this in until the theorem is actually presented. Sławomir Biały (talk) 16:21, 15 April 2011 (UTC)
Relationship to Analyticity
I combined the stub section on complex analysis with the subsection "Relationship to analyticity" into a new section. I started writing something on the role of Taylor's theorem in complex analysis (which I believe is minimal), but it needs some more work. I will get back to it after some real life duties. I know there are too many details at this point, but they will be cut down. Lapasotka (talk) 21:08, 14 April 2011 (UTC)
- Looks good. Sławomir Biały (talk) 22:58, 14 April 2011 (UTC)
Now the section is at least in some shape. There is one screenful of "theory" which does not exist in Wikipedia in such a brief form. Should there be a mention of the behavior of the Taylor polynomials of the "innocent looking real analytic function"
at a=0 and r→1, and how it can be understood using complex analysis? Lapasotka (talk) 12:27, 15 April 2011 (UTC)
- The article should discuss how the Taylor approximation (or the error estimates) can get worse in higher order, maybe with the 1/(1+x^2) example compared and contrasted with a function like e^(-1/x^2). I think van Leeuwen's last paragraph could be spun out into a separate section dealing with these cases, maybe right before the analyticity section. What other counterintuitive limitations are there with Taylor's theorem? Sławomir Biały (talk) 12:35, 15 April 2011 (UTC)
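The two scenarios in equations, as they are standardly presented:
\[ \frac{1}{1+x^2} = \sum_{n=0}^{\infty} (-1)^n x^{2n} \quad (|x| < 1) , \]
is real analytic on all of R, yet its Taylor polynomials at 0 diverge for |x| > 1 because of the complex poles at \pm i; on the other hand
\[ g(x) = e^{-1/x^2} \ (x \neq 0), \quad g(0) = 0, \qquad g^{(n)}(0) = 0 \text{ for all } n , \]
so every Taylor polynomial of g at 0 is identically zero even though g vanishes only at 0.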
Proof of the integral form
It looks like the proof of integral form of the remainder got lost somewhere. I think it had been moved to an earlier version of the "motivation" section. I've put the proof back in the article for now, but we can discuss removing it if that was the actual intention. Sławomir Biały (talk) 12:01, 22 April 2011 (UTC)
- I removed the subsection when I wrote the new Motivation section and added the crucial bit of the proof over there. There are quite a few proofs on this page right now and the whole may need some re-thinking. Perhaps collecting the one-variable proofs together in a shorter form and moving them into the last subsection of "Taylor's theorem in one real variable" would be a good idea. They are considerably more elementary than the rest of the article. I also think that a short derivation of the integral form of the remainder should appear in the Motivation instead of the special case of k=2, but I didn't want to react to your edit without further thought. It has some elements which I like a lot, but I don't believe that k=2 as a special case has any qualities that make it deserve to be underlined. Lapasotka (talk) 20:52, 22 April 2011 (UTC)
- I think it's essential that the motivation section be proof-free. Our target audience should not be presumed to be interested in proofs, which is also why I think the proofs should be towards the end of the article. The primary application of Taylor's theorem, outside the rarefied world of pure mathematics, is to approximate actual functions, and to have some reasonable control over the error. The k=2 case is important because it is easiest to explain in that case why, by matching second derivatives, one intuitively expects a "better" approximation. So the reason for emphasizing this case in a "motivation" section seems clear. Most readers should be expected to have a reasonable grasp of the intuitive meaning of the second derivative, although it's less likely that they will have a great deal of competence with integration by parts. The earlier parts of the article should mostly emphasize the theorem and its applications. I agree that the proofs should be shortened and simplified as much as possible. Ideally, we should just be able to report the essential idea of each of the proofs without giving details. Sławomir Biały (talk) 22:14, 22 April 2011 (UTC)
Where is the actual statement of the theorem??
I was skimming through the article hoping for a precise formal statement of the theorem (with either the Lagrange or Cauchy remainder). I saw great writing introducing what the theorem is about. But where is the actual statement?? What are the remainders? For example, "for an n-times differentiable function f, f=T+R, where T is the Taylor series up to the n-th term and R = f^(n)(c)*x^n/n!" The reason I post this is that I think a straightforward statement with explicit formulae is what people coming to this page need the most. I understand that Taylor wasn't responsible for the discovery of the remainder term. But I argue that for students in math, the theorem is far from complete without the remainder term. Experts, please help! 冷雾 (talk) 06:24, 7 May 2011 (UTC)
- The section "Statement of the theorem" and the one immediately following it "Explicit formulae for the remainder" seem to be what you're after. Sławomir Biały (talk) 10:37, 7 May 2011 (UTC)
No reference
Shocking how there is no reference for the multivariable Taylor expansion. I added one for that.--Maschen (talk) 09:30, 4 December 2011 (UTC)
- As the one who originally placed the tag, the issue here is that the precise version of the theorem that we quote here does not seem to be easy to find in the literature. Most sources have an additional hypothesis, e.g., that the function have one additional degree of differentiability. The source you provided apparently even lacks a clear statement of the theorem. Moreover, what it does say is "Taylor's theorem" is clearly wrong for k-times differentiable functions, or even smooth functions. This is not a good reference for the article. I've restored the tag. Sławomir Biały (talk) 13:22, 4 December 2011 (UTC)
- To state the obvious: if no sources state the theorem in exactly the same way as Wikipedia currently does, maybe Wikipedia shouldn't state it in that form? Even if that means weakening the theorem slightly, if professional book authors think that a weaker form is significantly easier to understand, then I think we'd need a very good argument for this article to be any different. Plus, of course, there is the issue of correctness, bearing in mind that Wikipedia does not allow original research.
- On the first of those two issues (comprehensibility, rather than correctness), even ignoring other authors' opinions, I really hate the fact that at the moment the reader is forced to understand total derivatives even though the conclusion only uses partial derivatives. I actually simplified the statement (and weakened slightly) to the form seen in most textbooks with this revision, but was reverted (with the incorrect edit summary "this is wrong"). Perhaps we can get consensus to make a change like this, and perhaps also remove multi-index notation from the statement for even better accessibility? Quietbritishjim (talk) 18:51, 4 December 2011 (UTC)
- That edit asserted that continuous differentiability is equivalent to differentiability, which is indeed wrong. Sławomir Biały (talk) 19:49, 4 December 2011 (UTC)
- That is a good point. But a simple change from "k times differentiable at the point a" to "k times continuously differentiable at the point a" fixes that. Quietbritishjim (talk) 20:09, 4 December 2011 (UTC)
This is quite old now, but anyway... I disagree with Sławomir Biały above when he mentioned:
- "The source you provided apparently even lacks a clear statement of the theorem. Moreover, what it does say is "Taylor's theorem" is clearly wrong for k-times differentiable functions, or even smooth functions. This is not a good reference for the article. "
I have that book (Riley et al, 2010), and have checked the chapter (5) containing the theorem inside-out. Here is the theorem and surrounding text - straight from the textbook (§ 5.7 p. 161-162):
____________________
- "... The most general form of Taylor's theorem, for a function f(x1, x2, ... xn) of n variables, is a simple extension of the above . Although it is not necessary to do so, we may think of the xi coordinates in n-dimensional space and write the function as f(x), where x is a vector from the origin to (x1, x2, ... xn). Taylor's theorem then becomes:
- where are evaluated at . For completeness, we note that in this case the full Taylor series can be written in the form:
- where is the vector differential operator del, to be discussed in chapter 10."
____________________
so yes - it does provide a clear statement of the theorem (is this not Taylor's theorem? no?) in a way that can be used in practice, and no - it doesn't actually say "Taylor's theorem is clearly wrong for k-times differentiable functions, or even smooth functions". That book is full-on, two inches thick, at the undergraduate level (1st year through 4th). I simply fail to see why it's "unreliable"...
It's quite hypocritical that you have a "large collection of famous treatises" and yet you are unable to find a reference for the theorem yourself (else it would have been done by now - right?)... Given the importance of the theorem, it should be easy to find a reference for at least one "clear" statement of the theorem in any form (even if it were not identical to the version in the article) in a collection of famous treatises, am I wrong?
In any case, am I "clearly" wrong somewhere? If so, could anyone please point out my misunderstandings (if anyone has access to the book that would help, though it's obviously not essential). Thank you. =)
On a related note - I strongly agree with Quietbritishjim's statement:
- "To state the obvious: if no sources state the theorem in exactly the same way as Misplaced Pages currently does, maybe Misplaced Pages shouldn't state it in that form?"
I've seen many things reverted and deleted for lack of citations. Why are we including a form of the theorem which is not easily found in the literature, leaving it to sit around with a "citation needed" template, if it really is so hard to find a reference? The section should just be rewritten, yet that hasn't even happened; the situation just doesn't add up...
By no means will I add this citation, just adding my view... F = q(E+v×B) ⇄ ∑ici 19:14, 25 May 2012 (UTC)