== Lebesgue integrals ==
Hello math buddies! I am interested in learning how to do Lebesgue integrals as an independent study project to extend my (fairly strong and rigorous) calculus knowledge. Unfortunately I have a very woeful set theory background, so I need something that builds up to measure theory (from which the Lebesgue integral is constructed) assuming little or no prior knowledge of set theory, i.e. by covering the set theory needed. I am not looking for something that will take me really deep into set theory, because I anticipate remediating my lack of set theory in the near future; I'm looking for just enough so I can do Lebesgue. Can anyone recommend a work or works, again preferably not unnecessarily heavy on the other aspects of set theory or measure theory, and with a nice amount of rigour but still pretty relaxed so I can get through a lot of material quickly? Thanks a bunch :) 21:20, 3 October 2011 (UTC)
Revision as of 00:03, 4 October 2011
Welcome to the mathematics section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
September 26
A bit on the Heavi Side
I have a problem to do with a differential equation involving a Heaviside function. It goes : y″ + 4y = { sin t, 0 ≤ t < 2π ; 0, 2π ≤ t }, with y(0) = 0 , y′(0) = 0
I have got it down to Y(s) = (1 - e^(-2πs)) / ( (s^2 + 1)(s^2 + 4) ) = 1/( 3(s^2 + 1) ) - 1/( 3(s^2 + 4) ) - e^(-2πs)/( 3(s^2 + 1) ) + e^(-2πs)/( 3(s^2 + 4) ), but now I do not know how to convert the exponential terms into functions of t, while I know the ones without the exponentials are variants of sin(t). If anyone knows how to work this out, then I would be pleased to hear that. Thank You. Chris the Russian Christopher Lilly 07:18, 26 September 2011 (UTC)
- It appears you are trying to do this with Laplace transforms. (There are other ways.) An exponential factor in the s domain corresponds to a shift operation in the t domain; search for "Time shifting" in the Laplace transforms article. So basically just find the inverse transform without the exponential factor and then shift the function to the right by 2π. In the solution I got, y is 0 for t > 2π, so the shifted components should cancel out the non-shifted ones except on the interval [0, 2π].--RDBury (talk) 13:08, 26 September 2011 (UTC)
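For anyone following along, the partial-fraction step and the inverse transform of the non-exponential part are easy to check with a computer algebra system. A SymPy sketch (the code is illustrative, not part of the original discussion):

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)

# The transform without the exponential factor
Y = 1 / ((s**2 + 1) * (s**2 + 4))

# Partial fractions: 1/(3*(s**2 + 1)) - 1/(3*(s**2 + 4))
print(sp.apart(Y, s))

# Its inverse transform: (1/3)*sin(t) - (1/6)*sin(2*t) for t > 0
y = sp.inverse_laplace_transform(Y, s, t)
print(sp.simplify(y))
```

The e^(-2πs) factors then correspond to this same function shifted right by 2π, which is what makes everything cancel for t > 2π.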
Thank You. I hope I made it clear that what is in the curly brackets was a step function, but I suppose that is obvious with the title Heaviside. I wondered why sin( t - 2π ) should be any different from sin(t) itself. If I took the equation on its own, I can work that out, but to be honest, I could not understand how a function could change like that symbolically. I get the graph, like a switch or something going on, then all of a sudden the value of some function jumps or suddenly appears different, but it still seems hard to fathom.
I do have a different problem involving an integro differential equation. It goes :
Now I was advised that the assumption is that , and to isolate the integral by rearranging the equation, such that I got :
Next I was told to take the derivative of both sides, and was told that this meant that since
that
But when I carry out the integration on , according to the Fundamental Theorem of Calculus, I get : , or have I done the wrong thing, since I thought that taking the derivative of an integral is where they cancel each other out. But also, if it is assumed that
- , what is
- ?
If I assume that, then I think I get since if , then , as stated to begin with. If I do say that , this seems correct, since it says that , and indeed , and since if y′(0) = 0, being the first derivative, so will be the second, since that will just be the derivative of zero, so this seems right.
To solve this, I tried Laplace once more, and got
implies that , so that the Laplace transform of = the Laplace transform of , which means after carrying out all the steps that Y(s) = s/(s^2 + 1)^2 + s/(s + 1), which I thought was cos(t) + t sin(t), but it does not seem to work when I check it. I certainly would appreciate any help on this, thank You. Chris the Russian Christopher Lilly 07:24, 27 September 2011 (UTC)
Trig identity and complex numbers.
I tried applying the following trig identity to the right hand side of Euler's Formula . Interestingly, one gets , but simplifying the square root gives , which can't be true. Any insight as to why this identity fails here? I also found through wolframalpha.com that . It almost seems as if it's trying to "cancel out" the zero with an infinity, though doesn't really make complete sense since it's composed inside another function. — Trevor K. — 16:55, 26 September 2011 (UTC) — Preceding unsigned comment added by Yakeyglee (talk • contribs)
- In your identity substitute a=b=1, c=0 to get which is not true. Bo Jacoby (talk) 17:20, 26 September 2011 (UTC).
- As Bo points out, your formula is a bit off. See List of trigonometric identities#Linear combinations for the right identity. But the question still stands: why does that one still seemingly fail? If you work through a proof of the identity, you'll see that you end up dividing by zero at some point. So, strictly speaking, the identity should come with a disclaimer about what values of a and b are allowed. Your comment about the zero and infinity "trying to cancel each other out" is on the right track: see L'Hôpital's rule. 130.76.64.109 (talk) 18:02, 26 September 2011 (UTC)
The L'Hopital part could have been true, but I don't think that's what's happening here. Take , then you have the same mess with the arctan, but without the multiplication by 0. -- Meni Rosenfeld (talk) 18:26, 26 September 2011 (UTC)
- Should be (note argument of arctan). But that's not the problem. The problem is that some identities developed for real numbers only work because we have sloppily assumed that squares are always nonnegative (in particular, wherever the square root symbol appears it should raise red flags with regards to extension to complex numbers. With complex numbers you also need to worry about branches and stuff). The generalization of squaring in its capacity as something nonnegative to real matrices is and to complex numbers is . If some version of the identity works for complex numbers, I'll bet it involves the factor . -- Meni Rosenfeld (talk) 18:12, 26 September 2011 (UTC)
- That equation still doesn't work with a=0, c=-1. Probably the most concise formula that actually works uses the atan2 function.--RDBury (talk) 19:34, 26 September 2011 (UTC)
- It's true that complex numbers sometimes break the rules we learned with real numbers, Meni, but this one does work (assuming you mind the quadrants, as RDBury points out). 130.76.64.119 (talk) 21:27, 26 September 2011 (UTC)
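For the record, the amplitude-phase form that does hold for all real a and b uses atan2, as RDBury says. A quick numerical check (my sketch; the convention R·cos(x − atan2(b, a)) is one of several equivalent ones):

```python
import math

def combine(a, b, x):
    """a*cos(x) + b*sin(x) rewritten as R*cos(x - phi) with phi = atan2(b, a)."""
    R = math.hypot(a, b)
    phi = math.atan2(b, a)
    return R * math.cos(x - phi)

# Includes the a = 0 case that breaks the plain-arctan version
for a, b in [(3.0, 4.0), (0.0, -1.0), (-2.0, 0.5)]:
    for x in [0.0, 1.0, 2.5, -3.0]:
        assert abs(combine(a, b, x) - (a * math.cos(x) + b * math.sin(x))) < 1e-12
print("identity holds with atan2 for all tested (a, b)")
```

atan2 keeps track of the quadrant of (a, b), which is exactly the information the one-argument arctan throws away.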
It's easier to see what is going on here, if you approach this using complex numbers from the start (trigonometry shouldn't be taught before complex numbers). For real a and b, we have:
a cos(x) + b sin(x) =
a (exp(i x) + exp(-i x))/2 + b (exp(i x) - exp(-i x))/(2i) =
(a - b i)/2 exp(i x) + (a + b i)/2 exp(-i x) =
|a + b i| cos(x + φ), with φ = arg(a - b i)
So, the trig identity works because you have an expression involving exp(i x) and exp(-i x) in which they are multiplied by complex numbers of equal modulus. You can generalize it: for arbitrary a and b not equal to zero, we can write:
a exp(i x) + b exp(-i x) =
a exp(-i p) exp(i (x + p)) + b exp(i p) exp(-i (x + p))
You then choose p such that
|a exp(-ip)| = |b exp(i p)|
If |a| is not the same as |b|, then we can't choose p real, but that's not a problem. We can always choose p = 1/(2 i) Log(a/b) to make both coefficients equal, but of course, both a and b have to be nonzero.
The fundamental issue here is that if you only have exp(i x), you can't magically get exp(-ix) out of nowhere. Count Iblis (talk) 21:49, 26 September 2011 (UTC)
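Count Iblis's choice p = Log(a/b)/(2i) can be checked numerically with complex arithmetic (the particular a, b, x below are my arbitrary picks):

```python
import cmath

a, b = 2.0 + 1.0j, 0.5 - 0.3j    # arbitrary nonzero coefficients
p = cmath.log(a / b) / 2j        # generally complex when |a| != |b|

# Both rewritten coefficients collapse to the same value, a square root of a*b
c1 = a * cmath.exp(-1j * p)
c2 = b * cmath.exp(1j * p)
assert abs(c1 - c2) < 1e-12

# And the rewrite leaves the original expression unchanged at any x
x = 0.7 + 0.2j
lhs = a * cmath.exp(1j * x) + b * cmath.exp(-1j * x)
rhs = c1 * cmath.exp(1j * (x + p)) + c2 * cmath.exp(-1j * (x + p))
assert abs(lhs - rhs) < 1e-12
```

With |a| ≠ |b| the phase shift p acquires an imaginary part, which is exactly the point: the identity survives, but only if you allow a complex "phase".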
- What do you mean? Just compute . Bo Jacoby (talk) 07:37, 27 September 2011 (UTC).
September 27
Multiply 3-dimensional matrices
I was wondering if there was a defined method for this sort of thing. I would guess that it would be something like this:
If you had a 3x3x3 matrix A, with these slices, (looking at it straight on going from front to back)
And these slices looking at it from the right, from back to front (same matrix)
You would do something like this to multiply it by itself:
I don't know if this works (I just extended the rows and columns to 3x3 matrices) or if you need to multiply even more matrices, but I was wondering if you guys had any thoughts? Aacehm (talk) 00:40, 27 September 2011 (UTC)
- You may be looking for tensors. Bobmath (talk) 02:34, 27 September 2011 (UTC)
- Agree with Bobmath. There is a tensor product which works differently from the scheme you present above. You could make up your own product rule for 3-dimensional matrices but it might not be useful for anything. Tensors can be given a physical interpretation. See Tensor#Applications. EdJohnston (talk) 02:46, 27 September 2011 (UTC)
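To make the tensor suggestion concrete: with 3-dimensional arrays, any pairing of indices can be contracted, and different pairings give different "products". A NumPy sketch (the random arrays and the two contraction patterns are my choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3, 3))

# One natural product: contract the last index of the first factor with
# the first index of the second; the result is a rank-4 array.
C = np.einsum('ijk,klm->ijlm', A, A)
print(C.shape)  # (3, 3, 3, 3)

# Closer to the slice-by-slice idea above: multiply each front-to-back
# slice A[:, :, k] with itself as an ordinary 3x3 matrix product.
D = np.einsum('ijk,jlk->ilk', A, A)
assert np.allclose(D[:, :, 0], A[:, :, 0] @ A[:, :, 0])
```

Neither is "the" product of 3-dimensional matrices; which contraction is useful depends entirely on what the indices mean.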
Easy way to judge if a holomorphic function is an injection and/or has no branch points
Suppose I have a function that I know is holomorphic almost everywhere, and I know its power series expansion about some point, but further that this series doesn't converge everywhere and thus the function has singularities. Is there an easy way to judge: (a) whether the function is an injection, or otherwise; and (b) if the function has a branch point. The reason I'm interested is that I'd like to try it for a global-optimization procedure (inverting the series, which obviously will work properly only if the function is injective; I think the branch point stuff could mess with it as well): as far as I know, Newton-type methods are local-optimization ones, not global. Thanks. --Leon (talk) 06:06, 27 September 2011 (UTC)
- Or, and excuse me if my moment of realization is but a moment of madness, does a function not being an injection imply that its inverse will have a branch point, and vice versa; and further, will this method only work with bijective functions, those being but the Möbius transformations?
- In any case, are there any rules relating the presence of singularities/derivatives equal to zero/series expansions at particular points? To cut to the chase, is there any mileage to be had in my suggested method of finding a global minimum?--Leon (talk) 06:21, 27 September 2011 (UTC)
Are polynomial functions invertible
I apologize if it is too trivial a question, but is an arbitrary polynomial invertible as a function if the domain is suitably restricted? It seems to me they are, but I can't quite justify it. Can someone explain why? Thanks -Shahab (talk) 07:27, 27 September 2011 (UTC)
- Doesn't the fundamental theorem of algebra state that, over the complex plane, every polynomial function is a surjection? If so, every polynomial function has an inverse, albeit, in general, a non-unique one, and thus the most general inverse will be a many-valued function (so not strictly a function).--Leon (talk) 08:02, 27 September 2011 (UTC)
- Strictly speaking, the FToA does not state, but implies that. Although it is a relatively trivial implication. — Fly by Night (talk) 20:14, 29 September 2011 (UTC)
- (Assuming a real domain:) Polynomials only have a finite number of critical points. Near any other point you can restrict the domain so that there is a smooth inverse using the inverse function theorem. Staecker (talk) 11:42, 27 September 2011 (UTC)
- (Over the complex plane:) Polynomials only have a finite number of critical points. The derivative of a polynomial is a polynomial. — Fly by Night (talk) 19:52, 7 October 2011 (UTC)
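Staecker's observation is easy to play with concretely. A SymPy sketch, using an example polynomial of my own choosing:

```python
import sympy as sp

x = sp.symbols('x', real=True)
p = x**3 - 3*x                     # an example polynomial (my choice)

# Finitely many critical points, since p' is itself a polynomial
crit = sp.solve(sp.diff(p, x), x)
print(crit)                         # [-1, 1]

# Between consecutive critical points p is strictly monotone, so it can
# be inverted there; e.g. solve p(x) = 2 on the branch x > 1:
root = sp.nsolve(p - 2, x, 2)
print(root)                         # 2, since p(2) = 8 - 6 = 2
```

So away from the finitely many critical values, a polynomial always has local inverses; a single-valued global inverse needs the restriction to one monotone piece.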
- Polynomial maps (as opposed to functions) are invertible when the Jacobian is nonzero in dimensions 1 and 2. The higher dimensional case is believed to be true, and is known as the Jacobian conjecture. Sławomir Biały (talk) 11:14, 28 September 2011 (UTC)
ultrafilters
Can someone give me an example of an ultrafilter which is not principal? — Preceding unsigned comment added by 93.173.34.90 (talk) 10:37, 27 September 2011 (UTC)
- Yes, but it will require some form of the axiom of choice, meaning it won't be easy to visualize. Are you familiar with Zorn's lemma? From that, you can show that every filter is contained in a maximal filter, and it's not hard to see that a maximal filter is an ultrafilter. So begin with the filter of all subsets of N which have finite complement (it's not hard to check that this is a filter on N). Then by Zorn, there is an ultrafilter containing this filter. Since it contains all cofinite sets, it cannot contain any finite sets, so it cannot be principal.--Antendren (talk) 10:46, 27 September 2011 (UTC)
In a sense, nobody can show you such an example. The set of all cofinite subsets of an infinite set is a filter. Now look at some infinite subset whose complement is also infinite. Decide whether to add it or its complement to the filter. Once you've included it, all of its supersets are included and all subsets of its complement are excluded, and all of its intersections with sets already included are included, and all unions of its complement with sets already excluded are excluded, etc. Next, look at some infinite subset whose complement is also infinite and that has not yet been included or excluded, and make the same decision. And keep going....."forever". That last step is where Zorn's lemma or the axiom of choice gets cited. Michael Hardy (talk) 17:32, 27 September 2011 (UTC)
I see. Thank you Both! — Preceding unsigned comment added by 93.173.34.90 (talk) 17:41, 27 September 2011 (UTC)
Explicit Runge-Kutta methods
What is the highest possible order of an explicit Runge-Kutta method? --84.62.204.7 (talk) 20:27, 27 September 2011 (UTC)
What is the highest order of a known explicit Runge-Kutta method? --84.62.204.7 (talk) 12:58, 28 September 2011 (UTC)
Quick question on exponentials
Hi wikipedians: I'm no mathematician, and I came across a formula in a paper I'm reading that I can't make sense of. Could someone help me with this? It says that for small values of p:
(1-p)^N ≈ e^(-Np)
Why is this? Any help would be appreciated. I don't have a digital copy of the paper or I would post a link. Thanks! Registrar (talk) 21:12, 27 September 2011 (UTC)
- Because (1−p)^N = e^(N ln(1−p)). The approximation step simply replaces the log function with a tangent line: ln(1−p) ≈ −p for small p. 130.76.64.109 (talk) 21:20, 27 September 2011 (UTC)
- Alternatively, if you expand with the binomial theorem, the first two terms are 1 − Np. All the rest have p² in them, so since p is small, the remaining terms are tiny. Simultaneously, the power series for e^x starts 1 + x, so plugging in x = −Np gives the same first two terms, 1 − Np.--Antendren (talk) 21:25, 27 September 2011 (UTC)
Thanks both of you! The theory behind the first explanation isn't perfectly clear to me, but I can see from graphing that it works. The second explanation makes perfect sense. So thanks very much. Registrar (talk) 21:37, 27 September 2011 (UTC)
- Glad you're happy. Note that the second explanation depends on Np being small. For p=0.01 and N=200, Np = 2, and (1−p)^N ≈ 0.1340, but e^(−Np) ≈ 0.1353. 130.76.64.121 (talk) 22:36, 27 September 2011 (UTC)
The approximation is actually better than either explanation suggests, because of the fact that
ln(1−p) = −p − p²/2 − p³/3 − ⋯,
or in other words,
(1−p)^N = e^(−Np) · e^(−Np²/2 − Np³/3 − ⋯),
so the relative error stays small whenever Np² is small, even if Np is not.
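A quick numerical check of the p = 0.01, N = 200 example, together with a next-order correction taken from ln(1−p) = −p − p²/2 − ⋯ (this sketch is mine):

```python
import math

p, N = 0.01, 200
exact = (1 - p) ** N
naive = math.exp(-N * p)
print(exact, naive)     # ~0.13398 vs ~0.13534: about 1% apart

# Keeping one more term of ln(1 - p) tightens the estimate a lot
better = math.exp(-N * p - N * p**2 / 2)
print(better)           # ~0.13399
```

Here Np = 2 is not small, yet the relative error is only about Np²/2 = 1%, which matches the correction-factor picture.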
Series
Under what circumstances is this equality always true: ∑(a_n + b_n) = ∑a_n + ∑b_n? Do the series ∑a_n and ∑b_n have to be absolutely convergent or just convergent? Widener (talk) 21:30, 27 September 2011 (UTC)
If the limits as N → ∞ of the partial sums of both series exist (and these limits are, by definition, the sums to infinity), then, because the sum of the limits is the limit of the sum, the sum to infinity of the term-wise sums is equal to the sum of the two sums to infinity. Count Iblis (talk) 22:47, 27 September 2011 (UTC)
You can also use this in the case of divergent summations. Suppose e.g. that ∑(a_n + b_n) is convergent and we write c_n = a_n + b_n, but both ∑a_n and ∑b_n are divergent. Then define the functions f(z) = ∑ a_n z^n, g(z) = ∑ b_n z^n, and h(z) = ∑ c_n z^n, for |z| small enough that these converge:
If f(z) and g(z) can be analytically continued, then h(z) = f(z) + g(z) continues to hold, and you can put z = 1 in here, despite the series for f(z) and g(z) not converging there. If f(z) and g(z) have poles at z = 1, then you can evaluate h(1) by computing the limit of f(z) + g(z) as z → 1. Count Iblis (talk) 23:16, 27 September 2011 (UTC)
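A concrete instance of the divergent case (the sequences here are my own choice, since the original formulas were lost in transcription): take a_n = 1 and b_n = 2^(−n) − 1. Both series diverge, yet the term-wise sums are 2^(−n), and the generating functions f(z) = 1/(1−z) and g(z) = 1/(1−z/2) − 1/(1−z) have poles at z = 1 that cancel in h(z) = 1/(1−z/2), giving h(1) = 2:

```python
# Partial sums: sum(a_n) and sum(b_n) run off to +/- infinity, but the
# term-wise sums a_n + b_n = 2**-n converge to 2, matching h(1) = 2.
partial_a = partial_b = partial_ab = 0.0
for n in range(60):
    a_n, b_n = 1.0, 2.0 ** -n - 1.0
    partial_a += a_n
    partial_b += b_n
    partial_ab += a_n + b_n

print(partial_a, partial_b, partial_ab)  # 60.0, about -58.0, about 2.0
```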
September 28
Power Spectrum Estimation
I have some measurements and I am trying to get a hold of its power spectrum. The measurements are discrete so the estimation of the power spectrum I get is also discrete. My question is, what is a good method to interpolate the values in between the frequencies for power spectrum estimation? I only need interpolation, no extrapolation. This is just an opinion question to see what others more experienced around here think. For example, I may only get 128 frequencies so should I use just linear interpolation, nearest neighbor method, cubic splines, etc? Cubic splines are very sensitive to the boundary conditions but will be very smooth. Which b.c. make sense here anyway (natural, clamped, etc.)? On the other hand linear interpolation is faster and that is what we all see when we plot it (graphing calculators just linearly interpolate between the nodes). Nearest neighbor is dumb but is the fastest and may be justified if my frequencies are very close-by (very small df). I am using Welch's method so I am trying to use a small window (to take more averages and lower the variance) but that lowers my resolution. What do you guys think? Thanks! - Looking for Wisdom and Insight! (talk) 05:03, 28 September 2011 (UTC)
- Why interpolate at all? What are you going to use the interpolated values for? Looie496 (talk) 05:52, 28 September 2011 (UTC)
- Crudely, couldn't you just approximate each integral by a Riemann sum based on your sample? Sławomir Biały (talk) 11:37, 28 September 2011 (UTC)
- The typical way of getting an interpolated power spectrum is a technique called zero-padding. After windowing (or whatever), take each set of data, remove the mean and extend the data set by adding a large number of zeros on the end. Those zeroes will contribute nothing to the Fourier sum / integral, and so they don't affect the values you receive, but it will force the discrete Fourier transform to return values at intermediate points. On platforms that implement the fast Fourier transform algorithm, there is a computational advantage to choosing the total number of points to be an exact power of 2. The power spectrum produced in this way will be smoothed in a natural fashion, but remember that smoothing doesn't add any information beyond what you already had. For example, one needs to know that the uncertainty in peak locations is still comparable to the spacing of frequencies in the original transform. Dragons flight (talk) 17:15, 28 September 2011 (UTC)
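Dragons flight's zero-padding recipe in a NumPy sketch (the toy signal is my choice; note how the padded spectrum agrees exactly with the plain one at the original bins):

```python
import numpy as np

# A short windowed record: a 10 Hz tone sampled at 64 Hz for 1 s (toy data)
fs, n = 64, 64
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 10 * t) * np.hanning(n)
x = x - x.mean()                      # remove the mean, as described above

# Plain spectrum: frequency bins every fs/n = 1 Hz
f_plain = np.fft.rfftfreq(n, 1 / fs)
P_plain = np.abs(np.fft.rfft(x)) ** 2

# Zero-padded to 8*n points: bins every 0.125 Hz -- a smoothly interpolated
# version of the same spectrum, denser samples but no new information
nfft = 8 * n
f_pad = np.fft.rfftfreq(nfft, 1 / fs)
P_pad = np.abs(np.fft.rfft(x, nfft)) ** 2

# The padded spectrum matches the plain one at the original bin frequencies
assert np.allclose(P_pad[::8], P_plain)
print(f_plain[np.argmax(P_plain)], f_pad[np.argmax(P_pad)])  # both near 10 Hz
```

As noted above, the peak location uncertainty is still set by the original record length, not by the padded bin spacing.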
Integral Truths and mein eigenes Problem
I have a problem involving an integro differential equation. It goes :
Now I was advised that the assumption is that , and to isolate the integral by rearranging the equation, such that I got :
Next I was told to take the derivative of both sides, and was told that this meant that since
that
But when I carry out the integration on , according to the Fundamental Theorem of Calculus, I get : , or have I done the wrong thing, since I thought that taking the derivative of an integral is where they cancel each other out. But also, if it is assumed that
- , what is
- ?
If I assume that, then I think I get since if , then , as stated to begin with. If I do say that , this seems correct, since it says that , and indeed , and since if y′(0) = 0, being the first derivative, so will be the second, since that will just be the derivative of zero, so this seems right.
To solve this, I tried Laplace once more, and got
implies that , so that Laplace of = Laplace of , which means after carrying out all the steps that
- ,
which I thought was
- ,
but it does not seem to work when I check it. Now I had two different books giving two different results, such that in one of them my answer is , but in another it is given as
- , which does actually work.
I also have a second one involving the inverse power method for finding the eigenvalues of a four by four matrix, and how to programme MatLab™ into doing what we want.
We programmed a 4 × 4 matrix into MATLAB with for loops to allow it to give us the array showing the four eigenvalues of the matrix, and now we need a way to carry out the inverse power method to verify the middle two. We had :
% Powers of matrices
n = 4;

% pick a starting vector and a matrix
x0 = ones(n,1);
B = [2 1 7 6; 2 0 5 6; 7 8 8 3; 9 6 3 4]
P = transpose(B)
A = B*P
[V, D] = eig(A)
pause

% first the dominant one
x = x0;
for I = 1:10
    x = A*x; x = x/norm(x);
end
v1 = x
l1 = dot(x,A*x)/dot(x,x)
pause

% then the other end of the spectrum
x = x0;
B = A - 402.2821*eye(n);
for I = 1:40
    x = B*x; x = x/norm(x);
end
v2 = x
l2 = dot(x,A*x)/dot(x,x)

C = inv(A - 402.2821*eye(n));
for I = 1:40
    x = C*x; x = x/norm(x);
end
This gave us :
B =

     2     1     7     6
     2     0     5     6
     7     8     8     3
     9     6     3     4

P =

     2     2     7     9
     1     0     8     6
     7     5     8     3
     6     6     3     4

A =

    90    75    96    69
    75    65    72    57
    96    72   186   147
    69    57   147   142

V =

    0.6704    0.0834    0.6186    0.4012
   -0.6837   -0.3143    0.5742    0.3226
   -0.2244    0.6642   -0.2735    0.6585
    0.1810   -0.6731   -0.4614    0.5489

D =

    0.0038         0         0         0
         0   15.0123         0         0
         0         0   65.7018         0
         0         0         0  402.2821

v1 =

    0.4012
    0.3226
    0.6585
    0.5489

l1 =

  402.2821

v2 =

   -0.5461
    0.7183
   -0.2871
    0.3216

l2 =

    6.9144
And this last bit was my attempt to find one of the middle values, that is, to do it by hand and confirm it was 15.0123, and/or the other middle one 65.7018. I am not sure what we were to do to get the computer to work these out. Thank You. Chris the Russian Christopher Lilly 05:38, 28 September 2011 (UTC)
- Please give your two questions separate headerlines such that they can be answered separately. Otherwise this is going to be messy. Bo Jacoby (talk) 07:31, 28 September 2011 (UTC).
- Is it me or is there a minus missing in line 2, where the integral has been moved to the right hand side? Grandiose (me, talk, contribs) 17:23, 28 September 2011 (UTC)
Faith and Begorrah! You are right - I cannot understand how I got that wrong! This is the trouble with trying to rearrange equations. Although in this case it was a typo on my part only in writing it all down for this question on Wikipedia. This means then that
- which means
- ,
and this is what I had anyway, so the only mistake I made was a typo for the first rearrangement, but other than that, since the mistake was not repeated with the differentiation, it all worked out. This I have since solved to my satisfaction, intending to use the solution
- , which, as stated, does actually work. The only problem I have now is simply understanding why the integral becomes the function
upon differentiation, and what the differentiation of an integral really means.
As for the computer one, sorry to put two on the same thing, but there it is. We are trying to find out how to find the eigenvalues of the matrix with MATLAB, and how to use the inverse power method to do so, or that is, how to find all four. Thank You. Chris the Russian Christopher Lilly 22:36, 28 September 2011 (UTC) — Preceding unsigned comment added by Christopher1968 (talk • contribs) 22:31, 28 September 2011 (UTC)
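For what it's worth, here is the shift-and-invert idea in a NumPy sketch (Python rather than MATLAB; the shifts 20 and 60 are my choices, placed nearest the two middle eigenvalues shown in the output above):

```python
import numpy as np

# The matrix A = B*transpose(B) from the MATLAB session above
B = np.array([[2, 1, 7, 6],
              [2, 0, 5, 6],
              [7, 8, 8, 3],
              [9, 6, 3, 4]], dtype=float)
A = B @ B.T

def shifted_inverse_power(A, shift, iters=50):
    """Inverse iteration with a shift: converges to the eigenvalue of A
    closest to `shift`, returned as a Rayleigh quotient."""
    x = np.ones(A.shape[0])
    M = A - shift * np.eye(A.shape[0])
    for _ in range(iters):
        x = np.linalg.solve(M, x)   # solve a system instead of forming inv(M)
        x /= np.linalg.norm(x)
    return x @ A @ x / (x @ x)

print(shifted_inverse_power(A, 20.0))   # ~15.0123
print(shifted_inverse_power(A, 60.0))   # ~65.7018
```

The same idea works in MATLAB: replace the fixed shift 402.2821 with a shift near the eigenvalue you want, and iterate with (A - shift*eye(n)) \ x rather than inv().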
polar range
Why do we need to define a range for the polar coordinate system? Exx8 (talk) —Preceding undated comment added 10:49, 28 September 2011 (UTC).
- I think it's because a polar representation is otherwise non-unique. If the tuple for real represents a Cartesian coordinate, the set of such tuples maps to bijectively. If it is interpreted as a polar coordinate, for radius x and angle y, for all integer .--Leon (talk) 16:41, 28 September 2011 (UTC)
- I'm not sure that your notation's right there. In polar coordinates, the point is the same as the point for each whole number k. You can go around another turn and get back to where you started. (Like 6 a.m. and 6 p.m. have the same representation on an analogue clock, even though the hour hand has made a full turn extra when it's 6 p.m.) We sometimes specify a range for the angle θ but we don't have to. It's often best not to, and it doesn't require much more work. For example, the point in cartesian coordinates is given by , where k is any whole number. — Fly by Night (talk) 21:00, 28 September 2011 (UTC)
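A small numerical illustration of the non-uniqueness both replies describe (my sketch):

```python
import math

def polar_to_cartesian(r, theta):
    return (r * math.cos(theta), r * math.sin(theta))

# (r, theta) and (r, theta + 2*pi*k) name the same point for any integer k
p1 = polar_to_cartesian(2.0, math.pi / 3)
p2 = polar_to_cartesian(2.0, math.pi / 3 + 4 * math.pi)   # k = 2
assert all(abs(u - v) < 1e-12 for u, v in zip(p1, p2))

# Restricting theta (e.g. to (-pi, pi], which is what atan2 returns)
# makes the representation unique for r > 0:
x, y = p1
r, theta = math.hypot(x, y), math.atan2(y, x)
assert abs(r - 2.0) < 1e-12 and abs(theta - math.pi / 3) < 1e-12
```

So a range is a convenience for getting a unique representative, not a necessity for the coordinate system itself.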
collide with earth?
earth orbits around the sun; relative to the speed conjunction of other comets and asteroids will one near moon size ever collide with earth? by mathematics calculation; the ellipse of other large masses and earth over time how will it project over 3 light years in very high speed animation? (despite the sun become a red giant) — Preceding unsigned comment added by 207.6.211.175 (talk) 19:32, 28 September 2011 (UTC)
- That's unlikely unless the solar system enters a new phase of instability, similar to when Jupiter and Saturn reached a 2:1 orbital resonance, triggering the Late Heavy Bombardment. A possible scenario involves Mercury, whose orbital parameters aren't far from a region of instability. Given the current state of the solar system, taking into account the fact that it is chaotic on a time scale of tens of millions of years, one can show that Mercury can collide with Venus or fall into the Sun within a billion years. When that happens, the solar system will reconfigure itself, which can e.g. involve Mars being ejected from the solar system, or possibly other catastrophic events. Count Iblis (talk) 20:09, 28 September 2011 (UTC)
- Wait, the solar system is chaotic already within tens of millions of years? Then what's the deal with everyone trying to predict whether when the Sun turns to a red giant (much later) it will eat Earth or not? If the orbits are so chaotic, the problem is not the Sun, we don't even know where Earth will be at that time. – b_jonas 10:17, 29 September 2011 (UTC)
- The general features of the Earth's orbit are fairly stable. It's chaotic inasmuch as we can't really determine where in its orbit the Earth will be in 10 million years, but we can be reasonably sure its orbit won't be significantly different. The only exception is close interactions with other large bodies, like Count Iblis describes, but I don't think they are considered particularly likely. --Tango (talk) 12:00, 3 October 2011 (UTC)
- That's what I thought, but then Count Iblis mentioned a possibility of Mars being ejected from the Solar System, which sounds rather scary. – b_jonas 20:06, 4 October 2011 (UTC)
epsilon-delta
hey, it's me again. So today I was doing some volunteer tutoring for younger students. One student asked me a question about the derivative of at 0: he asked since theoretically it doesn't exist, why does his graphing calculator give a value for the derivative there. I explained that his calculator doesn't differentiate the way we do, but calculates a bunch of difference quotients Δy/Δx for very small Δx; I was wondering idly how I would turn this into a rigorous (ε,δ) argument; this is what I have:
Suppose ε is the smallest value >0 that your calculator can handle, and also suppose this value is the same when it calculates your Δx and Δy. Then your calculator is saying that for every ε'≥ε, as long as , for some δ≥ε (I know this isn't how a calculator actually thinks), but the calculator can't choose any epsilon less than ε, so as long as it's true for ε',δ≥ε, the calculator says it exists. I know this is roughly right but it's not really clear and not really specific. Can someone please help me refine this argument (keeping to epsilon's and delta's though I'm sure there are plenty more informal you could do it)? — Preceding unsigned comment added by 24.92.85.35 (talk) 22:51, 28 September 2011 (UTC)
- I imagine the calculator isn't doing anything of the sort but is using an estimate for the derivative from a formula in numerical analysis. A typical method to estimate f′(a) would be to find (f(a+e)-f(a-e))/2e, which would have error, assuming f is reasonably well behaved, proportional to e². The fact that f is not well behaved is, I would guess, why the calculator is giving an answer instead of a divide by zero error. What is actually happening though is anybody's guess, since the calculator is running proprietary software written by programmers with confidentiality agreements. Symbolic algebra systems exist that can do this kind of thing exactly and it probably won't be too long before calculators include such features, assuming they don't already.--RDBury (talk) 04:02, 29 September 2011 (UTC)
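The function in the original question was lost in transcription; assuming it was something like f(x) = |x| (my assumption), the symmetric quotient RDBury describes shows how a calculator can happily report a "derivative" at a point where none exists:

```python
def central_diff(f, a, h=1e-5):
    """Symmetric difference quotient (f(a+h) - f(a-h)) / 2h."""
    return (f(a + h) - f(a - h)) / (2 * h)

# f(x) = |x| has no derivative at 0, but the symmetric quotient is
# exactly 0 for every h, since (|h| - |-h|) / 2h = 0.
assert central_diff(abs, 0.0) == 0.0

# One-sided quotients expose the kink: slope +1 from the right, -1 from the left
h = 1e-5
assert (abs(0.0 + h) - abs(0.0)) / h == 1.0
assert (abs(0.0 - h) - abs(0.0)) / -h == -1.0
```

So the machine is averaging the two one-sided slopes, not taking a limit; for |x| that average is 0 no matter how small the step.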
= October 1 =
== Solving cos(ax) = cos(bx) generally ==
Hi,
Is there a formula for solving cos(ax) = cos(bx) generally? WolframAlpha seemed to be able to come up with pretty exact results. See http://www.wolframalpha.com/input/?i=cos%289x%29%3Dcos%287x%29. 76.104.20.125 (talk) 05:09, 1 October 2011 (UTC)
- The equation is equivalent to ax = 2πn ± bx. Looie496 (talk) 05:46, 1 October 2011 (UTC)
- Assuming that n is an integer then the ± is redundant here because cosine is an even function, i.e. cos(x) = cos(–x) for all x. — Fly by Night (talk) 17:39, 3 October 2011 (UTC)
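Solving Looie496's equation ax = 2πn ± bx for x gives the two solution families x = 2πn/(a − b) and x = 2πn/(a + b). A quick numerical check using the a = 9, b = 7 example from the question:

```python
import math

# Check that x = 2*pi*n/(a - b) and x = 2*pi*n/(a + b) both solve
# cos(a*x) = cos(b*x), for the a = 9, b = 7 example above.
a, b = 9, 7
for n in range(-3, 4):
    for denom in (a - b, a + b):
        x = 2 * math.pi * n / denom
        assert math.isclose(math.cos(a * x), math.cos(b * x), abs_tol=1e-9)
print("both families solve cos(9x) = cos(7x)")
```

For a = 9, b = 7 these families are x = πn and x = πn/8, matching the exact results WolframAlpha reports.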
== Moving an article from my private place to the public ==
I've written an article that appears when I link on the tab with my user name (Ldecola) and want to make it public. — Preceding unsigned comment added by Ldecola (talk • contribs) 20:19, 1 October 2011 (UTC)
- That's not hard to do, but can I suggest that you work on it a bit more before moving it to article space? At the moment all it does is state a few vague generalities, and even those are not verifiable, because the only reference given is an entire book, with no indication of where in the book those facts come from (if they come from the book at all). I fear that in its current state, somebody would be likely to nominate it for deletion -- you can avoid that by fleshing it out a bit more. Looie496 (talk) 03:38, 2 October 2011 (UTC)
- You could also talk to the nice people at WP:WPM |Aacehm| 18:13, 2 October 2011 (UTC)
= October 2 =
== Scrabble probabilities ==
I was playing a game of regulation rules Scrabble earlier today, and was led to contemplate the question: What is the likelihood that throughout the game I will possess every letter of the alphabet at least once? Clearly this depends upon my play strategy, my knowledge of the English language, and the English language itself. For simplicity, assume I play an average of L (1 ≤ L ≤ 7) letters per turn, and that I'm a reasonably good player (at least half of the letters I play each turn score above the average of the points available to me from the sum of the tiles). Also assume my opponent plays exactly as I do. There's a lot of variables here, and I'm not sure if my stipulations are reasonable or complete; please fill in the blanks if I've missed something. But the essential question is what intrigues me, and perhaps it's actually too complex for meaningful computation. Thanks for the input. —Anonymous Dissident 10:37, 3 October 2011 (UTC)
- By symmetry, I suppose it stands to reason that I'd have 50 tiles throughout the course of the game. That probably makes things significantly simpler. —Anonymous Dissident 10:49, 3 October 2011 (UTC)
- It looks like a form of Hypergeometric distribution. But I'm having problems wrapping my enfeebled brain around the maths :( --Tagishsimon (talk) 11:12, 3 October 2011 (UTC)
- If you assume both players are equal, then it is natural to make the approximation that you both have an equal chance of encountering any tile, and that the odds of doing so are solely determined by the letter frequency. You each get 50 draws. Based on Scrabble letter distribution and a quick Monte Carlo I get that probability of encountering every letter (not including blanks) is about 1 occurrence for every 1700 games. What really hurts your odds are the five singleton tiles. You'll miss at least one of them more than 96% of the time. Dragons flight (talk) 11:17, 3 October 2011 (UTC)
- Incidentally, if you let blanks count towards missing letters, you can "encounter" every letter in one game out of every 75. Dragons flight (talk) 11:25, 3 October 2011 (UTC)
- Umm. Having found a hypergeometric calculator, I get the probability that you'll draw all five singletons as 0.028. IINAM, &c. --Tagishsimon (talk) 11:26, 3 October 2011 (UTC)
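A Monte Carlo check along the lines Dragons flight describes: 50 draws from the standard 100-tile English bag, with blanks not counting as letters. The tile counts below are the standard English Scrabble distribution; the function name is mine.

```python
import random
import string

# Standard English Scrabble tile counts (98 letters + 2 blanks = 100 tiles).
counts = {'A': 9, 'B': 2, 'C': 2, 'D': 4, 'E': 12, 'F': 2, 'G': 3, 'H': 2,
          'I': 9, 'J': 1, 'K': 1, 'L': 4, 'M': 2, 'N': 6, 'O': 8, 'P': 2,
          'Q': 1, 'R': 6, 'S': 4, 'T': 6, 'U': 4, 'V': 2, 'W': 2, 'X': 1,
          'Y': 2, 'Z': 1, '?': 2}
bag = [letter for letter, k in counts.items() for _ in range(k)]
assert len(bag) == 100

def saw_every_letter(trials=20000, seed=0):
    """Fraction of simulated games in which one player's 50 random
    tiles include all 26 letters (blanks don't count as letters)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        rack = rng.sample(bag, 50)
        if set(rack) >= set(string.ascii_uppercase):
            hits += 1
    return hits / trials

p = saw_every_letter()
print(p)  # roughly 1/1700, in line with the estimate above
```

The five singleton tiles (J, K, Q, X, Z) dominate the result, consistent with the 0.028 hypergeometric figure for drawing all five of them.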
== Hypergeometric distribution wtf ==
As a follow-up to the scrabble question above, would someone be good enough to explain how this formula from Hypergeometric distribution works - specifically, the step from the bracketed numbers to the 5.18... / 1027... thanks --Tagishsimon (talk) 11:32, 3 October 2011 (UTC)
- the dot between 5 and 8145060 indicates they are multiplied, it's not a decimal point. Apart from that it's just substituting in the values for the binomial coefficients and doing the sum.--JohnBlackburnedeeds 12:07, 3 October 2011 (UTC)
- Umm, sorry, I'm still not getting it. How does the bracketed (50 over 10) number equal 1027... it's the sum that I cannot do :( --Tagishsimon (talk) 12:18, 3 October 2011 (UTC)
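Assuming the formula in question is the article's worked example P(X = 4) = C(5,4)·C(45,6) / C(50,10) — the quoted numbers 5·8145060 and 1027... match C(5,4) = 5, C(45,6) = 8145060, and C(50,10) = 10272278170 — the substitution can be checked directly:

```python
from math import comb

# Each bracketed pair is a binomial coefficient: (45 over 6) means
# C(45, 6), the number of ways to choose 6 items from 45.
print(comb(5, 4))    # 5
print(comb(45, 6))   # 8145060
print(comb(50, 10))  # 10272278170

# Substituting into the hypergeometric probability:
p = comb(5, 4) * comb(45, 6) / comb(50, 10)
print(p)  # about 0.003964
```

So the step JohnBlackburne describes is just evaluating each binomial coefficient and then doing one multiplication and one division.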
== Why do we have odd and even numbers ==
It seems obvious to me that some properties of numbers in maths must exist for the system to work. 10 must be larger than 6 for our number system to have meaning, and I've heard somewhere that certain things are impossible without the concept of the number 0. But why were odd and even numbers...invented? And at the very least, why are they still taught? I don't see what the concept gains us, or what it does at all. Thanks. Prokhorovka (talk) 14:39, 3 October 2011 (UTC)
- Odd and even properties are used a lot. For example, I was working on a method in a paper that had a step in which, if the width of the base was even, it had to be divided by two; if it was odd, it shouldn't be divided. It could have stated "if the width of the base is evenly divisible by two", but stating "if the width of the base is even" is shorter. -- kainaw™ 14:45, 3 October 2011 (UTC)
- (edit conflict) It's because lots of things work either one way or another, depending on whether some number is even or odd. For example, (−1)^n is 1 if n is even, or −1 if n is odd. Negative numbers have real odd roots (cube roots, fifth roots, seventh roots, and so on), but do not have real even roots (square roots, fourth roots, sixth roots, and so on). A polynomial always has a real zero if its degree is odd, but might not if its degree is even. The graph of a power function is symmetric about the y-axis if the power is an even integer, or symmetric about the origin if the power is an odd integer. A few examples from graph theory: A cycle graph has a perfect matching if it has an even number of vertices, but not if it has an odd number of vertices. A graph is bipartite if and only if it contains no cycles of odd length (even-length cycles don't matter). A connected graph has an Eulerian circuit if and only if the degree of every vertex is even. And so on. This dichotomy between even and odd is ubiquitous in mathematics. See Parity (mathematics) for more. —Bkell (talk) 14:58, 3 October 2011 (UTC)
- For a more "real-world" example, in many parts of the United States at least, houses with odd numbers are on one side of the street, and houses with even numbers are on the other side, so if you understand the difference between odd and even numbers you know which side of the street to look at when searching for an address. A similar scheme is often used for office numbers in large buildings, with even-numbered offices on one side of a corridor and odd-numbered offices on the other side. —Bkell (talk) 15:07, 3 October 2011 (UTC)
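A couple of Bkell's examples, checked mechanically (a minimal sketch):

```python
import math

# n % 2 gives the parity: 0 for even, 1 for odd (for n >= 0).
for n in range(6):
    sign = (-1) ** n                    # 1 when n is even, -1 when n is odd
    side = "even" if n % 2 == 0 else "odd"
    print(n, sign, side)

# Negative numbers have real odd roots: the real cube root of -8 is -2.
cube_root = math.copysign(abs(-8.0) ** (1 / 3), -8.0)
print(cube_root)  # close to -2.0
```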
== How to calculate Deming regression ==
Can I have the expressions used to determine the "a", "b", and the correlation coefficient of a data set using Deming regression? --Melab±1 ☎ 19:17, 3 October 2011 (UTC)
- The formulas, except for the correlation coefficient, can be found in our article on Deming regression. Looie496 (talk) 21:52, 3 October 2011 (UTC)
- It is not clear to me which ones are which. --Melab±1 ☎ 22:22, 3 October 2011 (UTC)
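To make the correspondence concrete, here is a sketch following the slope and intercept formulas in the Deming regression article (δ is the assumed ratio of the y-error variance to the x-error variance; δ = 1 gives orthogonal regression). As for the correlation coefficient: it is not produced by the regression itself; the ordinary Pearson r of the data is what is usually reported alongside it. The function names and the sample data are mine.

```python
import math

def deming(xs, ys, delta=1.0):
    """Deming regression: returns (intercept a, slope b) for y ≈ a + b*x,
    following the formulas in the Wikipedia article. delta is the ratio
    of the error variances var(y errors) / var(x errors)."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs) / (n - 1)
    syy = sum((y - ybar) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / (n - 1)
    b = (syy - delta * sxx
         + math.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    a = ybar - b * xbar
    return a, b

def pearson_r(xs, ys):
    """Ordinary Pearson correlation coefficient of the data."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - xbar) ** 2 for x in xs)
                    * sum((y - ybar) ** 2 for y in ys))
    return num / den

# Noisy data lying near y = 1 + 2x (made-up example):
xs = [0, 1, 2, 3, 4]
ys = [1.0, 3.1, 4.9, 7.2, 8.9]
print(deming(xs, ys))     # intercept near 1, slope near 2
print(pearson_r(xs, ys))  # near 1 for nearly collinear data
```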
== Given 2 lines that cross in a multidimensional space, how to find the angle? ==
Hi, I have 2 points in a 50-dimensional space. If I have the straight lines that join those points with the origin, is there a way to find the angle between them? — Preceding unsigned comment added by 190.226.26.168 (talk) 20:58, 3 October 2011 (UTC)
- The dot product, which works in all dimensions not just two or three. See in particular the definition and geometric interpretation.--JohnBlackburnedeeds 21:15, 3 October 2011 (UTC)
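JohnBlackburne's dot-product recipe, cos θ = (u·v)/(|u||v|), works unchanged in 50 dimensions. A small sketch with made-up 50-dimensional points:

```python
import math

def angle_between(u, v):
    """Angle in radians between the lines joining u and v to the origin,
    via cos(theta) = (u . v) / (|u| |v|); works in any dimension."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return math.acos(dot / (norm_u * norm_v))

# Two 50-dimensional points (example data, not from the question):
p = [1.0] * 50
q = [1.0] * 25 + [0.0] * 25
print(math.degrees(angle_between(p, q)))  # 45 degrees
```

For nearly parallel vectors the ratio can drift just outside [−1, 1] from rounding, so clamping it before calling acos is a sensible extra safeguard.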
== Lebesgue integrals ==
Hello math buddies! I am interested in learning how to do Lebesgue integrals as an independent study project to extend my (fairly strong and rigorous) calculus knowledge. Unfortunately I have a very woeful set theory background, so I need something that builds up to measure theory (from which the Lebesgue integral is constructed) assuming little or no prior knowledge of set theory, i.e. by covering the set theory needed. I am not looking for something that will take me really deep into set theory, because I anticipate remediating my lack of set theory in the near future; I'm looking for just enough so I can do Lebesgue. Can anyone recommend a work or works, again preferably not unnecessarily heavy on the other aspects of set theory or measure theory, that has a nice amount of rigour but is still pretty relaxed, so I can get through a lot of material quickly? Thanks a bunch :) 187.115.202.178 (talk) 21:20, 3 October 2011 (UTC)