Misplaced Pages:Reference desk/Mathematics

Revision as of 18:43, 19 December 2013

Welcome to the mathematics section of the Misplaced Pages reference desk.

Want a faster answer?

Main page: Help searching Misplaced Pages

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.


Ready? Ask a new question!


How do I answer a question?

Main page: Misplaced Pages:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.

December 13

Number of Hamiltonian paths question

Assume a graph with N vertices and 3*N undirected edges. As N grows large, what is the general configuration of the graph that has the most Hamiltonian paths, and what is the general configuration that has the fewest (but a nonzero number of) Hamiltonian paths? (And no, this isn't homework; it comes from musings on how an Alternate History would have US state boundaries better organized to give more Hamiltonian paths among the states, but in my question above the graph doesn't have to be planar.)Naraht (talk) 22:52, 13 December 2013 (UTC)

Just split California in half --DHeyward (talk) 03:22, 14 December 2013 (UTC)
Why would splitting California in half be more effective in increasing the number of Hamiltonian paths than say Tennessee?Naraht (talk) 05:39, 15 December 2013 (UTC)
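
For anyone who wants to experiment with small cases of this question, Hamiltonian paths can be counted exactly with a standard bitmask dynamic program. A minimal Python sketch (my own addition, not from the thread; only practical for roughly n ≤ 20):

```python
def count_hamiltonian_paths(n, edges):
    """Count undirected Hamiltonian paths with a bitmask DP in O(2^n * n^2)."""
    adj = [0] * n
    for u, v in edges:
        adj[u] |= 1 << v
        adj[v] |= 1 << u
    # dp[mask][v] = number of directed paths visiting exactly `mask`, ending at v
    dp = [[0] * n for _ in range(1 << n)]
    for v in range(n):
        dp[1 << v][v] = 1
    for mask in range(1 << n):
        for v in range(n):
            if dp[mask][v]:
                free = adj[v] & ~mask
                while free:
                    w = (free & -free).bit_length() - 1
                    dp[mask | (1 << w)][w] += dp[mask][v]
                    free &= free - 1
    return sum(dp[(1 << n) - 1]) // 2  # each path is found once per direction

# Sanity check: a 4-cycle has exactly 4 Hamiltonian paths.
print(count_hamiltonian_paths(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # -> 4
```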


December 14

(-2) to the x

[Plot of (-2)^x]

After learning about exponential functions and their graphs, I was curious how a function like (-2)^x would look, considering that the function is undefined when x is an even fraction. I tried graphing it in different graphing calculators online, but the only one that gave me an actual graph was this. Is this Wolfram Alpha graph accurate, or would a more accurate graph only be certain points that can exist in this function, without a smooth curve? Also, what would the domain for this kind of graph be, considering it can only be natural numbers and odd fractions? Thanks. 50.101.203.177 (talk) 05:00, 14 December 2013 (UTC)

I think it is accurate, but realize that the value of the function is a complex number. Bubba73 05:22, 14 December 2013 (UTC)

You can plot the graph; here is a plot which I have generated myself of -2^x from x=1.001 to x=2.2:

Perhaps it is easier to look at it in table format:

  n     (-2)^n
  1.0   -2.0
  1.1   -2.03863 - 0.662392 i
  1.2   -1.85863 - 1.35038 i
  1.3   -1.4473 - 1.99203 i
  1.4   -0.815501 - 2.50985 i
  1.5   -5.19574*10^-16 - 2.82843 i
  1.6    0.936764 - 2.88306 i
  1.7    1.90972 - 2.6285 i
  1.8    2.81716 - 2.04679 i
  1.9    3.54947 - 1.15329 i
  2.0    4.0
  2.1    4.07727 + 1.32478 i
  2.2    3.71727 + 2.70075 i
Ohanian (talk) 03:15, 15 December 2013 (UTC)
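
For reference, Ohanian's table uses the principal branch of the complex logarithm; a short Python check (my addition, not from the thread) reproduces its values:

```python
import cmath

# Principal branch: (-2)**x = exp(x * ln(-2)), where ln(-2) = ln 2 + i*pi.
for x in [1.0, 1.1, 1.2, 1.5, 2.0]:
    z = cmath.exp(x * cmath.log(-2))
    print(f"{x:.1f}   {z.real:+.5f} {z.imag:+.6f}i")
```

Running this gives, e.g., -2.03863 - 0.662392 i at x = 1.1, matching the table above.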

$(-2)^x = (2\cdot(-1))^x = 2^x (-1)^x = 2^x (e^{i\pi})^x = 2^x e^{i\pi x}$. Bo Jacoby (talk) 08:30, 17 December 2013 (UTC).

Since you mention the even fractions, you're presumably referring to the elementary (non-complex) definition $(-2)^{x}=(-2)^{a/b}=(-1)^{a}2^{a/b}$, defined only for b odd (when a and b are in lowest terms and $b>0$). You can see that it differs from $2^{x}$ only in the sign, which fluctuates at arbitrarily high frequencies. The graph (which appears continuous because fractions with odd denominator are dense in the reals) is therefore both $2^{x}$ and its negative.
In a complex context, this selection of powers is highly arbitrary: we define $(-2)^{x}=e^{x\ln(-2)}$, which is in general multivalued because log is. Each point in the "real power" graph corresponds to a different n in $\ln(-2)=\ln 2+(2n+1)i\pi$. --Tardis (talk) 15:18, 19 December 2013 (UTC)

easiest way to answer by heart how many years there are between 1944 and 2014

What is the easiest way to answer by heart how many years there are between 1944 and 2014? For example, I meet someone and he tells me that he was born in 1944, and I want to know his age; what is the easiest way to know that? (My way today is not easy. I do this: 2013 - 1944 = 69, but it's not easy when I do it in my head, so I'm looking for other methods.) Thank you. 213.57.113.25 (talk) 09:54, 14 December 2013 (UTC)

I'm not quite sure what you're asking, but here are two possibilities:
1) How do you quickly subtract 1944 from 2013? The way I do it is I subtract 1944 from 2000 to get 56, then I add 13 to that to get 69.
2) How do you calculate ages given the two years, if the birth day isn't known? Well, find the difference in the years, then you might have to subtract a year if they haven't hit their birthday yet this year (this becomes less likely as it becomes later in the current year, so that by December 31, 2013, you can be certain that everyone born in 1944 is now 69). StuRat (talk) 16:42, 14 December 2013 (UTC)
Except David Briggs, Ene Ergma, Paolo Serpieri and Phyllis Frelich. Tevildo (talk) 11:57, 15 December 2013 (UTC)
Briggs is dead, I'm afraid. Tevildo (talk) 12:00, 15 December 2013 (UTC)
Even for people born on leap day, my method still works to determine their age, in years. You can argue it doesn't calculate their number of birthdays correctly, though, if you claim they only have a birthday every 4 years. StuRat (talk) 13:04, 17 December 2013 (UTC)

An orbit that any point on the planet gets the same amount of radiation

Hi,
I've asked this question in the science desk before,
they advised me to look up the answer here. Anyway, my question is whether there is an orbit (of a planet) such that, when you sum up the amount of radiation from its sun at each point on the planet, you get the same result everywhere.
Somebody mentioned that it might be possible with wobbling of the axis of rotation of the planet, so I would like to know: is it possible for this to exist without any external force from moons or planets around our imaginary planet?
Thank you.
Exx8 (talk) 10:27, 14 December 2013 (UTC)

It depends on what you mean by the amount of radiation. With a circular orbit every point on the planet would have 50% day and 50% night when you average over a full orbit. (Actually slightly more night than day if you factor in parallax, but I'm assuming you can ignore that.) But the solar radiation per unit area is proportional to the sine of the angle of elevation; the poles are colder than the tropics because when the sun is visible there it's never far from the horizon. If you use that as your definition, then I'm guessing that the amount would at least be close to equal if the axis of rotation lay in the orbital plane, so the sun would be directly overhead each pole once a year. But verifying that seems like more computation than I'm willing to do. --RDBury (talk) 19:39, 14 December 2013 (UTC)
Surely slightly more day than night due to the sun having a greater diameter than the planet. But very small. -- SGBailey (talk) 06:38, 16 December 2013 (UTC)
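
The computation RDBury describes can at least be sketched numerically. A Python outline (my own addition, not from the thread), assuming a circular orbit, no atmosphere, and the standard daily-insolation formula; it averages daily mean insolation over one orbit for a chosen latitude and axial tilt:

```python
import numpy as np

S0 = 1.0  # solar constant in arbitrary units

def annual_mean_insolation(lat_deg, obliquity_deg, steps=2000):
    """Average the standard daily-mean insolation
    Q = (S0/pi) * (h0*sin(lat)*sin(dec) + cos(lat)*cos(dec)*sin(h0))
    over one circular orbit, where h0 is the sunset hour angle."""
    lat = np.radians(lat_deg)
    eps = np.radians(obliquity_deg)
    t = np.linspace(0.0, 2.0 * np.pi, steps, endpoint=False)  # orbital position
    dec = np.arcsin(np.sin(eps) * np.sin(t))                  # solar declination
    h0 = np.arccos(np.clip(-np.tan(lat) * np.tan(dec), -1.0, 1.0))
    q = (S0 / np.pi) * (h0 * np.sin(lat) * np.sin(dec)
                        + np.cos(lat) * np.cos(dec) * np.sin(h0))
    return q.mean()

# An axis lying in the orbital plane means an obliquity of 90 degrees.
for lat in [0, 30, 60, 89]:
    print(lat, round(annual_mean_insolation(lat, 90.0), 4))
```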

is the risk of ruin after infinite time always 1?

So, let's say you have "risk of ruin" defined as the percent chance you will go to 0, given iterative bets with expected value (EV) of e and principal b (balance), in discrete bets of 1 (risk 1: lose 1 or gain e), though fractions are allowed in the results, and the bets are decided by weighted heads-or-tails.

Let's say e is 1.5 -- you could start with $100 and then end up with $99 or $101.5, and so on.

The risk of ruin is the chance that you go to zero before you climb out of the 'variance pit' that could steal all your money. For example, if you start with $1 then the risk of ruin is 50%+, since if it comes up tails (if that is the losing value) then you immediately die. Then if you live you have $2.5; however, 3 tails in a row still kill you, etc.

Now here is my question. Isn't the following a PROOF that risk of ruin always equals 1 in the long term? Even if you start with $1,000,000, only have to bet $1 each turn, and have a payout of $1000 if you win versus a $1 loss if you lose?

This is my argument. Suppose that you have made it to turn "t". You now have balance b. Then, immediately, there is a 1/2^ceiling(b) chance of you losing all your money (a long enough run of losses). If you live, you can just repeat the same formula. Therefore, there are infinite chances to lose all your money.

- Given infinite nonzero chances of a bad event happening - the event will happen sooner or later!!!

In effect I'm modeling the "results" of the 1/2^ceiling(b) event happening as a "tails" and of it not happening as a "heads". So even if the coin is weighted at 1-1/2^(b+1) in favor of heads (hugely in favor of heads), the tails still ALWAYS eventually has to come up in the infinite series. The chance that tails comes up "sooner or later" is 1 - isn't it?

What do you think of that argument? Is it correct? 212.96.61.236 (talk) 13:26, 14 December 2013 (UTC)

No, the probability of ruin is not always equal to one. You can see this with the following simpler example than what you propose. Suppose you start a game with 1 dollar and each round you toss a coin to see if you win 2 dollars or lose 1 dollar. Let the probability of ruin after n rounds be p(n). (So that p(1)=1/2.) By conditioning on the first toss, we see that p(n) satisfies the recurrence relation:
$p(n+1) = \frac{1 + p(n)^3}{2}.$
We want to compute $\lim_{n\to\infty} p(n)$. We claim that in fact this limit lies within the interval $[1/2, 3/4]$. Indeed, consider the iteration $f(x) = (1+x^3)/2$. We have $f(1/2) > 1/2$ and $f(3/4) < 3/4$, and on this interval $|f'(x)| < 1$. So f is a strict contraction of the interval $[1/2, 3/4]$, and so its iterates tend to a fixed point of the iteration. We can find this fixed point explicitly by solving the equation $f(x) = x$ in $[1/2, 3/4]$: $x = \lim_{n\to\infty} p(n) = (-1+\sqrt{5})/2 \approx 0.62$. So the probability of ruin in this game (if played infinitely) is only $\approx 0.62$. Sławomir Biały (talk) 15:51, 14 December 2013 (UTC)
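
As a quick sanity check on that 0.62 (an editorial addition, not part of the thread), a Monte Carlo simulation of the same game; truncating at a large number of rounds slightly underestimates the true ruin probability, since a few surviving paths would still bust later:

```python
import random

def ruined(start=1, max_rounds=10_000):
    """One play of the game: each round win 2 dollars or lose 1, fair coin."""
    balance = start
    for _ in range(max_rounds):
        balance += 2 if random.random() < 0.5 else -1
        if balance <= 0:
            return True
    return False  # treat long-term survivors as never ruined

trials = 100_000
print(sum(ruined() for _ in range(trials)) / trials)  # about (sqrt(5)-1)/2 = 0.618
```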
But that's crazy. It means that, given infinite chances to take a 1/2^t shot, there's a good chance you'll miss all infinitely many of them - in fact in your example there's a greater than 1/3 chance you'll miss all infinite of these shots! How can that be true? Even if the odds are vanishingly thin, doesn't the definition of 'infinite' mean that it is 'impossible' to miss all of them? It strikes me that my argument makes a lot more sense. How can you not hit a vanishingly small target given infinite shots? Where does my intuition lead me astray? 212.96.61.236 (talk) 17:04, 14 December 2013 (UTC)
Because t is not constant. If you've made it to balance b, it would take a string of $\log b$ losses to ruin you. If you avoid ruin, you now have a larger balance, which requires an even longer string to wipe out.--80.109.106.3 (talk) 17:18, 14 December 2013 (UTC)
I don't follow your recurrence relation, but I have an alternate argument. Let q(b) be the probability of eventual ruin when beginning with balance b. Arguing from the law of the iterated logarithm, q(b) goes to 0 as b goes to infinity. Note that $q(b) = q(1)^b$, so in particular q(1) must be less than 1. Then q(1) = .5 + .5q(3), which becomes $q(1)^3 - 2q(1) + 1 = 0$. Since $q(1) < 1$, we can divide by $(q(1) - 1)$, which gets us $q(1)^2 + q(1) - 1 = 0$, whose only positive root is $(\sqrt{5}-1)/2 \approx .62$.--80.109.106.3 (talk) 17:18, 14 December 2013 (UTC)
You've assumed that q(1)<1, which is specifically what needs to be shown. My recurrence follows from the following. Condition on the outcome of the first toss. The probability of losing the dollar is 1/2. Otherwise, with probability 1/2, you will have three dollars, and the probability that you lose each of these three dollars in n rounds or fewer is p(n)^3. Losses of the individual dollars are independent events. Sławomir Biały (talk) 19:38, 14 December 2013 (UTC)
I did not assume that. I said it followed from the law of the iterated logarithm, which I later fleshed out below.
I don't see why those would be independent events. In fact, consider the probability of losing 3 dollars in 1 round; it's 0, not p(1)^3.--80.109.80.78 (talk) 20:41, 14 December 2013 (UTC)
Yes, you're right. They aren't independent. I wrote that without thinking it through. Sławomir Biały (talk) 02:02, 15 December 2013 (UTC)
Here's a more general argument that if your expected value for one flip is positive, the probability of ruin is less than 1. Let $\mu$ be the expected value of a single flip, and $\sigma$ be the standard deviation. From the law of the iterated logarithm (using the notation defined on that page), we can conclude
$\limsup_{n\to\infty} \frac{|S_n - n\mu|}{\sqrt{n \log\log n}} = \sigma\sqrt{2}$ almost surely.
So almost surely, $|S_n - n\mu| < n^{2/3}$ for all sufficiently large $n$. Let $P_k$ be the probability that $|S_n - n\mu| < n^{2/3}$ for all $n \ge k$. By continuity of measure, $P_k > 0$ for all sufficiently large $k$. Choose $k$ with $P_k$ positive and such that $n^{2/3} < n\mu$ for all $n \ge k$. Consider the situation where our starting balance $b$ is so large that it is impossible to bust within the first $k$ flips. Then we will have probability at least $P_k$ of never busting at all, since busting at time $n$ would require $n\mu - S_n \ge n\mu > n^{2/3}$.
Now consider the game where we start with 1 dollar. There is some positive probability $q$ of reaching balance $b$, after which our chance of never busting is at least $P_k$. So the chance of never busting from 1 dollar is at least $qP_k > 0$.--80.109.106.3 (talk) 17:55, 14 December 2013 (UTC)

Could someone elaborate on the outcome of the example by Sławomir Biały ("So the probability of ruin in this game (if played infinitely) is only 0.62") by putting it into layman's terms? I'm looking for this kind of statement: if you play long enough, out of 100 candidates, 62 will lose and 38 will win. The 38 must be wrong: only losers are declared losers; the rest must play again. No one will ever be able to say he's a winner. The chance of winning a game where you cannot become a winner is 0, and I wonder if that doesn't actually mean that 100% must lose. Probably taking "100 candidates" as an example doesn't work when infinity is involved, but if that's the case I don't understand what 0.62 actually means. Joepnl (talk) 00:23, 15 December 2013 (UTC)

You can have winners; a winner is someone who is never ruined. But okay, here's the statement: if 100 people play the game, you expect 62 to eventually run out of money, while the remaining 38 will play forever.--80.109.80.78 (talk) 03:52, 15 December 2013 (UTC)
Thanks. It's hard to stay away from thinking "so just wait for 62 to be ruined, now the 38 left must have somehow 0 chance of being ruined ever" but I think I get it :) Joepnl (talk) 16:58, 15 December 2013 (UTC)
Keep in mind that "expect" is a technical term. It doesn't mean it will necessarily happen; you might be waiting forever for the 62nd person to be ruined.--80.109.80.78 (talk) 20:07, 15 December 2013 (UTC)


December 15

Mathematics of lotteries like Canada's Lotto Max

I have a question about "beating the odds" in lottery. Please note this is strictly a mathematical curiosity, and I'm not trying to get rich by gambling. I'm going to be using Canada's Lotto Max lottery as an example, but this idea should apply to most lotteries. In Lotto Max, 45% of the ticket sales is dedicated to prize money. This number is constant in the long term, and in weeks where the jackpot(s) are not won, they are "rolled over" and added to the next week's prize money. The main jackpot is capped, and additional smaller jackpots are created alongside it if this goes on for multiple weeks. All of the jackpot money is inevitably won, and the lottery goes back to its standard jackpot.

Beating the system: On a national, long-term average of all gamblers, every gambler will get back 45% of the money he gambled away. The "house", or the provincial lottery corporations and the network of dealers, will keep 55% of the gamblers' money. So let's just say you were an investor willing to buy extremely large numbers of tickets as an investment. Given a large enough sample size (i.e. you buy millions and millions of tickets over a long period), you can expect to lose 55% of your money.

But this only applies to distributing your "investment" more or less evenly and blindly. The fact is, many weeks have no jackpot winners. In those weeks, the average gambler gets back maybe 5% of their money. (This number can be calculated as the total of all small prizes paid out, as a percentage of the ticket revenue that week.) The weeks immediately after that are, statistically, over the long term, certain to pay back more than 45%, to keep the overall average at 45%. So if this "investor" decides to never buy tickets on "standard" jackpot weeks, and always buys his tickets when the jackpots are inflated, he would (again, given an extremely large sample size) statistically expect to be paid back more than 45% of his money, thereby beating the odds that a non-selective gambler has.

This effect can be greatly increased by only buying tickets when the jackpot has not been won for several consecutive weeks, is maxed out, and there are many additional smaller jackpots. For example, this week, as of this writing, the jackpot will be 50 million, plus 50 x 1 million jackpots, for a total of 100 million, compared with the standard 10 million jackpot.

This "selective" buying can clearly improve your odds over the 45% payout, but I'm wondering if this will every push them beyond 100%. What I'm trying to figure out is if it's possible that this "selective" buying can, ignoring chance (which over an extremely large sample size is actually predictable, for example flipping a coin a million times will almost inevitably result in heads being 49-51%) can be used to actually gain a simple predictable edge over not just the other gamblers (earning more than 45%) but over the house (earning more than 100%).

Again, I can't stress the issue of sample size enough. This is basically considering that you are a billionaire willing to literally buy a billion tickets or more over several years of selective buying. Remember that the odds of winning this lottery are something like 1 in so many million, not billion, so by buying a billion tickets, you are unlikely NOT to win several times. It's just a matter of winning a little more or a little less than the statistical average you expect to win. The more you spend, the less it will vary from that number. In fact, taking this thinking to a further (impossible) level, if one gambler spent not billions, but trillions of dollars, then he would be able to achieve, with almost complete certainty, a number close to that "average" (again, think flipping a coin a million times).

Okay, that's the whole question! Now please give me some solid, foolproof, simple explanation why this WOULDN'T work, because I don't see any billionaires beating the system! Thanks for your input. — Preceding unsigned comment added by 2001:4C28:194:520:5E26:AFF:FEFE:6AF8 (talk) 19:59, 15 December 2013 (UTC)

Note that if a week's jackpot incorporates rollover from the previous week, the formula for the jackpot is A + .45Cn, where A is the rollover, C is the cost of a ticket and n is the number of tickets sold for that week. Meanwhile, the probability of any given ticket winning is 1/n. Thus the expected value of a ticket is $\frac{A + .45Cn}{n} = \frac{A}{n} + .45C$, which decreases as n gets larger. If the expected value of a ticket is more than its cost, a rational gazillionaire will buy tickets; but doing this drives down the value of a ticket. Thus we would expect the value of a ticket to reach equilibrium at its cost -- if it were above the cost, people would respond by buying more, which would drive the value back down. In fact, people are willing to buy tickets even when their expected value is significantly less than their cost, which just drives the value down even further.
So no, with the payout structure you describe, there's no way to get an expected return greater than your outlay. (But I should mention that occasionally, usually as some sort of promotion, lotteries will use payout structures where it actually is possible to have a greater expected return than cost.)--80.109.80.78 (talk) 20:25, 15 December 2013 (UTC)
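
To make the formula concrete, here is the break-even point it implies, with purely hypothetical numbers for the rollover A and ticket price C (my own illustration, not 80.109's):

```python
A = 10_000_000  # rollover carried into this week's jackpot (hypothetical)
C = 5.0         # ticket price (hypothetical)

for n in [1_000_000, 2_000_000, 3_636_364, 10_000_000]:
    ev = (A + 0.45 * C * n) / n  # expected value per ticket
    print(f"n = {n:>10,}   EV = {ev:.2f}")

# EV exceeds the ticket price only while n < A / (0.55 * C),
# about 3.6 million tickets with these numbers.
```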
In fact, I question your claim "This effect can be greatly increased by only buying tickets when the jackpot has not been won for several consecutive weeks". When the prize pool gets uncharacteristically large, people buy more tickets; I'd be curious to see the data for ticket sales vs. prize pool. I suspect that the expected value of a ticket peaks at a small rollover.--80.109.80.78 (talk) 20:56, 15 December 2013 (UTC)
National Lottery (Ireland)#History of Lotto doesn't say how many tickets were sold in total, only how many were sold to a syndicate, so I don't know their expected winnings but it sounds like they were positive. I don't understand why the lottery tried to stop the syndicate. At first it seems to me the lottery should be happy for anyone buying tickets, but maybe they were worried the syndicate would decrease total sales in future drawings. PrimeHunter (talk) 01:58, 16 December 2013 (UTC)
Yeah, I also don't see the issue (although that page definitely needs rewriting for neutral point of view). The syndicate got lucky that no one else had the same idea, or they probably would have walked away with a loss.--80.109.80.78 (talk) 02:17, 16 December 2013 (UTC)

December 16

all-invertible subspace

What is the largest dimension of a subspace of $M_{31}(\mathbb{R})$ in which all matrices except 0 are invertible? --18:57, 16 December 2013 (UTC) — Preceding unsigned comment added by 85.65.26.40 (talk)

One dimensional. The determinant defines a homogeneous polynomial of degree 31 on the projective space of M_31. Restricting this to a line in the projective space (which is a 2-plane in M_31) gives a real polynomial of degree 31 in a real variable. Since every real polynomial of odd degree has a root, the determinant must vanish somewhere on this line. So the only linear subspaces of the projective space in which the determinant is never zero are just the points (that is, the 1d subspaces of M_31). Sławomir Biały (talk) 19:34, 16 December 2013 (UTC)
Interestingly, this problem seems to get much more difficult in even dimensions. In $M_n(\mathbb{R})$ with $n \equiv 2 \pmod 4$, the identity and a complex structure on $\mathbb{R}^n$ span a two-dimensional subspace containing no nonzero singular matrix. If $n \equiv 0 \pmod 4$ then a quaternionic structure on $\mathbb{R}^n$ defines a four-dimensional subspace containing no nonzero singular matrix. More generally, when $2^k$ divides n, there is a faithful representation of a $2^k$-dimensional Clifford algebra on $\mathbb{R}^n$. The degree one elements of this algebra act as invertible linear transformations (they are elements of the Pin group plus dilations). Conjecturally, this is the optimal situation, modulo details, but I don't have a proof of this. Sławomir Biały (talk) 00:17, 18 December 2013 (UTC)
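
The n ≡ 2 (mod 4) case is easy to see concretely; a small numerical illustration for n = 2 (my addition, not from the thread):

```python
import numpy as np

# I and a complex structure J (J @ J = -I) span a 2-dimensional subspace of
# M_2(R) with no nonzero singular matrix: det(a*I + b*J) = a^2 + b^2.
I = np.eye(2)
J = np.array([[0.0, -1.0], [1.0, 0.0]])

rng = np.random.default_rng(0)
for _ in range(3):
    a, b = rng.standard_normal(2)
    print(np.linalg.det(a * I + b * J), a * a + b * b)  # the two numbers agree
```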

December 17

Knights problem

What is the optimum placement of knights on an indefinitely large chessboard (ignoring edge effects) so that every square (including squares on which the knights are placed) is attacked? "Optimum" means number of knights divided by total number of squares is minimised. When I first conceived of this question I assumed that the answer would be a repeating pattern but now I'm not even sure about that. — Preceding unsigned comment added by 86.151.119.39 (talk) 14:38, 17 December 2013 (UTC)

You can find a placement where every square is attacked precisely once, giving you a knight/square ratio of 1/8, the best possible. Here's one example: place a knight on a square, then move down 1 and place a knight on that square. Then move down 1 and right 2 and repeat. This will cover a diagonal band 8 squares high. You can then cover the entire board with repeated bands.--80.109.80.78 (talk) 15:11, 17 December 2013 (UTC)
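
That construction is easy to machine-check: the pattern is periodic with period 8 in both directions, so it suffices to verify it on an 8-by-8 torus. A small Python check (my addition, not from the thread):

```python
# Knights at (0,0) and (1,0), then repeatedly shifted down 2 and right 2,
# reduced mod 8 (one fundamental tile of the repeating bands).
knights = {((2 * i + j) % 8, (2 * i) % 8) for i in range(4) for j in range(2)}

MOVES = [(1, 2), (1, -2), (-1, 2), (-1, -2), (2, 1), (2, -1), (-2, 1), (-2, -1)]

for r in range(8):
    for c in range(8):
        hits = sum(((r + dr) % 8, (c + dc) % 8) in knights for dr, dc in MOVES)
        assert hits == 1, (r, c, hits)

print(len(knights), "knights per 64 squares; every square attacked exactly once")
```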
Figure 6 in gives another example. PrimeHunter (talk) 16:57, 17 December 2013 (UTC)

Ah, thanks, I thought the answer was more complicated. — Preceding unsigned comment added by 86.151.119.39 (talk) 20:32, 17 December 2013 (UTC)

December 18

Irreducible representations of sl(2;C)

Regarding an old question: https://en.wikipedia.org/Wikipedia:Reference_desk/Archives/Mathematics/2013_February_16#Notation_for_representation_theory_of_the_Lorentz_group

Would it be correct to say that the (m,n) representations for m ≠ 0, n = 0 are complex linear, while those for m = 0, n ≠ 0 are conjugate linear, that those with m ≠ n and m,n ≠ 0 are either "neither" or perhaps "sesquilinear", and finally that those with m = n are real linear?

(Sesquilinear supposedly means "one-and-a-half linear".) YohanN7 (talk) 17:32, 18 December 2013 (UTC)

The reason I ask is that I'm editing Representation theory of the Lorentz group a bit on these matters. Any input is welcome so it comes out right. YohanN7 (talk) 21:09, 18 December 2013 (UTC)

Proving That Ln'(x) = 1/x Without Using e = (1 + 1/n)^n

Where e is the sum of the reciprocals of the factorials, and ln = log_e. — 79.113.241.241 (talk) 21:03, 18 December 2013 (UTC)

x = exp(y)
dx/dy = x
ln(x) = y
(differentiate wrt y) d(ln(x))/dx · dx/dy = 1
d(ln(x))/dx = 1/(dx/dy) = 1/x — Preceding unsigned comment added by 86.160.218.11 (talk) 22:00, 18 December 2013 (UTC)
And how exactly would we know that the derivative of e^x is e^x? — 79.113.241.241 (talk) 22:08, 18 December 2013 (UTC)
You defined e^x using the power series. This can be differentiated, and the result that e^x is its own derivative follows. 2.25.141.83 (talk) 23:34, 18 December 2013 (UTC)
No. I defined it (i.e., the number e) as a simple sum. The more general Taylor series for the exponential function would be based on its derivatives: but it is precisely these derivatives whose expression we're seeking. — 79.113.241.241 (talk) 00:29, 19 December 2013 (UTC)
You are assuming that you already know what e^x means for any real x, given that you know the value of e. Actually, this is not so obvious, so instead we can define e^x to equal the power series, and then show that the resulting definition satisfies the expected power laws (and subsequently use the meaning of exp() and ln() to define the meaning of a^b generally for irrational numbers). 86.160.218.11 (talk) 01:14, 19 December 2013 (UTC)
And how precisely would we prove that? I'm not too good at multiplying two infinite series. — 79.113.241.241 (talk) 01:35, 19 December 2013 (UTC)
I think you can just set up the equation (1 + x + x^2/2! + x^3/3! + ...)(1 + y + y^2/2! + y^3/3! + ...) = 1 + (x + y) + (x + y)^2/2! + (x + y)^3/3! + ... and show that the coefficients of a general x^i y^j are the same on both sides (using the binomial theorem for the rhs). Of course, this probably omits some technical details of why it is valid to multiply infinite series term by term. Someone smarter than me will have to explain that part. 86.160.218.11 (talk) 02:11, 19 December 2013 (UTC)
I've tried with Mathematica for small values of n, and it does indeed work (first good news so far), BUT I cannot either "see" it or prove it, unfortunately... and I was kinda hoping someone might be able to guide me through it. — 79.113.241.241 (talk) 02:32, 19 December 2013 (UTC)
Sorry, I'm lost, what exactly are you trying to show? (Not what I outlined above for exp(x) exp(y) = exp(x + y), clearly, since there is no n in that.) 86.160.218.11 (talk) 02:38, 19 December 2013 (UTC)
Yes, I've limited the number of terms in each of the two series above to just a few, then used Mathematica to expand the parentheses. And I did manage to get the first few terms from the third series. So I'm half-way there, but I still need to figure out the general mechanism or algorithm as to how this happens. — 79.115.133.61 (talk) 16:09, 19 December 2013 (UTC)
Oh, I see. Well, as I say, just compare the coefficients of x^i y^j. On the lhs you get this term by multiplying x^i/i! by y^j/j!, so the coefficient is 1/(i!j!). On the rhs, x^i y^j must come from the term (x + y)^(i + j)/(i + j)!. Using the binomial theorem, the coefficient of x^i y^j in the expansion of (x + y)^(i + j) is (i + j)!/(i!j!); dividing by the (i + j)!, we end up with the same as the lhs for all i and j. 86.176.211.137 (talk) 18:08, 19 December 2013 (UTC)
Thanks ! :-) — 79.115.133.61 (talk) 18:16, 19 December 2013 (UTC)
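
The coefficient comparison is also easy to verify mechanically up to any fixed order. A sympy sketch (my addition, not from the thread):

```python
from sympy import symbols, Rational, factorial, expand

x, y = symbols('x y')
N = 6  # compare all terms of total degree at most N

lhs = expand(sum(x**i / factorial(i) for i in range(N + 1))
             * sum(y**j / factorial(j) for j in range(N + 1)))
rhs = expand(sum((x + y)**k / factorial(k) for k in range(N + 1)))

for i in range(N + 1):
    for j in range(N + 1 - i):
        want = Rational(1, factorial(i) * factorial(j))
        assert lhs.coeff(x, i).coeff(y, j) == want
        assert rhs.coeff(x, i).coeff(y, j) == want

print("coefficients of x^i y^j agree for all i + j <=", N)
```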

It is first of all easy to establish that $\frac{d}{dx}e^x = e^x$ using the power series definition of the exponential function (what other rigorous definition of it would you use?). We then use implicit differentiation of $e^y = x$ with respect to y.--Jasper Deng (talk) 01:58, 19 December 2013 (UTC)

Then prove that $\left(\sum \tfrac{1}{n!}\right)^x = \sum \tfrac{x^n}{n!}$ 79.113.241.241 (talk) 02:07, 19 December 2013 (UTC)
How do you know what $\left(\sum \tfrac{1}{n!}\right)^x$ means? 86.160.218.11 (talk) 02:13, 19 December 2013 (UTC)
How do we know what exponentiation means? Seriously? — 79.113.241.241 (talk) 02:29, 19 December 2013 (UTC)
The superscript notation is only a useful shorthand for a function that we define.--Jasper Deng (talk) 02:49, 19 December 2013 (UTC)
(ec) For irrational exponents, yes, seriously. 86.160.218.11 (talk) 02:53, 19 December 2013 (UTC)
Not quite sure why... $x^{\pi} = x^{3.14...} = x^{3+0.1+0.04+...} = x^{3} \cdot x^{0.1} \cdot x^{0.04} \cdots = x^{3} \cdot \sqrt[10]{x} \cdot \sqrt[100]{x^{4}} \cdots$ 79.115.133.61 (talk) 16:09, 19 December 2013 (UTC)
It's true you can do that, but it's horrible and arbitrary, and you then have the problem of showing that all the infinitely many ways you can do it converge to the same number. So much nicer to use exp()! 86.176.211.137 (talk) 18:11, 19 December 2013 (UTC)
Convergence is not a problem, since for each term the exponent of x is bounded (0...9), while the order of the radical grows exponentially ($10^k$), so the factors tend to 1. Or use the squeeze theorem on each term, of the form $x^{\frac{0}{b}} < x^{\frac{a}{b}} < x^{\frac{9}{b}}$. — 79.115.133.61 (talk) 18:30, 19 December 2013 (UTC)
Yeah, you can do it, but I still think as a definition it's ugly and arbitrary (arbitrary since you can write it in infinitely many ways, and there is no reason to choose one rather than the other). 18:43, 19 December 2013 (UTC)
(edit conflict) Why make life unnecessarily hard? It is rather coincidental really that the derivative of that particular power series is itself. The proof of that is simple, since e is equal to the power series on the right evaluated at x=1 (remember that $a^b = \exp(b*\ln(a))$ - the definition of the natural logarithm function means ln(a)=1 when a=e). Don't bother trying to expand the expression on the left; you don't even know how many terms you are adding up. (For this I'm assuming that the expression on the left means $\Pi\left(\sum \tfrac{1}{n!}\right)$.)--Jasper Deng (talk) 02:19, 19 December 2013 (UTC)
Side note: a flower-like star symbol is not the best choice for multiplication. We have no better choice in ASCII, but in LaTeX there are also \cdot and \times, usually more readable than a star – compare $b \cdot \ln(a)$ and $b \times \ln(a)$. --CiaPan (talk) 08:07, 19 December 2013 (UTC)
I think the problem with this question is that one needs to say what definition one is using, not what definition one isn't using. The article Exponential function gives three different equivalent ways to define the function, and there are more. One has to decide which one to start from. The first, the power series definition, is the obvious choice; the second starting point there is immediately equivalent to the conclusion, and the third definition is based on what they don't want to assume. Dmcq (talk) 13:17, 19 December 2013 (UTC)

Here's a naive approach. Let "log" denote the logarithm in any base. Then

$\log'(x) = \lim_{h\to 0} \frac{\log(x+h) - \log x}{h} = \lim_{h\to 0} \frac{1}{h} \log\left(1 + \frac{h}{x}\right).$

Now make a change of variables $u = h/x$ so that this becomes

$\frac{1}{x} \lim_{u\to 0} \frac{1}{u} \log(1 + u),$

which is 1/x times some constant (I won't prove that this limit exists, but it can be done using elementary estimates). The natural logarithm is then by definition the logarithm in the base such that this constant is equal to one. Sławomir Biały (talk) 14:22, 19 December 2013 (UTC)
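
A quick numerical illustration of that constant (my addition, not from the thread): for the natural log the limit is 1, while for log base 10 it is log10(e) ≈ 0.4343.

```python
import math

# (1/u) * log(1 + u) as u shrinks, for two choices of base
for u in [0.1, 0.01, 0.001, 1e-06]:
    print(f"u={u:<8}  ln: {math.log(1 + u) / u:.6f}  log10: {math.log10(1 + u) / u:.6f}")
```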

Slawomir, you're brilliant even when you're "naive" ! — 79.115.133.61 (talk) 16:12, 19 December 2013 (UTC)

trig equation

(sqrt(2) - 1)(1 + sec α) = tan α

By "inspection" α = 45deg, but how could I show this rigorously (and show whether it is the only solution)? 2.25.141.83 (talk) 21:58, 18 December 2013 (UTC)

There is probably a tricky solution using half-angle formulas, but a more general method is to convert to an algebraic equation using the substitution t = tan(α/2), with
$\cos\alpha = \frac{1 - t^2}{1 + t^2}, \qquad \sin\alpha = \frac{2t}{1 + t^2}.$
This reduces the equation, after a bit of cancellation, to t = √2 - 1. Plugging this back in gives cos α = sin α = 1/√2 or α = π/4 + 2kπ. --RDBury (talk) 00:37, 19 December 2013 (UTC)
Are you sure those are all the solutions? 86.160.218.11 (talk) 01:23, 19 December 2013 (UTC)
RDBury's method is a parameterization of the whole unit circle, with the exception of the point $(-1, 0)$ where $t = \pm\infty$, which is actually also a solution of the original problem. (This parameterization with sine and cosine interchanged gives all solutions, since then the point at infinity is no longer in the domain of the original problem.) Sławomir Biały (talk) 14:06, 19 December 2013 (UTC)
What about α = π, for instance? That is also a solution to the original problem but is not of the form α = π/4 + 2kπ. 86.176.211.137 (talk) 14:46, 19 December 2013 (UTC)
You were inattentive. I do discuss this case. Sławomir Biały (talk) 15:17, 19 December 2013 (UTC)
Sorry, I didn't read it properly. 86.176.211.137 (talk) 18:13, 19 December 2013 (UTC)
sec is 1/cos and tan is sin/cos, so multiply through by cos α. You get something like
(√2 - 1) (cos α + 1) = sin α
which can be solved various ways. E.g. from the formula for sin (a + b) you can rewrite it as
sin (α + A) = B
for some A and B which is easily solved.--JohnBlackburnedeeds 00:39, 19 December 2013 (UTC)
You should remember, however, as always when multiplying an equation by something, that the multiplier may be zero, which degenerates the equation to the form 0 = 0, thus introducing false solutions. Here cos(α) might be zero, so the final equation would be satisfied by any α = nπ + π/2 (for integer n), which is not necessarily true for the original equation. Luckily, in the problem discussed here, the original equation contains a term tan(α), which excludes the zeros of cosine from the domain. Anyway, this exclusion should be made explicitly in the final solution, just to be on the safe side... --CiaPan (talk) 08:00, 19 December 2013 (UTC)
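
RDBury's substitution can also be pushed through a computer algebra system; a sympy sketch (my addition, not from the thread):

```python
from sympy import symbols, solve, sqrt

t = symbols('t')  # t = tan(alpha/2)
cos_a = (1 - t**2) / (1 + t**2)
sin_a = 2 * t / (1 + t**2)

# Original equation multiplied through by cos(alpha):
# (sqrt(2) - 1)(cos a + 1) = sin a
eq = (sqrt(2) - 1) * (cos_a + 1) - sin_a
print(solve(eq, t))  # [-1 + sqrt(2)], i.e. alpha = pi/4 (mod 2*pi)

# The point t = +-infinity (alpha = pi) lies outside this parameterization
# and must be checked separately, as discussed above.
```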

December 19

sin(infinity)

Look at Indeterminate form. Is $\sin(\infty)$ indeterminate? If not, please explain why. Georgia guy (talk) 01:47, 19 December 2013 (UTC)

It is. The limit does not exist: whatever value you propose for it, there is an epsilon for which no cutoff satisfies the epsilon-delta condition, since sin keeps oscillating.--Jasper Deng (talk) 01:50, 19 December 2013 (UTC)

Nevertheless, this method of evaluating the Fresnel integrals at infinity seems to work if we assume this limit is 0 (which of course it is not).--Jasper Deng (talk) 01:54, 19 December 2013 (UTC)


No, it's not an "indeterminate form". The term "indeterminate form" is a historical one and is restricted to an enumerated list of, I think, seven, and $\sin(\infty)$ did not happen to be included.
However, like the enumerated indeterminate forms, it is a function applied to a point where there is no continuous extension of that function to that point. So it could have been an indeterminate form. It just didn't make the list, because no one thought about it. There is no point in adding it now, because we have better ways of talking about such functions. --Trovatore (talk) 07:53, 19 December 2013 (UTC)
"Indeterminate form" is applied to the same sort of thing, but using operations like + and × rather than functional form like add(1,2). The problem with those straightforward operators is that people tend to do transformations on them without thinking, whereas with functions they normally don't; they apply general rules for limits of functions. Dmcq (talk) 13:04, 19 December 2013 (UTC)

Product of normal distrubution and another distrubution

Hi all,

I know this is probably a really simple question, but it's been a while since my statistics A-level... Performance of employees at a company is assumed to be standard normally distributed, with each employee given a rating according to the following table:

Rating:  Significantly Below | Below | Average | Above | Significantly Above
Share:   Bottom 3%           | 20%   | 55%     | 20%   | Top 3%

They are then given a bonus amount depending on rating:

Rating:  Significantly Below | Below | Average | Above | Significantly Above
Bonus:   0%                  | 3%    | 6%      | 8.4%  | 11.25%

Clearly this second distribution is not symmetrical. How would you find the product of these two distributions to get a third, so that you could compare the bonus of one employee with the rest of the company, or with the average bonus? And what would be a measure of its asymmetry?

Thanks for your help! 80.254.147.164 (talk) 14:32, 19 December 2013 (UTC)

If you pick an employee at random, there's a 3% chance his bonus is 0%, a 20% chance his bonus is 3%, a 55% chance his bonus is 6%, a 20% chance his bonus is 8.4%, and a 3% chance his bonus is 11.25%. That's the distribution. 150.203.188.53 (talk) 16:57, 19 December 2013 (UTC)
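
The asymmetry the question asks about is usually measured by the skewness of this distribution; a short Python sketch (my addition, not from the thread):

```python
probs = [0.03, 0.20, 0.55, 0.20, 0.03]
bonus = [0.00, 0.03, 0.06, 0.084, 0.1125]

mean = sum(p * b for p, b in zip(probs, bonus))
sd = sum(p * (b - mean) ** 2 for p, b in zip(probs, bonus)) ** 0.5
skew = sum(p * ((b - mean) / sd) ** 3 for p, b in zip(probs, bonus))

print(f"mean bonus {mean:.4%}, standard deviation {sd:.4%}, skewness {skew:.3f}")
```

A skewness of zero would mean a symmetric distribution; with these numbers it comes out negative (about -0.34), reflecting the relatively long lower tail created by the 0% bonus.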