This is the talk page for discussing improvements to the Floating-point arithmetic article. This is not a forum for general discussion of the article's subject.
== Software or book side of history ==
I just reverted a bit about the Pilot ACE in history because it used software to emulate floating point. However, it occurs to me that there might be something worthwhile in the bit about J. H. Wilkinson, ''Rounding errors in algebraic processes''. Is there evidence about who first wrote a book about floating point, or that this was a particular turning point? ] (]) 18:46, 6 February 2012 (UTC)
== IEEE 754 == | |||
I have added a section discussing the "big picture" on the rationale and use for the IEEE 754 features which often gets lost when discussing the details. | |||
I plan to add specific references for the points made there (from Kahan's web site). It would be good to expand the examples and add additional ones as well.
] (]) —Preceding ] comment added 11:22, 19 February 2012 (UTC).
:You need to cite something saying these were accepted rationales for it. Citations point to specific books, journals, or newspapers, and preferably page number ranges. ] (]) 13:51, 19 February 2012 (UTC)
Added direct citations as requested. | |||
] (]) —Preceding ] comment added 18:20, 19 February 2012 (UTC).
:Thanks. My feeling about Kahan and his diatribe against Java is that he just doesn't get what programmers have to do when testing a program. Having a switch to enable lax typing of intermediate results where you know it will only be run in environments you've tested is a good idea, but that wasn't what Java was originally designed for. The section about extended precision there seems undue in length, as I'm pretty certain other considerations like signed zero and denormal handling were the main original considerations where it differed from previous implementations. ] (]) 20:37, 19 February 2012 (UTC)
:Although I referenced Kahan's Java paper several times, I certainly didn't want this section to appear as a slight against Java. Kahan has several other papers discussing the need for extended precision that do not mention Java-- I will replace the current references with those in the near future, and try to trim it down (although I don't think that that reference is a diatribe against Java, just against its numerics). I certainly didn't want to get into the tradeoffs between improved numerical precision of results versus exact reproducibility in Java in this section. I do however think that it is important to clarify the intended use of the IEEE754 features in an introductory article like this, which can get lost in detailed descriptions of the features. In particular, I find that there is *wide* misunderstanding of the intended use of, and need for, extended precision amongst the programming community, particularly as extended precision was historically not supported in several RISC processors, and thus it is underused by programmers, even when targeting the x86 platform for e.g. HPC (even when these same programmers would carry additional significant figures for intermediate calculations if doing the same computations by hand, as alluded to in this section). Also, Kahan's descriptions of work on the design of the x87 (based on his experience designing HP calculators which use extended precision internally) makes it clear that extended precision was intended as a key feature (indeed a recommended feature) of IEEE754, compared with previous implementations. | |||
] (]) 00:56, 20 February 2012 (UTC) | |||
:As far as I'm aware the main other rationales were | |||
::To have a sound mathematical basis in that results were correctly rounded versions of accurate results and also so reasoning about the calculations would be easier. | |||
::Round to even was used to improve accuracy. In fact this is much more important than extended precision if the double storage mode is only used for intermediate calculations. Using extended precision only gives about one extra bit overall at the end if values in arrays are in doubles. The main reason I believe they were put in was that it made calculating mathematical functions much easier and more accurate; they can also be used in inner routines with benefit.
::Biased rounding was put in I believe to support interval arithmetic - another part of being able to guarantee the results of calculations. ] (]) 15:43, 20 February 2012 (UTC) | |||
:::''Using extended precision only gives about one extra bit overall at the end if values in arrays are in doubles''. This is false in general; you must be thinking of some special cases where not many intermediate calculations happen before rounding to double for storage. For a counterexample, e.g. consider a loop to take a dot product of two double-precision arrays (not using Kahan summation etc.) ] (]) 21:16, 20 February 2012 (UTC)
::::You would normally get very little advantage in that case over round to even with so few intermediate calculations. And for longer calculations round to even wins over just using a longer mantissa and rounding down. You only get a worthwhile gain if the storage is in extended precision. ] (]) 21:53, 20 February 2012 (UTC) | |||
:::::That is certainly not the case in general. The examples you are thinking of are using simple exactly rounded single arithmetic expressions-- the advantage of extended precision is avoiding loss of precision in more complicated numerically unstable formulae-- e.g. it is easy to construct examples where even computing a quadratic formula discriminant can cause a massive loss of ULPs when computed in double but not in double extended. Several examples are given in the Kahan references. This is in addition to the advantage of the extended exponent in avoiding overflow in e.g. dot products. ] (]) 00:16, 22 February 2012 (UTC)
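A minimal sketch of the kind of intermediate-precision effect being discussed, using a short dot product whose large terms cancel (illustrative values only; it assumes <code>long double</code> is wider than <code>double</code>, as with the x87 80-bit format on typical x86 GCC/Clang builds; with MSVC the two types are identical and both results come out the same, and an old 32-bit x87 build may keep the double accumulator in an extended register anyway):

<syntaxhighlight lang="c">
#include <stdio.h>

#define N 4

int main(void) {
    /* Illustrative data: the 1e16 terms cancel, so the 0.5s decide the answer. */
    double a[N] = { 1e16, 0.5, -1e16, 0.5 };
    double b[N] = { 1.0,  1.0,  1.0,  1.0 };

    double      sd = 0.0;   /* accumulate in double             */
    long double se = 0.0L;  /* accumulate in extended precision */
    for (int i = 0; i < N; i++) {
        sd += a[i] * b[i];
        se += (long double)a[i] * b[i];
    }
    /* The exact dot product is 1.  The double accumulator absorbs the first
       0.5 into 1e16 (where the spacing between doubles is 2) and ends with
       0.5; the extended accumulator keeps it and ends with 1. */
    printf("double accumulator:   %.17g\n", sd);
    printf("extended accumulator: %.17g\n", (double)se);
    return 0;
}
</syntaxhighlight>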
:::When you say ''Round to even was used to improve accuracy.'', I take it you are mainly referring to the exact rounding: breaking ties by round to even does avoid some additional statistical biases, but it is rather subtle (might be worth mentioning in the main text though...). ] (]) 00:16, 22 February 2012 (UTC)
::: ''Biased rounding was put in I believe to support interval arithmetic''. Yes, I believe directed rounding was included to support interval arithmetic, but also for debugging numerical stability issues-- if an algorithm gives drastically different results under round to + and - infinity then it is likely unstable. ] (]) 00:16, 22 February 2012 (UTC) | |||
::: ''As far as I'm aware the main other rationales were... to have a sound mathematical basis in that results were correctly rounded versions of accurate results and also so reasoning about the calculations would be easier.''. Yes, the exact rounding is an important point-- I have added some additional text earlier in the article to expand on this. It is true that, like previous arithmetics, having a precise specification to allow expert numerical analysts to write robust libraries was an important consideration, but the unique aspect of IEEE-754 is that it was also aimed at a broad market of non-expert users and so I focused in the section on the robustness features relevant to that (I will add some text highlighting that aspect as well though). ] (]) 00:16, 22 February 2012 (UTC) | |||
::::Well exact rounding, but I thought it better to specify the precise format they have. The point is that rounding rather than truncating is what really matters. With rounding the error only tends to go up with the number of computations as the square root of the number of operations, whereas with directed rounding it goes up linearly. Even the reduction of bias by round to even matters in this. You always get something else putting in a little bias so it is not as good as this, but directed rounding is really bad. You're better off just perturbing the original figures for stability checking.
::::The mathematical basis makes it much easier to do things like construct longer precision arithmetic packages easily; in fact the fused multiply-add is particularly useful for this. ] (]) 00:27, 22 February 2012 (UTC)
:::::The use of directed rounding for diagnosis of stability issues is discussed here http://www.cs.berkeley.edu/~wkahan/Stnfrd50.pdf and in other references at that web site. It also discusses why perturbation alone is not as useful. IEEE 754-2008 annex B states this explicitly-- "B.2 Numerical sensitivity: Debuggers should be able to alter the attributes governing handling of rounding or exceptions inside subprograms, even if the source code for those subprograms is not available; dynamic modes might be used for this purpose. For instance, changing the rounding direction or precision during execution might help identify subprograms that are unusually sensitive to rounding, whether due to ill-condition of the problem being solved, instability in the algorithm chosen, or an algorithm designed to work in only one rounding-direction attribute. The ultimate goal is to determine responsibility for numerical misbehavior, especially in separately-compiled subprograms. The chosen means to achieve this ultimate goal is to facilitate the production of small reproducible test cases that elicit unexpected behavior." ] (]) 01:04, 22 February 2012 (UTC)
::::::The uses that somebody makes of features are quite a different thing from the rationale for why somebody would pay to have them implemented. The introduction to the standard gives a succinct summary of the main reasons for the standard. I'll just copy the latest here so you can see
:a) Facilitate movement of existing programs from diverse computers to those that adhere to this standard as well as among those that adhere to this standard. | |||
:b) Enhance the capabilities and safety available to users and programmers who, although not expert in numerical methods, might well be attempting to produce numerically sophisticated programs. | |||
:c) Encourage experts to develop and distribute robust and efficient numerical programs that are portable, by way of minor editing and recompilation, onto any computer that conforms to this standard and possesses adequate capacity. Together with language controls it should be possible to write programs that produce identical results on all conforming systems. | |||
:d) Provide direct support for | |||
::― execution-time diagnosis of anomalies | |||
::― smoother handling of exceptions | |||
::― interval arithmetic at a reasonable cost. | |||
:e) Provide for development of | |||
::― standard elementary functions such as exp and cos | |||
::― high precision (multiword) arithmetic | |||
::― coupled numerical and symbolic algebraic computation. | |||
:f) Enable rather than preclude further refinements and extensions. | |||
::::::There are other things but this is what the basic rationale was and is. Directed rounding was for interval arithmetic. ] (]) 01:56, 22 February 2012 (UTC) | |||
:::::::Thanks. Actually, I believe that "d) Provide direct support for― execution-time diagnosis of anomalies" is referring to this use of directed rounding to diagnose numerical instability. Certainly Kahan makes it clear that he considered it a key usage from the early design of the x87. I agree that its use for interval arithmetic was also considered from the beginning. ] (]) 02:11, 22 February 2012 (UTC) | |||
::::::::No that refers to identification and methods of notifying the various exceptions and the handling of the signalling and quiet NaNs. Your reference from 2007 does not support in any way that arbitrarily jiggling the calculations using directed rounding was considered as a reason to include directed rounding in the specification. He'd have been just laughed at if he had justified spending money on the 8087 for such a purpose when there are easy ways of doing something like that without any hardware assistance. ] (]) 08:23, 22 February 2012 (UTC) | |||
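For what it's worth, the diagnostic use described above is easy to try from C99: re-run the same code under different rounding directions and see whether the answers move. A small sketch (the harmonic-sum loop is just a stand-in for any computation under suspicion; GCC/Clang may need <code>-frounding-math</code> or similar so the mode changes are honoured):

<syntaxhighlight lang="c">
#include <stdio.h>
#include <fenv.h>

#pragma STDC FENV_ACCESS ON

/* Stand-in for a computation whose sensitivity to rounding we want to probe. */
static double suspect_computation(void) {
    double s = 0.0;
    for (int i = 1; i <= 1000000; i++)
        s += 1.0 / i;   /* every division and addition is rounded */
    return s;
}

int main(void) {
    const int   modes[] = { FE_TONEAREST, FE_DOWNWARD, FE_UPWARD };
    const char *names[] = { "to nearest", "downward  ", "upward    " };

    for (int m = 0; m < 3; m++) {
        fesetround(modes[m]);
        /* A large spread between these results suggests the computation is
           unusually sensitive to rounding (unstable or ill-conditioned). */
        printf("%s  %.17g\n", names[m], suspect_computation());
    }
    fesetround(FE_TONEAREST);
    return 0;
}
</syntaxhighlight>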
- Both "significand" and "mantissa" are used to describe the non-exponent part of a floating-point number, but "mantissa" is far more common, so I think it's the better choice. (Google: "floating-point mantissa" yields 672,000 results; "floating-point significand" yields 136,000 results). | |||
== Trivia removed == | |||
I removed the bit saying that the full precision of extended precision is attained when extended precision is used. The point about the algorithm is that it converges using the precision used. We don't need to put in the precisions of single, double and extended precision versions of the algorithm. ] (]) 23:23, 23 February 2012 (UTC)
:::I disagree that it is trivia-- it is a good example to also illustrate the earlier discussions on the usage of extended precision. In any case, to make it easier to find for those who may be interested in the information: the footnote to the final example, giving the precision using double extended for internal calculations, is included here- | |||
:::"As the recurrence is applied repeatedly, the accuracy improves at first, but then it deteriorates. It never gets better than about 8 digits, even though 53-bit arithmetic should be capable of about 16 digits of precision. When the second form of the recurrence is used, the value converges to 15 digits of precision. Footnote: if intermediate calculations are carried at a higher precision using double extended (x87 80 bit) format, it reaches 18 digits of precision, which is the full target double precision." ] (]) 23:37, 23 February 2012 (UTC) | |||
:It just has nothing to do with extended precision. The first algorithm would go wrong just as badly with extended precision and the second one behaves exactly like double. There is nothing of note here. Why should it have all the various precisions in? The same thing would happen with float or quad precision. All it says is that the precision for different precisions is different. Also a double cannot hold 18 digits of precision; used as an intermediate for double you'd at most get one bit of precision extra. ] (]) 00:50, 25 February 2012 (UTC)
::::Agreed that the footnote does nothing to clarify the particular point being made by that example-- that wasn't the aim though. The intention was to also utilise the example to demonstrate the utility of computing intermediate values to higher precision than needed by the final destination format to limit the effects of round-off. In that sense it is an example for the earlier discussion on extended precision (and also the section on approaches to improve accuracy). Perhaps the text "Footnote: if intermediate calculations are carried at a higher precision using double extended (x87 80 bit) format, it reaches 18 digits of precision, which is the full target double precision (see discussion on extended precision above)." would be clearer. Agreed it is not the most striking example of this, but still demonstrates the idea-- perhaps a separate, more striking and specific example would be preferable, I will see what I can find. ] (]) 04:52, 25 February 2012 (UTC)
:::::It does not illustrate that. What gives you the idea it does? If anything it is an argument against what was said before. Using extended precision in the intermediate calculation and storing back as double does not give increased precision in the final result. The 18 digits only applies to the extended precision, it does not apply to the double result. The 18 digits is not the target precision of a double. A double can only hold 15 digits accurately. There is no way to stick the extra precision of the extended precision into the target double. ] (]) 09:53, 25 February 2012 (UTC)
::::::IEEE 754 double precision gives from 15 to 17 decimal digits of precision (17 digits if round-tripping from double to text back to double). When the example is computed with extended precision it gives 17 decimal digits of precision, so if the returned double was to be used for further computation it would have less roundoff error, in ULP (at least one extra decimal digit worth). Although, as you say, if the double result is printed to 15 decimal digits this extra precision will be lost. I agree that it is not a compelling example-- a better example could show a difference in many decimal significant digits due to internal extended precision. ] (]) 23:21, 25 February 2012 (UTC) | |||
:::::::The 17 digits for a round trip is only needed to cope with making certain that rounding works okay. The actual precision is just less than 16 digits, about 15.95 if one cranks the figures. Printing has nothing to do with it. I was just talking about the 53 bits of precision information held within double precision format expressed as decimal digits. You can't shove any more information into the bits. The value there is about 1 ulp out and using extended precision would gain that back. This is what I was saying about extended precision being very useful for getting accurate maths functions, straightforward implementations in double will very often be 1 ulp out without special work whereas the extended precision result will very often give the value given by rounding the exact value. ] (]) 00:08, 26 February 2012 (UTC) | |||
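The figures being traded above (15, 17, and "about 15.95") are all standard quantities and can be printed directly; a tiny sketch (<code>DBL_DECIMAL_DIG</code> needs C11):

<syntaxhighlight lang="c">
#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void) {
    /* 53 significant bits expressed as decimal digits: about 15.95 */
    printf("53*log10(2)     = %.4f\n", 53 * log10(2.0));
    /* Decimal digits that always survive a text -> double -> text round trip */
    printf("DBL_DIG         = %d\n", DBL_DIG);
    /* Decimal digits needed so that a double -> text -> double round trip is exact */
    printf("DBL_DECIMAL_DIG = %d\n", DBL_DECIMAL_DIG);
    return 0;
}
</syntaxhighlight>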
::::::::::Ideally, what should be added is a more striking example of using excess precision in intermediate computations to protect against numerical instability. The current one can indeed demonstrate this if excess precision is carried to IEEE quad precision, in which case the numerical unstable version gives good results. I have added notes to that effect which will do as an example for now. There are many examples also showing this using only double extended (e.g. even as simple as computing the roots of a quadratic equation), and I will add such an example in the future.. but not for a while (by the way, I think double extended adds more than 1 ULP but I haven't checked that). ] (]) 06:54, 26 February 2012 (UTC) | |||
:::::::::::That's not true either because how does one know when to stop? Using quadruple precision would still diverge. ] (]) 11:45, 26 February 2012 (UTC) | |||
::::::::::::::::Yes that is so- once it does reach the correct value it stays there for several iterations (at double precision) but does eventually diverge from it again, so a stopping criterion of when the value does not change at double precision could be used. But yes, I am not completely happy with that example for that reason-- feel free to remove it if you feel it is misleading. Actually Kahan has several very compelling examples in his notes-- I will post one here in the next week or so. ] (]) 14:41, 26 February 2012 (UTC) | |||
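For reference, a sketch of the recurrence being discussed (assuming it is the article's Archimedes-style computation of pi, starting from tan(30°) = 1/sqrt(3) for a circumscribed hexagon and halving the angle each step); the first update subtracts nearly equal quantities, the second is the algebraically equivalent stable rewrite:

<syntaxhighlight lang="c">
#include <stdio.h>
#include <math.h>

int main(void) {
    double t_unstable = 1.0 / sqrt(3.0);   /* tan(pi/6), circumscribed hexagon */
    double t_stable   = t_unstable;
    double sides      = 6.0;

    for (int i = 0; i <= 25; i++) {
        /* sides * tan(pi/sides) approaches pi from above as the polygon is refined */
        printf("%2d  unstable: %.15f   stable: %.15f\n",
               i, sides * t_unstable, sides * t_stable);

        /* Half-angle step, two algebraically equal forms: */
        t_unstable = (sqrt(t_unstable * t_unstable + 1.0) - 1.0) / t_unstable; /* cancellation  */
        t_stable   = t_stable / (sqrt(t_stable * t_stable + 1.0) + 1.0);       /* no cancellation */
        sides     *= 2.0;
    }
    return 0;
}
</syntaxhighlight>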
The use of extra precision can be illustrated easily using differentiation. If the result is to be single precision then using double precision for all the calculations is a good idea because of the loss of significance when subtracting two values of the function. ] (]) 12:00, 26 February 2012 (UTC)
::: ok yes, that could be a good example-- I will see what I can come up with. ] (]) 14:41, 26 February 2012 (UTC) | |||
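A small sketch of the suggestion above: a one-sided difference quotient for the derivative of sin at 1, where the final answer only needs to be a float; carrying the intermediates in double protects the subtraction from cancellation (values and step size are illustrative only):

<syntaxhighlight lang="c">
#include <stdio.h>
#include <math.h>

int main(void) {
    const double x = 1.0, h = 1e-4;

    /* All intermediates in float: the subtraction of two nearly equal float
       values leaves only a few significant digits. */
    float d_float = (sinf((float)(x + h)) - sinf((float)x)) / (float)h;

    /* Intermediates in double, result then stored as a float. */
    float d_double = (float)((sin(x + h) - sin(x)) / h);

    printf("float intermediates : %.9g\n", d_float);
    printf("double intermediates: %.9g\n", d_double);
    printf("cos(1)              : %.9g\n", cos(1.0));
    return 0;
}
</syntaxhighlight>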
== 01010111 01101000 01100001 01110100 00101110 00101110 00101110 00111111 (What...?) == | |||
The section on internal representation does not explain how decimals are converted to floating-point values. I think it will be helpful if we add a step-by-step procedure that the computer follows. Thanks! ] (]) 02:16, 25 February 2012 (UTC) | |||
"12.345 is a floating-point number in a base-ten representation with five digits of precision...However, 12.345 is not a floating-point number with five base-ten digits of precision." I recognize the distinction made (a number with 5 base-ten digits of precision vs. a base-ten representation of a number with five digits of precision) and I suspect the author intended to observe that a binary representation of 12.345 would not have five base-ten digits of precision, but I can't divine what useful thing is intended to have been communicated there, so I've removed it. If I'm missing something obvious in the interpretation of this line, I suspect many others could, and encourage a more direct explanation if it's replaced. ] (]) 18:44, 24 July 2023 (UTC) | |||
:This gives an example of conversion and the articles on the particular formats give other examples. Wikipedia does not in general provide step-by-step procedures, it describes things, see ]. ] (]) 02:24, 25 February 2012 (UTC)
::I just thought it was kind of unclear. Besides, doing so might actually help this article get to GA status. | |||
::You see, I'm trying to design an algorithm for getting the mantissa, the exponent, and the sign of a <code>float</code> or <code>double</code>. So in case anyone else actually cares about that stuff. For the record, the storage is little-endian, so you have to reverse the bit order. ] (]) 02:50, 25 February 2012 (UTC) | |||
:::It would stop FA status. Have a look at the articles about the individual formats. They describe the format in quite enough detail. Any particular algorithm is up to the user; they are not interesting or discussed in secondary sources. ] (]) 10:01, 25 February 2012 (UTC)
:::The closest in Wikipedia for the sort of stuff you're talking about is if somebody wrote something for Wikibooks. Have you had a look at the various external sites? Really to me what you're talking about sounds like some homework exercise and we shouldn't help with those except perhaps to give hints. ] (]) 10:20, 25 February 2012 (UTC)
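For the record, a common way to do the decomposition asked about above (a sketch only, assuming the platform's <code>double</code> is IEEE 754 binary64; copying the bits out with <code>memcpy</code> avoids having to think about byte order or aliasing explicitly):

<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    double x = -12.345;
    uint64_t bits;
    memcpy(&bits, &x, sizeof bits);   /* same bit pattern, now addressable as an integer */

    unsigned sign     = (unsigned)(bits >> 63);
    unsigned exponent = (unsigned)((bits >> 52) & 0x7FF);   /* 11 bits, biased by 1023 */
    uint64_t fraction = bits & 0xFFFFFFFFFFFFFULL;          /* low 52 bits             */

    printf("sign=%u  exponent=%u (unbiased %d)  fraction=0x%013llx\n",
           sign, exponent, (int)exponent - 1023,
           (unsigned long long)fraction);
    return 0;
}
</syntaxhighlight>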
== imho, "real numbers" is didactically misleading == | |||
I'd like to propose to change the beginning of the first sentence, because the limited number of bits in the significand only allows for storing rational binary numbers. Because two is a prime factor of ten, this means only rational decimal numbers can be stored as well. Concluding, I'd like to propose to replace "real" by "rational" there.
] (]) 13:17, 25 February 2012 (UTC) | |||
:Definitely not. That is a bad idea. They are approximations to real numbers. The concept of rational number just doesn't come into it. That they are rational is just a side effect. ] (]) 14:32, 25 February 2012 (UTC) | |||
:In the section 'Some other computer representations for non-integral numbers' there are some systems that can represent some irrational numbers. For instance, a logarithmic system does not necessarily represent rational numbers. ] (]) 14:36, 25 February 2012 (UTC)
== Lead section edits ==
I edited the lead section to try to tidy it up in the following ways:
- Previously the opening sentence was "In computing, floating-point arithmetic (FP) is arithmetic using formulaic representation of real numbers as an approximation to support a trade-off between range and precision." I found this opaque (what is "arithmetic using formulaic representation"?) and oblique (it doesn't tell you what a floating-point number is, it only talks about an attempted "trade-off"). I think Wikipedia articles should open by defining the thing at hand directly, rather than talking around it. Therefore, the new opening sentence explicitly describes floating-point representation: "In computing, floating-point arithmetic (FP) is arithmetic that represents real numbers approximately, using an integer with a fixed precision, called the mantissa, scaled by an integer exponent of a fixed base."
- Both "significand" and "mantissa" are used to describe the non-exponent part of a floating-point number, but "mantissa" is far more common, so I think it's the better choice. (Google: "floating-point mantissa" yields 672,000 results; "floating-point significand" yields 136,000 results).
- Previously, the topic of the large dynamic range of floating-point numbers was mentioned twice separately; these mentions have been merged into a single paragraph.
- The links for examples of magnitude are changed to point to the actual examples mentioned (galactic distances and atomic distances).
Feel free to discuss here. — Ka-Ping Yee (talk) 23:31, 12 October 2022 (UTC)
== Schubfach is not WP:OR ==
I'm not quite sure why some of you consider Schubfach as WP:OR. Several implementations have been around for several years already; in particular, it was adopted into Nim's standard library a year ago and is working fine. It's true that the article is not formally reviewed, but honestly being published in a peer-reviewed conference/journal does not necessarily give that much credit in this case. For example, one of the core parts (minmax Euclid algorithm) of the paper on Ryu contains a serious error, and this has been pointed out by several people, including Nazedin (a core contributor to Schubfach) if I recall correctly.
The main reason why the Schubfach paper has not been published in a peer-reviewed journal, as far as I remember, is not because the work has not been verified, but rather simply because the author didn't feel any benefit in going through all the paperwork for journal publishing (things like fitting into the artificial page limit). The reason why it is still not accepted in OpenJDK (is it? even if it's not merged yet, it will make it soon) is probably a lack of people who can and are willing to review the algorithm, and submitting the paper to a journal does not magically create such reviewers. (Of course they will do some amount of review, but it is very, very far from being perfect, which is why things like the errors in the Ryu paper have not been caught in the review process.)
The point is, Schubfach as an algorithm was already completed a long time ago, in 2017 as far as I believe, and at least two implementations (one in Java and one in C++) have been around at least since 2019; the C++ one has been adopted into the standard library of a fairly popular language (Nim), and you can even find several more places where it has been adopted (Roblox, a very popular game in the US, for example). So what really is the difference from Ryu? The only difference I can tell is that Ryu has a peer-reviewed journal paper, but as I elaborated, that isn't that big a difference as far as I can tell. You also mentioned new versions of the paper, and it felt as if you think Schubfach is sort of a WIP project. If that's the case, then no, the new versions are just minor fixes/more clarifications rather than big overhauls. If the Ryu paper were not published in a journal, probably the author of Ryu would have done the same kind of revisions (and fixed the error mentioned).
In summary, I think at this point Schubfach is definitely an established work which has no less credibility than Ryu and others. 2600:1700:7C0A:1800:24DF:1B93:6E37:99D2 (talk) 01:09, 10 November 2022 (UTC)
:In the meantime, I've learned by e-mail that the paper got a (possibly informal) review by serious people. So, OK to re-add it, but it is important to give references showing that it is used. And please, give the latest version of the paper and avoid typos in the WP text. And instead of "Apparently", try to give facts (i.e., what is really meant by "apparently"). Thanks. — Vincent Lefèvre (talk) 01:26, 10 November 2022 (UTC)
== Digits of precision, a confusing early statement ==
I have removed the portion after the ellipses from the following text formerly found in the article: "12.345 is a floating-point number in a base-ten representation with five digits of precision...However, 12.345 is not a floating-point number with five base-ten digits of precision." I recognize the distinction made (a number with 5 base-ten digits of precision vs. a base-ten representation of a number with five digits of precision) and I suspect the author intended to observe that a binary representation of 12.345 would not have five base-ten digits of precision, but I can't divine what useful thing is intended to have been communicated there, so I've removed it. If I'm missing something obvious in the interpretation of this line, I suspect many others could, and encourage a more direct explanation if it's replaced. john factorial (talk) 18:44, 24 July 2023 (UTC)
:The sentence was made nonsensical by this revision by someone who mistook 12.3456 for a typo rather than a counterexample: https://en.wikipedia.org/search/?title=Floating-point_arithmetic&diff=prev&oldid=1166821013
:I have reverted the changes, and added a little more verbiage to emphasize that 12.3456 is a counterexample. Taylor Riastradh Campbell (talk) 20:56, 24 July 2023 (UTC)
== Computable reals ==
Concerning Special:Diff/1234874429, I want to thank User:Vincent_Lefèvre for the fast response. I agree that mentioning real closed field is off-topic. However, I still have a strong impression that computable reals should be listed as a separate bullet. I believe it is different from symbolic computation. I mean that arithmetic operations are not “aware” of <math>\pi</math> being <math>\pi</math>. Should I just propose a new edit? Korektysta (talk) 20:50, 17 July 2024 (UTC)
:@Korektysta: Yes, but then, the first sentence of the section (before the list) should avoid the term "representing". It should rather talk about the arithmetic (which is some kind of representation and a way of working with it). BTW, I think that the list of alternatives to floating-point numbers should come later in the article, not in the first section. — Vincent Lefèvre (talk) 11:51, 20 July 2024 (UTC)
::I had to think for a moment, but I still believe that computable reals constitute a separate representation. As far as I remember, the CoRN library does not remember the computation tree, but real numbers are represented as functions.
::I agree that the subsection could be moved. For example, from the overview to the end of the article, just before See also, as a separate section. Korektysta (talk) 22:56, 1 August 2024 (UTC)
::Ah, OK. Effectively, the arithmetic builds the computation tree, but it is opaque for the user. I guess that the treatment of leaves in the tree is also different because there is no special constant for <math>\pi</math>; <math>\pi</math> is just another function. Korektysta (talk) 04:49, 2 August 2024 (UTC)
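To make the "<math>\pi</math> is just another function" point concrete, a toy sketch of the representation being described (illustrative only; an actual implementation such as CoRN works with exact rationals and correctness proofs rather than doubles):

<syntaxhighlight lang="c">
#include <stdio.h>

struct creal;
typedef double (*approx_fn)(const struct creal *x, int k);

typedef struct creal {
    approx_fn approx;              /* returns an approximation within 2^-k  */
    const struct creal *a, *b;     /* operands, if the number is a sum etc. */
} creal;

static double approx_pi(const creal *x, int k) {
    (void)x; (void)k;
    return 3.14159265358979323846;  /* toy: a fixed constant is within 2^-k for small k */
}

static double approx_add(const creal *x, int k) {
    /* ask each operand for one extra bit so the sum stays within 2^-k */
    return x->a->approx(x->a, k + 1) + x->b->approx(x->b, k + 1);
}

int main(void) {
    creal pi  = { approx_pi,  NULL, NULL };
    creal sum = { approx_add, &pi,  &pi };   /* pi + pi: the adder never "knows" it is pi */
    printf("pi + pi ~ %.10f\n", sum.approx(&sum, 30));
    return 0;
}
</syntaxhighlight>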