Replies: 3 comments 1 reply
-
My Forth doesn't use FP at all; everything is integer and fixed-point. For example, I do 3D graphics with it.
-
I have long considered that the Gustafson "posit" approach would be ideal
for Forth implementation:
https://www.youtube.com/watch?v=RfT92SWDfrs
http://johngustafson.net/pdfs/BeatingFloatingPoint.pdf
https://www.cs.cornell.edu/courses/cs6120/2019fa/blog/posits/
https://posithub.org/docs/PDS/PositEffortsSurvey.html
https://www.youtube.com/watch?v=aP0Y1uAA-2Y
This would be a WONDERFUL step forward for 8-bit and 16-bit Forths, never
mind the 32- and 64-bit implementations ... and if you had FPGA
implementations (h/w) .. well ... might be a game changer for ML and other
numerical applications ...
Just sayin' :-)
…On Mon, 10 May 2021, Jeffrey Massung wrote (Subject: [ForthHub/discussion] Fixed-point math Forths (#97)):
I'm wondering if anyone here is aware of Forth implementations that use fixed-point instead of IEEE floats? Obviously fixed-point isn't exactly difficult and has been used for a long time historically, but I'm more interested in some specifics of the implementations that have used it.
- Like the standard, where floats are kept on a separate stack, have these Forths also kept fixed-point values on a separate stack? Obviously with IEEE floats, the floating-point stack made sense since it was kept on separate hardware. That isn't necessarily true for fixed-point operations.
- How was overflow handled?
If they aren't on a separate stack, then all of the following questions hold:
- What was done to ensure fixed-point values and integers didn't get mixed w/ +, -, etc.? Nothing (read: user's responsibility)? Tagged values? All values are considered fixed-point? ...?
- Were the words kept the same (e.g. f+, f-, etc.) or were they renamed (e.g. fp+, fp-)? Likewise, for + and - there's actually no difference between integer and fixed-point, so did they actually have separate words or not?
There's likely other questions/considerations I'm not asking b/c I don't know to ask, so any other info someone has on a system that did something unique, I'd love to know!
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Robert S. Sciuk ***@***.***
Principal Consultant Phone: 289.312.1278 Cell: 905.706.1354
Control-Q Research 97 Village Rd. Wellesley, ON N0B 2T0
-
Good points, but there is a difference between accuracy and correctness.
IEEE 754 ignores important aspects of failed calculations, and as Gustafson
points out, is not a standard but a mere "guideline".
As for an earlier POSIT thread, I must have missed it, or I would not have
posted the one to which you responded. Sorry about that. I did not
believe that I was flogging a dead horse :-D
Cheers,
Rob.
…On Mon, 10 May 2021, Alex Shpilkin wrote:
@paraplegic said:
I have long considered that the Gustafson "posit" approach would be ideal for Forth implementation:
[snip]
This would be a WONDERFUL step forward for 8 bit and 16 bit forths, never mind the 32 and 64 bit implementations ... and if you had FPGA
implementations (h/w) .. well ... might be a game changer for ML and other numerical applications ... Just sayin' :-)
I should've probably said that in the posit thread earlier, but... I mean, I can buy Gustafson's argument that posits behave in less surprising ways than both fixed- and floating-point numbers when you're just pretending the computer can do real numbers and hoping for the right answer; and lest I sound dismissive here, you should absolutely do that if (you are doing little enough computation in large enough precision that) you can afford it.
But once your bits start getting tight so that you have to worry about conditioning and stability... I don't know. Because a posit's inherent relative uncertainty depends on its magnitude (like e.g. IEEE subnormals), working with them would mean you'd essentially have to reanalyze every numerical algorithm in existence from scratch, using mathematical tools that so far do not exist (and I don't even remember any analysis for usual floating point that would cover subnormals; they are usually just dumped into the "underflow" refuse bucket). I hesitate to say that it's not possible to develop such tools (there's extensive history of people eating prodigious crow after saying things like that), and I'm definitely not saying that it's not worth it to try; but I don't expect posits to be usable in places where the actual number representation matters until numerical analysts (plural) pore over the idea for several years and try to adapt a fair number of classical algorithms to them.
In the articles I've read about posits so far (which I'm not sure cover all the articles you've linked to, so do prove me wrong here) I've seen a fair amount of discussion of how they are more accurate or more performant on singular operations, but little exploration (let alone mathematical description) of how they do on algorithms that do a lot of those operations. Not necessarily über-advanced stuff: just find some eigenvalues, solve a diffusion equation or two, things like that, but millions of multiplications, not a couple dozen.
I have to admit that I find the constant relative uncertainty of floating-point numbers quite intuitive (modern correctly-rounded ones, not the mess documented in Knuth and such) once I actually (am forced to) start thinking about the precision of my computations, and I'd expect so would anybody with a natural-sciences or engineering bent. That does not in any way mean that it's simple to program with them; even computing a correctly-rounded sum of many numbers is almost preposterously difficult. It's just that I don't expect posits to be any easier to analyze when the going gets tough: it's not sufficient to be fast or precise, you've got to be predictably precise.
So, re ML and other numerics, I don't honestly know what the ML people need; with the amount of data they're pushing around, they probably want usable linear algebra for huge matrices, but I don't know how much they care about precision. (After the crazy thing where floating-point inaccuracies were actually helpful in that they introduced necessary nonlinearities into ostensibly completely linear computations, I don't know what to think.) On the other hand, the scientific numerics people very much care about precision, because scaling their computations until either the precision or the computational power runs out is what they do; and so far posits don't seem to be helpful there.
As for the overwhelming majority of people who don't saturate their computation capability, maybe posits do help, I don't know. Are they somehow more amenable to software or hardware implementations?