From:           nick@phoenix.Princeton.EDU (Nicolau C Saldanha)
Newsgroups:     sci.math
Subject:        Re: sin(x), x a surreal number.
Date:           22 Apr 88 18:33:48 GMT
Organization:   Princeton University, NJ

In article <484@osupyr.mast.ohio-state.edu> gae@osupyr.mast.ohio-state.edu.UUCP (Gerald Edgar) writes:
>In article <Apr.20.18.10.39.1988.20573@topaz.rutgers.edu> clong@topaz.rutgers.edu (Chris Long) writes:
>>
>>How would you define sin(x), x a surreal number?  What is sin(omega)?
>>What is sin(epsilon)?
>>
>In "On Numbers and Games", page 43, Conway discusses this.  His method will
>define  sin(x)  as long as |x| < some finite integer.  He adds:
>"there does not seem to be any reasonable extension enabling us to define
>any of these functions outside this 'bounded' region."
>
>On the other hand, his dismissing of the possibility of a "limit" on the
>same page is nonsense.  The usual "epsilon-delta" definition, where now
>epsilon and delta are positive surreals makes perfectly good sense.
>In order for a definition of sin(x) to be reasonable, it should be
>twice differentiable (using these limits to define "differentiable") and equal
>to the negative of its second derivative, and agree with the usual sin(x) for
>real x.  Is there such a function?  Is it unique?
>

Actually, there is a ``reasonable'' extension of all elementary
functions to all surreals and surcomplexes ( surcomplexes are obtained
just by adjoining i = sqrt(-1) ).

This comes from a ``reasonable'' definition of integrating a
differential equation.  The trouble is that there is no nice general
theory for this, but it works fine in the relevant particular cases.

In this way, we get definitions of log, exp, sin, etc.
These definitions will be recursive, in the same spirit as the
definitions of multiplication, division and the like.
As some examples:

exp(\omega) = (\omega)^(\omega)

( The exp on the left-hand side is the ``analytic'' exp; the right-hand
side is the Cantor-ordinal-style power (\omega)^x )

exp(i* \omega) = 1

and therefore

sin(\omega) = 0

As to the idea of bringing derivatives into the game, it doesn't really
work, roughly because the class of surreals is ``disconnected'',
unlike the reals: the surreal line has ``gaps''.
The following function is therefore ``C^\infty'':

f(x) = 0 , for x < (gap); f(x) = 1 , for x > (gap)
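
For concreteness ( my choice of gap, not one singled out above ), take the
gap between the finite and the infinite surreals:

    f(x) = 0 for all finite x ;   f(x) = 1 for all infinite x

Every surreal point has a whole neighbourhood on its own side of this gap,
so the function is locally constant; and every difference quotient
( f(y) - f(x) ) / ( y - x ) with x finite and y infinite is infinitesimal.
No epsilon-delta limit ever detects the jump.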

The epsilon-delta definition of limit is of course consistent, but not
very interesting.  Your idea of how to extend sine is *very*
``non-unique''.

By the way, I learned this stuff from Conway ( the original creator of
this theory ).  His book ``Games and Numbers'' is an excellent introduction,
but this material is not covered there nor do I know any references.

As a last remark for those ignorant about what surreal numbers are,
they are a Big extension of the real numbers, including infinities
of several sizes ( all ordinals and cardinals are there ).

-- 
                    Nicolau Corcao Saldanha

From:           nick@phoenix.Princeton.EDU (Nicolau C Saldanha)
Newsgroups:     sci.math
Subject:        Surreal numbers.
Summary:        About exp, sin, log, etc.
Date:           27 Apr 88 17:44:53 GMT
Organization:   Princeton University, NJ

>> Actually, there is a ``reasonable'' extention of all elementary
>> functions for all surreal and surcomplexes ( surcomplexes are defined
>> just by adjoining i = sqrt(-1) ) .
>
>> exp(i* \omega) = 1
>> and therefore
>> sin(\omega) = 0
>
>I'd like to see this "reasonable" definition; it looks to me like
>you are either implicitly or explicitly bringing some form of summation
>in (C_1 sum, perhaps?).  I personally think no "reasonable" extension
>of sine to the non-finite surreals exists; however, my definition
>of what is "reasonable" does not appear (or have to) agree with yours.

Unfortunately, I know of no written reference to this material.
I will do my best to reproduce here what I learned from Conway.

But first some warnings:

- There will be NO talk about continuity or differentiability of 
  surreal functions.  In fact, this approach seems inadequate to
  me, due to the existence of gaps in the ``surreal line''.

- Notice that series can NOT be summed using epsilon-delta definitions,
  since that would make all non-trivial series diverge by falling
  into a gap.  For instance, we would not have 1/2 + 1/4 + 1/8 +... = 1,
  since 1 - 1/omega is still larger than any partial sum.
  When we write such things as 1 + omega^(-1) + omega^(-2) + ...
  as the Cantor-Conway normal form of a surreal number, this ``series''
  is NOT to be understood in the epsilon-delta sense, lest we get a
  divergent series.

  ( By the way, 0.999... = 1 for the surreals ( since it holds for the
    reals ) in the ``only'' meaningful sense the expressions have;
    again, epsilon-delta definitions are out. )

- This does not explicitly use any form of summation like Cesaro's,
  nor implicitly, as far as I can tell ( I could be wrong ).

- This DOES use the technique of defining things inductively, defining
  x by means of x_L and x_R.  This follows the same spirit as the
  definitions of x + y, x * y, 1/x and the like; a small sketch of this
  style of definition, for x + y, follows.
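
  As a toy illustration of this inductive style ( a sketch of my own, in
  Python, with made-up names, not anything from Conway ), here is the
  recursive sum acting on finite { L | R } forms:

        # Toy representation of a surreal form as a pair of option tuples.
        class Surreal:
            def __init__(self, left=(), right=()):
                self.L = tuple(left)     # left options
                self.R = tuple(right)    # right options
            def __repr__(self):
                return "{ %s | %s }" % (", ".join(map(repr, self.L)),
                                        ", ".join(map(repr, self.R)))

        def plus(x, y):
            # Conway's recursive sum:
            # x + y = { x_L + y, x + y_L  |  x_R + y, x + y_R }
            left  = ([plus(xl, y) for xl in x.L] +
                     [plus(x, yl) for yl in y.L])
            right = ([plus(xr, y) for xr in x.R] +
                     [plus(x, yr) for yr in y.R])
            return Surreal(left, right)

        zero = Surreal()              # 0 = { | }
        one  = Surreal(left=[zero])   # 1 = { 0 | }
        print(plus(one, one))         # a form of 2; both left options are forms of 1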

And now a most crucial warning:

- The ( usual ) extensional notion of a function is NOT adequate here.
  Two functions assuming the same values at each and every surreal number
  must be considered DIFFERENT if the left and right options are not the
  same.  As an example, the functions:

        f(x) = {|};

  and

        g(x) = { g(x_L) - 1 | g(x_R) + 1 };

  are both constantly equal to zero.  Usually, we would say that they are
  the same function  ---  this is the extensional notion of what a function
  is: a function is known if its extension, i.e., the values it assumes,
  is known.  Here, however, we have to adopt a radically different
  point of view: in order to know a function, we have to know how it is
  defined, and different definitions give different functions even if
  the values coincide.  In this sense, f and g are DIFFERENT.

  Another way of looking at the situation, less radical but less
  satisfactory, is to think of functions as having ``good'' and ``bad''
  definitions.  The above definition of f is probably ``good'', but
  g is almost certainly ``bad''.  ( A small sketch contrasting f and g
  appears just after this warning. )

  This means we have to be careful about certain things we usually
  take for granted.  For instance, in a minute we will be defining log
  as the integral of 1/x.  In order to do this, the following definition
  of 1/x is not satisfactory:

        1/x = y     iff     xy = 1;

  We now need a recursive, ``constructive'' definition of the form:

        1/x = { ( stuff depending on x, x_L and x_R ) |
            ( other stuff ) };

  Such a definition is possible ( but not very easy ) to obtain.

  Another danger: once we have log, we can't simply invert it to get
  exp.  That would give us the values of exp, but not a definition we
  could use later if we want to use exp to build new functions.
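
  Continuing the toy Python sketch above ( it assumes the Surreal class,
  plus, zero and one defined there; again, everything here is my own
  illustration ), the two ``different'' constant-zero functions f and g
  look like this:

        neg_one = Surreal(right=[zero])   # -1 = { | 0 }

        def f(x):
            return Surreal()              # f(x) = { | } : always the zero form

        def g(x):
            # g(x) = { g(x_L) - 1 | g(x_R) + 1 } : zero in value at every x,
            # but the form it returns depends on the form of x.
            return Surreal([plus(g(xl), neg_one) for xl in x.L],
                           [plus(g(xr), one) for xr in x.R])

        print(f(one))   # { | }
        print(g(one))   # a form of 0 whose one left option is a form of -1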

Now for the real action. We will adopt the convention:

    f(x) = { f_L(x,x_L,x_R) | f_R(x,x_L,x_R) };

What follows is a definition of integration.

$
        \int_a^b f(t) dt = 

    {
        \int_a^{b_L} f(t) dt + \intd_{b_L}^b {f_L}(t) dt ,      
        \int_a^{b_R} f(t) dt + \intd_{b_R}^b {f_R}(t) dt ,      
        \int_{a_R}^b f(t) dt + \intd_a^{a_R} {f_L}(t) dt ,      
        \int_{a_L}^b f(t) dt + \intd_a^{a_L} {f_R}(t) dt 
    |       
        \int_a^{b_L} f(t) dt + \intd_{b_L}^b {f_R}(t) dt ,      
        \int_a^{b_R} f(t) dt + \intd_{b_R}^b {f_L}(t) dt ,      
        \int_{a_R}^b f(t) dt + \intd_a^{a_R} {f_R}(t) dt ,      
        \int_{a_L}^b f(t) dt + \intd_a^{a_L} {f_L}(t) dt 
    }
$

I used \TeX notation for integrals, subscripts and superscripts.
``\intd'' should be written as an integral sign with a capital `D'
over it, in the middle.  It denotes direct integration, i.e. integration
that does not chop the domain into pieces.  Notice that some integrations
in the above definition go from right to left, which means you
have to do the usual change of sign.
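
Explicitly, the orientation convention is the usual one ( not spelled out
above, but it is the standard choice ):

    \int_c^b f(t) dt  =  - \int_b^c f(t) dt        whenever c > b;

so a term such as \intd_{b_R}^b picks up a minus sign, since b_R > b.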

So now you know what log is! You define:

    log(x) = \int_1^x 1/t dt;

This definition is good for any positive surreal number and satisfies
all the usual properties.

A similar definition can be found for the ``solution of the differential
equation''

    dy/dt = g(t,y);

( Notice we are not *really* talking about derivatives )

from which definitions of exp , sin and cos can be obtained.
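
( Presumably, and this is my guess at the intended equations rather than
anything stated here, the relevant instances of dy/dt = g(t,y) are the
usual ones:

    dy/dt = y ,  y(0) = 1                              for exp;
    ds/dt = c ,  dc/dt = -s ,  s(0) = 0 ,  c(0) = 1    for sin and cos. )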

I will write down the definition later if someone wants me to, but I 
think it is obvious if you understand the above definition of
integration.

I am not really sure of this, but I think the gamma, Bessel, zeta and
other functions can be handled about as easily.

Exercise: Prove the Riemann Hypothesis for the surcomplex numbers.
                        ( :-] )

Intuitively, however, it is relatively easy to see how to define
trigonometric functions for all surreals. Just say that TWOPI
is a period in the following surreal sense:

        sin( x + TWOPI*n ) = sin(x);

for any surreal x and any n in the class Oz of surreal integers.

( Definition:

    n is in Oz iff n = { n-1 | n+1 }        )

The above definitions have this property.  Notice that omega/TWOPI
is an omnific integer ( being purely infinite ), and therefore omega
is a period.

Very roughly, the reason this works is the following: up to day omega
( exclusive ) you have been working only with finite numbers.  When
the time comes to define cos(omega) and sin(omega) you still have NO
information about their values.  What you do is of course pick the
simplest coherent answer ( this is what you do all the time with
surreal numbers ), which here is
cos(omega) = 1 and sin(omega) = 0.
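
For the finite, dyadic part of the surreal line the ``simplest coherent
answer'' rule can be made completely concrete.  Here is a toy Python
sketch of my own ( the function name is made up ) returning the
earliest-born dyadic rational strictly between two bounds:

    from fractions import Fraction
    from math import floor, ceil

    def simplest_between(lo, hi):
        """Earliest-born (simplest) dyadic rational x with lo < x < hi."""
        lo, hi = Fraction(lo), Fraction(hi)
        assert lo < hi
        if lo < 0 < hi:
            return Fraction(0)               # 0 is the simplest number of all
        if lo >= 0 and floor(lo) + 1 < hi:
            return Fraction(floor(lo) + 1)   # smallest integer that fits
        if hi <= 0 and lo < ceil(hi) - 1:
            return Fraction(ceil(hi) - 1)    # integer closest to 0 that fits
        # No integer fits: bisect dyadically inside one unit interval.
        a, b = Fraction(floor(lo)), Fraction(floor(lo) + 1)
        while True:
            mid = (a + b) / 2
            if mid <= lo:
                a = mid
            elif mid >= hi:
                b = mid
            else:
                return mid

    # n = { n-1 | n+1 } for every ordinary integer n, as in the
    # definition of Oz above:
    assert simplest_between(2, 4) == 3
    assert simplest_between(-1, 1) == 0
    assert simplest_between(Fraction(1, 2), Fraction(3, 4)) == Fraction(5, 8)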

By the same reasoning, 

        cos(omega^r) = 1,
        sin(omega^r) = 0;

for any positive surreal number r.
( This is Cantor's exponentiation, not the analytic one )

By the way, there seems to be someone at Rutgers who is very interested
in this stuff and may want to talk about it more directly.
Send e-mail, or we might talk by telephone sometime.

-- 
                    Nicolau Corcao Saldanha

From:           gae@osupyr.mast.ohio-state.edu (Gerald Edgar)
Newsgroups:     sci.math
Subject:        Re: Surreal Numbers
Date:           27 Apr 88 12:19:20 GMT
Organization:   The Ohio State University, Dept. of Math.

In article <2681@phoenix.Princeton.EDU> nick@phoenix.Princeton.EDU (Nicolau C Saldanha) writes:
>limits are not very good for surreals: the sequence a(n) = 1 - 0.1^n
>does NOT converge to 1 *in the surreals* with the usual epsilon-delta
>definition ( try epsilon equal to any infinitesimal, say 1/omega ).
>
If epsilon is an infinitesimal, of course n must be infinite. The problem
is not with the epsilon-delta definition, if BOTH epsilon and delta are
allowed to be surreal.  For 1-0.1^n, the index set must be a Proper Class
of n's.

So sequences in the usual sense hardly ever converge, but Sequences in the
sense natural for the Surreals converge almost as well as we are used to.
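
One possible reading of this ( my formulation, not necessarily exactly
Edgar's ): a Sequence here is indexed by a proper class, say the omnific
integers Oz, and  a_n -> L  means

    for every surreal epsilon > 0 there is an index N such that
    | a_n - L | < epsilon  for all indices n > N.

An infinitesimal epsilon then simply forces an infinite N, which is why
ordinary set-indexed sequences hardly ever converge.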
-- 
  Gerald A. Edgar                               TS1871@OHSTVMA.bitnet
  Department of Mathematics                     gae@osupyr.mast.ohio-state.edu
  The Ohio State University  ...{akgua,gatech,ihnp4,ulysses}!cbosgd!osupyr!gae
  Columbus, OH 43210                            70715,1324  CompuServe

From:           jack@cs.glasgow.ac.uk (Mr Jack Campin)
Newsgroups:     sci.math
Subject:        Re: sin(x), x a surreal number.
Date:           26 Apr 88 11:03:37 GMT
Organization:   PISA Project, Glesga Yoonie

nick@phoenix.Princeton.EDU (Nicolau C Saldanha) writes:

>By the way, I learned this stuff from Conway ( the original creator of
>this theory ).  His book ``Games and Numbers'' is an excellent introduction,
                            ^^^^^^^^^^^^^^^^^ "On Numbers and Games", I think
>but this material is not covered there nor do I know any references.

>As a last remark for those ignorant about what surreal numbers are,
>they are a Big extention of the real numbers, including infinities
>of several sizes ( all ordinals and cardinals are there ).

There is a more up-to-date survey in a book by Gonshor: "An Introduction to
the Theory of Surreal Numbers", from Cambridge University Press, recently
reprinted as a reasonably priced paperback.

A final question. Surreal numbers are very pretty but what the #@$! are they
good for?

-- 
ARPA: jack%cs.glasgow.ac.uk@nss.cs.ucl.ac.uk       USENET: jack@cs.glasgow.uucp
JANET:jack@uk.ac.glasgow.cs      useBANGnet: ...mcvax!ukc!cs.glasgow.ac.uk!jack
Mail: Jack Campin, Computing Science Dept., Glasgow Univ., 17 Lilybank Gardens,
      Glasgow G12 8QQ, SCOTLAND     work 041 339 8855 x 6045; home 041 556 1878

Date:           Sun, 10 Aug 1997 16:55:17 -0400 (EDT)
From:           Ryan Chauncey Siders <rcsiders@math.princeton.edu>
To:             edgar@math.ohio-state.edu, clong@math.rutgers.edu, jack@cs.ucl.ac.uk, 
                eppstein@paris.ics.uci.edu
Subject:        surreal number posts to Sci.Math in 1988

Hi.  I'm trying to follow up on the discussion that occurred on sci.math
in the spring of 1988 about functions of surreal numbers.  I'd like to
know: 

    where/how can I learn more about surreal function theory? 
    are you still interested in this problem?
    do you know other people who are/were interested in 
        surreal numbers? 

Nicolau C Saldanha wrote:

>By the way, I learned this stuff from Conway ( the original creator of
>this theory ). 

Conway doesn't remember giving a course on surreals at Princeton around 
this time... were these private communications from Conway?  What format 
are they in?  Could I get you to send me a copy of them, Nick? 

Hmm... I'm more interested in catching up on the discussion than 
presenting new ideas, but here are two substantive ideas, so that I'll seem 
legitimate:

1. The definition Nick gives for integration is now believed by its 
inventors to be inadequate.  For instance, we want the integral of the 
exponential function, taken over the interval (0, omega), to be 
exp(omega) - exp(0).  No intrinsic definition of both EXP and INTEGRATION
for which this is true is yet known.

>                \int_a^b f(t) dt = 
>        {
>               \int_a^{b_L} f(t) dt + \intd_{b_L}^b {f_L}(t) dt 
>               \int_a^{b_R} f(t) dt + \intd_{b_R}^b {f_R}(t) dt 
>               \int_{a_R}^b f(t) dt + \intd_a^{a_R} {f_L}(t) dt 
>               \int_{a_L}^b f(t) dt + \intd_a^{a_L} {f_R}(t) dt 
>        |               
>               \int_a^{b_L} f(t) dt + \intd_{b_L}^b {f_R}(t) dt 
>               \int_a^{b_R} f(t) dt + \intd_{b_R}^b {f_L}(t) dt 
>               \int_{a_R}^b f(t) dt + \intd_a^{a_R} {f_R}(t) dt 
>               \int_{a_L}^b f(t) dt + \intd_a^{a_L} {f_L}(t) dt 
>        }

2.  The issue being contended in
http://www.ics.uci.edu/~eppstein/cgt/surreal.html :

>>> Actually, there is a ``reasonable'' extention of all elementary
>>> functions for all surreals... (Nick)
>>
>>I'd like to see this "reasonable" definition ... (who?)
>...sequences in the usual sense hardly ever converge, but Sequences in the
>sense natural for the Surreals converge almost as well as we are used to.
>(Gerald Edgar)

... raises the question of what functions are to be counted as surreal.
If a function has a jump discontinuity at a gap in the surreals, is it
continuous?  E.g.: the function which is 0 for finite numbers, and omega
for infinite numbers.  We want to call this "discontinuous", but why?  
Gerald suggests an epsilon-delta argument which I don't understand.  But,
perhaps we can isolate a few assuredly continuous functions: algebraic 
curves.