Abstract
Two generalizations U( + )y() = U()y( + ) (, , A) of the functional equation of the mean sun are studied, where (A, +) is an Abelian group, K is a field, n is a positive integer, and both y: A K^{n} and U: A GL(n, k) (or U: A M_{n}(k) in the second case) are unknown functions, which will be determined by the equation. 
Local solar time is measured by a sundial. When the center of the sun is on an observer’s meridian, the observer’s local solar time is zero hours (noon). Because the earth moves with varying speed in its orbit at different times of the year and because the plane of the earth’s equator is inclined to its orbital plane, the length of the solar day is different depending on the time of year. It is more convenient to define time in terms of the average of local solar time. Such time, called mean solar time, may be thought of as being measured relative to an imaginary sun (the mean sun) that lies in the earth’s equatorial plane and about which the earth orbits with constant speed. Every mean solar day is of the same length.^{1}
In [1, 4] it is shown that the mean sun satisfies the functional equation
 (1) 
Then M(, )y(s) is the direction from the earth to the sun expressed in a local coordinate system on the surface of the earth in the point of longitude and latitude .
In the present paper we investigate generalizations of equation (1) for fixed . To be more precise, first we will solve the following functional equation
 (2) 
We will mainly deal with problems of the second and third kind.
In Theorem 6 we describe in an appropriate system of coordinates the structure of the space S_{U } of all solutions of (2) for a given U: A GL(n, K). We also state in this theorem how such a mapping U necessarily looks if a nontrivial solution y (i.e. y0) exists. A similar description of Uinvariant subspaces of S_{U } is given in Theorem 8. We emphasize that by our result (and similarly by the following theorems) the problem of solving (2) can be reduced, at least to some extent, to the problem of finding all exponential functions U_{11}: A GL(k, K) (cf. the representation of U in Theorem 6), i.e. non singular matrices U_{11}() satisfying the equation
Here we assume that these functions are known and we refer the reader to [3].
In Theorem 9 we construct to a given subspace S^{0} of K^{n} the set of all mappings U and correspondingly the space S of all functions y, such that (U, y) satisfies (2) and S^{0} is exactly the set of all initial values y(0) for y S. It is clear that this yields together with Theorem 6 an implicit description of the set of all solutions (U, y) of (2) by varying the subspace S^{0} of K^{n}. However, the space S and the mapping U obtained in this way from S^{0} may have the property that S is a proper subset of S_{U }. Therefore we also deal with the problem to characterize the situation when S = S_{U }.
From a mathematical point of view it seems also interesting to study the functional equation (2) for mappings U: A M_{n}(K). This situation is more complicated both with respect to the technical details and the construction (description) of the solutions U or y or (U, y). In Theorem 20 we start from a given mapping U: A M_{n}(K) and describe completely in appropriate coordinates the set of all functions y: A K^{n}, such that (U, y) is a solution of (2). Again this theorem provides necessary conditions on U for the existence of nontrivial solutions y of (2). We also show in Theorem 21 how to construct all mappings U: A M_{n}(K) and corresponding spaces S of functions y: A K^{n}, such that is a given subspace S^{0} of K^{n} and (U, y) is a solution of (2), hence giving an implicit description of the general solution of (2) by varying S^{0}. However, we were not able to contribute to the problem when S = S_{U }.
The main difficulties in this last part seem to arise from the fact that there can occur solutions y (to a given U) with y(0) = 0 but y0 (cf. Lemma 18).
Here in this part we always assume that U is a mapping from the abelian group A to GL(n, K).
Lemma 1. Let B, C be matrices in GL(n, k). Then (U, y) is a solution of (2) if and only if (V, By) is a solution of (2), where V () = CU()B^{1}.
For C = U(0)^{1} we get CU(0) = I n, the identity matrix. Hence, without loss of generality we will always assume that U(0) = In.
It is also possible to reverse the statement of Lemma 2.
Lemma 3. Assume U(0) = In and let y be given by (3). If (U, y) satisfies (4), then (U, y) is a solution of (2).
For any mapping U: A GL(n, k) let
Some basic properties of these two sets are collected in the following
Lemma 4. Both S_{U } and S_{U}^{0} are Klinear spaces and : S_{U}^{0} S_{U }, given by (y^{0}) := U(^{.})y^{0}, is a vector space isomorphism.
In conclusion, both S_{U } and S_{U}^{0} are mdimensional linear spaces for some 0 < m < n.
There are some more interesting properties of S_{U } and S_{U}^{0}.
If denotes a basis of S_{U}^{0}, then there exists a matrix B GL(n, k), such that Bb_{i} = e_{i}, the ith unit vector in K^{n}. Applying this matrix B as a coordinate transformation on K^{n} as in Lemma 1 we get that S_{UB1}^{  1} = <e_{ 1}, ..., e_{m}>, the mdimensional linear space generated by the first m unit vectors in K^{n}. Thus without loss of generality we may assume that S_{U } = <e_{1}, ..., e_{m}>.
Theorem 6. Let U: A GL(n, K) be a mapping, such that S_{U}^{0} = <e_{1}, ..., e_{m}> and U(0) = In. Then U() can be partitioned as a block matrix of the form
where U_{11}() GL(m, K), U_{22}() GL(n  m, K) and U_{12}() M_{m,nm}(K). These matrices satisfy the boundary conditions U_{11}(0) = Im, U_{22}(0) = Inm and U_{12}(0) = (0)m,nm, the zero matrix. Moreover, U_{11} is an exponential function, i.e. U_{11}( + ) = U_{11}()U_{11}() for all , A.
Each y S_{U } can be expressed as
and (0) K^{m}.
 (5) 
which means that U_{11}( + ) (0) = U_{11}()U_{11}() (0) for all (0) K^{m} and , A, so that U_{11} is an exponential function.
We are also interested in subspaces of S_{U }. First we present a generalization of Lemma 5.
Lemma 7. Let S be a subspace of S_{U }. Then the following statements are equivalent:
To each y^{0} S^{0} there exists y S, such that y^{0} = y(0). Under the assumption 2, the function z() := y( + _{0}) belongs to S for any _{0} A. So z(0) S^{0} and z(0) = y(_{0}) = U(_{0})y(0) = U(_{0})y^{0} by (3). Thus we proved that 2 implies 3.
In order to close the cycle of implications take y S. Then y(0) S^{0}. For arbitrary _{0} A also U(_{0})y(0) belongs to S^{0}. Hence, there exists z S such that z(0) = U(_{0})y(0). Taking into account that S is a subspace of S_{U } we can write z as z() = U()z(0) = U()U(_{0})y(0) = U( + _{0})y(0) = U(_{0})U()y(0) = U(_{0})y() by (3), (4), (4) and (3). Thus U(_{0})y S.
A generalization of Theorem 6 is
Theorem 8. Let S be a kdimensional Uinvariant subspace of S_{U }. Then there exist coordinates in K^{n}, such that S^{0} = <e_{ 1}, ..., e_{k}>, and U() is a block matrix of the form
 (6) 
for K^{k}.
So far we described solutions (U, y) of (2) when the mapping U was given. Now we will assume that a linear subspace S^{0} of K^{n} is given and we describe all solutions (U, y) of (2), such that S_{U}^{0} = S^{0}. Let S^{0} be a kdimensional Uinvariant subspace of K^{n}, then without loss of generality S^{0} = <e_{ 1}, ..., e_{k}>.
Theorem 9. Let S^{0} = <e_{ 1}, ..., e_{k}> be a subspace of K^{n}, and let U_{ 11}() GL(k, K), U_{22}() GL(n  k, K) and U_{12}() M_{k,nk}(K), such that U_{11}(0) = Ik, U_{22}(0) = Ink, U_{12}(0) = (0)k,nk. Moreover U_{11} is assumed to be an exponential function. Then
is a Uinvariant subspace of S_{U }, where U is given by (6).
When does S = S_{U } hold?
Lemma 10. The two spaces S and S_{U } coincide if and only if for all K^{nk} \ there exists (_{0}, _{0}) A^{2}, such that
 (7) 
which finishes the proof.
Now we are going to present several examples for the situation S = S_{U }, i.e. by Lemma 10 examples, where condition (7) is satisfied. Here we always assume that A = K. First we will deal with the second line of condition (7). Secondly, if this condition is not satisfied by all , then let V denote the set
Thus V is an rdimensional subspace of K^{nk} for 0 < r < n  k. In order to satisfy the requirements of Lemma 10 in this situation as well, the first line in (7) must be satisfied for V .
Now we describe some examples how to construct U_{22}: K GL(s, K) for s < n  k, such that
 (8) 
Now we describe examples how to construct U_{12}: K M_{k,nk}(K), such that
 (9) 
where the upper part is Ir and the lower part is a 0matrix of the dimension (k  r) × r. Then for _{0} = _{0} = 1 we get that U_{11}(1)U_{12}(1) + U_{12}(1)U_{22}(1)  U_{12}(1 + 1) = U_{12}(2) and it is obvious that U_{12}(2)0 for all K^{r} \ .
For k < r one possible way to proceed is indicated in
Lemma 11. If there are enough elements in K, to be more precise, if > 2 + 1, then it is always possible to find _{0} and _{0} satisfying (9).
If q > 1 and is big enough, then there exists _{2} K \ and we assume that
Going on like this we can find elements _{1}, ..., _{q} K and matrices U_{12}(±_{i}). If s > 0 and is big enough, then there exists _{q+1} K \ and we assume that
When K^{r} \ , then there exists 1 < i < r, such that _{ i}0. Hence, there exists j , such that (j  1)k < i < jk. For _{0} = _{j} and _{0} = _{j} we have U_{11}(_{j})U_{12}(_{j}) + U_{12}(_{j})U_{22}(_{j})  U_{12}(0) = 2U_{12}(_{j})U_{22}(_{j}). According to the choice of i and j it is clear that (9) is satisfied.
This is a very general result, but it is not the best result which is possible.
then for _{0} = 1 and _{0} = we get U_{11}(1)U_{12}() + U_{12}(1)U_{22}()  U_{12}( + 1) = U_{12}( + 1) and (9) is satisfied. For k < r the following lemma holds:
Lemma 13. If there are enough elements in K, to be more precise, if > 2 + 2, then it is always possible to find _{0} and _{0} satisfying (9).
If s > 0 and is big enough, then there exists _{q+1} K, such that _{q+1}, _{q+1} + 1 K \ and we assume that
Given K^{r} \ there exists 1 < i < r, such that _{ i}0. Hence, there exists j , such that (j  1)k < i < jk. For _{0} = _{j} and _{0} = 1 we have U_{11}(_{j})U_{12}(1) + U_{12}(_{j})U_{22}(1)  U_{12}(_{j} + 1) = U_{12}(_{j} + 1). According to the choice of i and j it is clear that (9) is satisfied.
then (9) is satisfied. For r = 2 assume that U_{11}(1) is given as above and
then again (9) is satisfied. For k = r = 3 and for any choice of U_{11}(1), U_{22}(1) GL(3, K) of order dividing 2 the computer did not find a matrix U_{12}(1) in M_{3}(K) such that (9) is satisfied. Other cases were not studied so far.
If k > r it is not possible to satisfy (9), since there is only one possible choice _{0} = _{0} = 1, which determines exactly one matrix U_{11}(1)U_{12}(1) + U_{12}(1)U_{22}(1). This matrix describes a homomorphism from K^{k} to K^{r}, which has a kernel of dimension > k  r > 0.
In this part we generalize the functional equation (2) by assuming that U() is not necessarily a regular matrix, i.e. U: A M_{n}(K). Also in this situation Lemma 1 holds. When we define S_{U } and S_{U}^{0} as it was done earlier, then S_{U } and S_{U}^{0} are Klinear spaces (cf. Lemma 4). Again S_{U}^{0} is an mdimensional subspace of K^{n} for 0 < m < n, and S_{ U } is invariant under translations, and S_{U}^{0} = (cf. Lemma 5). Without loss of generality we can assume (as in the earlier case) that there exists a basis of K^{n}, such that S_{U}^{0} = <e_{1}, ..., e_{m}>.
Since U(0) need not be a regular matrix, we do not get the results of Lemma 2, and in general there is no isomorphism between S_{U } and S_{U}^{0}.
For = = 0 or = 0 we derive from (2)
If U() is partitioned as in (5) and y() is written as for () K^{m}, then from (10) we get
which leads to the system of equations
 (12) 
Lemma 15. Let (U, y) be a solution of (2). Then there exists a system of coordinates of K^{n}, such that
 (13) 
 (14) 
and
Without loss of generality assume that U = V . From the second line of (12) we deduce that 0 = 0 () = U_{21}() (0) for all A. Since (0) can arbitrarily be chosen in K^{m}, it is clear that U_{21}() = (0)nm,m for all A.
Since S_{U}^{0} = <e_{1}, ..., e_{m}>, there exist y_{1}, ..., y_{m} S_{U }, such that y_{j}(0) = e_{j}, the jth unit vector in K^{n}, for 1 < j < m. Let S_{ U }' := <y_{ 1}, ..., y_{m}>, then S_{U }' is an mdimensional subspace of S_{U }. In order to prove this, it is only necessary to show that y_{1}, ..., y_{m} are linearly independent. Let _{1}, ..., _{m} K, such that _{ i = 1}^{m}_{i}y_{i} = 0, then also _{ i = 1}^{m}_{i}y_{i}(0) = 0, which implies _{ i = 1}^{m}_{i}e_{i} = 0, so that _{1} = ... = _{m} = 0.
For y S_{U }' there exist uniquely defined _{ 1}, ..., _{m} K such that y = _{ i = 1}^{m}_{i}y_{i}. These _{i} can be read from y(0), since y(0) = _{ i = 1}^{m}_{i}e_{i}.
Define the m × mmatrix Y () corresponding to the chosen y_{1}, ..., y_{m} by
 (15) 
 (16) 
These equations are collected to the matrix equation
 (17) 
Again these equations can be collected for j = 1, ..., m and we derive
 (18) 
 (19) 
such that Y _{11}() is a k × kmatrix. We note that the “auxiliary” matrix function Y : A M_{m}(K), which will help us to describe the space S_{U } of solutions y (for given U), is in general not uniquely determined. However, from (17), from the decomposition of U_{11}(0) in Lemma 15 and the corresponding decomposition of Y () we see that Y _{11}() and Y _{12}() are uniquely determined by U_{11}(), namely
 (20) 
and we end up with the system of equations
 (21) 
 (22) 
 (23) 
Then it is possible to find a matrix B'' GL(m  k, K), such that the coordinate transformation on K^{m} induced by
satisfies
Let B be the corresponding coordinate transformation on K^{n}
If U is decomposed as in (13) and (14), then also UB has this property.
Without loss of generality we assume that the basis of K^{n} was chosen in such a way that Lemma 15 is satisfied and that and are a basis of V or W respectively. Then it is useful and important to partition Y () further as a 3 × 3 block matrix of the form
such that Z_{11}() = Y _{11}() M_{k}(K), Z_{22}() M_{mkr}(K) and Z_{33}() M_{r}(K). Hence
Let x = denote a vector in K^{mk}, where K^{mkr} and K^{r}. Then x belongs to W if and only if = 0. Moreover Y _{12}()_{W } = 0 for all A, which means Z_{13}() = (0)k,r for all A. From the definition of W it is clear that Z_{12}() = 0 for all A is equivalent to = 0.
The first line of (21) reads now as
From the second line of (21) we derive
and
Hence, each column of Z_{23}() is 0 K^{mkr}, so that Z_{ 23}() = (0)mkr,mkr for all A.
From (22) we deduce
Let M denote the matrix between the two braces [ and ], then each column of M is 0 K^{mkr} and consequently M = (0) mkr,k. Hence, we proved that
The same way we deduce from (23) that
and correspondingly
This finishes the proof of
Theorem 16. There exists a coordinate system of K^{m}, such that Y () is a solution of (19) if and only if Y () can be written as
where
is an exponential function, Z_{11}() M_{k}(K), Z_{22}() M_{mkr}(K), Z_{33}() M_{r}(K), satisfying the conditions Z_{11}(0) = Ik, Z_{22}(0) = Imkr, Z_{33}(0) = Ir, Z_{12}(0) = (0)k,mkr, Z_{21}(0) = (0)mkr,k, Z_{31}(0) = (0)r,k and Z_{32}(0) = (0)r,mkr. For 0 the matrices Z_{31}(), Z_{32}(), Z_{33}() can be arbitrarily chosen.
Next we describe the structure of S_{U } in more details.
Lemma 17. For each y S_{U } with y(0)0 there exists a subspace S_{U }' of S_{ U }, such that y S_{U }'.
Let N(S_{U }) denote the set . Then N(S_{U }) is a subspace of S_{U }. The appearance of this subspace N(S_{U }) of S_{U }, which is in general not , is one of the main differences to the case of mappings U: A GL(n, K). We will see that N(S_{U }) is closely related to the space W . This is described in
so that () = 0 for all A. For = 0 we derive
so that Y _{12}() () = 0 for all , A. This however implies that () W for all A.
Assuming conversely that (0) = 0, () = 0 and () W for all A, then it is obvious that z N(S_{U }).
In conclusion we get the following result:
Lemma 19. Let S_{U }' be an mdimensional subspace of S_{ U } (constructed as above), then S_{U } = S_{U }' N(S_{ U }).
We notice that in the decomposition S_{U } = S_{U }' N(S_{ U }) the space S_{U }' is in general not uniquely determined, whereas N(S_{U }) is unique by definition. However, S_{U }' can be any mdimensional subspace of S_{U }, such that the space of initial values y(0) (for y S_{U }') is already S_{U}^{0}.
As an immediate consequence we get
The following Theorem 20 yields together with Theorem 16 the structure of the space S_{U }, of all solutions of (2), for a given function U: A M_{n}(K). These theorems also contain necessary conditions on U in order to admit a nontrivial solution y.
Theorem 20. Let U: A M_{n}(K) be given and assume that dim S_{U}^{0} = m. Then there exist coordinates in K^{n} and solutions (U, y_{ j}) of (2) for j = 1, ..., m, such that y_{j}(0) = e_{j}. Moreover, U() can be written as in (13) and U_{11}() satisfies (20). Y _{11}() and Y _{22}() are the blocks in the first row of the matrix Y () given by (15). This matrix is also a solution of (19) and each element y of S_{U } can be expressed as y() = for given by (16) with arbitrary (0) K^{m} and () given by (24).
We finish by Theorem 21, which provides a construction of all solutions (U, y) of (2) by starting from an arbitrary subspace of initial values y(0) of K^{n}. This choice then leads via the block matrix Y () satisfying (19) to a matrix valued function U and a space S of solutions corresponding to U. In this general situation we do not discuss the problem when S = S_{U }.
Theorem 21. If Y () satisfies (19), U_{11}() is given by (20), is given by (16) for arbitrary (0) K^{m} and () given by (24), then (U, y) is a solution of (2) for
We can rewrite U_{11}(0)[Y ( + )Y ()  Y ()Y ( + )] as U_{11}(0)[Y ( + )Y ()  Y ( + + ) + Y ( + + )  Y ()Y ( + )] = U_{11}(0)[Y ( + )Y ()  Y ( + + )] + U_{11}(0)[Y ( + + )  Y ()Y ( + )], which is equal to (0)m,m since (19) holds.
In order to determine all solutions (U, y) of (2) we start with an arbitrary mdimensional subspace S^{0} of K^{n} for some 0 < m < n. Let be a basis of S^{0}, then there exists a matrix B GL(n, K), such that Bb_{i} = e_{i} for 1 < i < m. Hence BS^{0} = <e_{ 1}, ..., e_{m}>. For each solution Y () of (19) described in Theorem 16 let U_{11}() be given by (20) and U() be given by (13) with arbitrary matrices U_{12}() and U_{22}(). Then each element y of
is together with U a solution of (2). Due to this construction T ^{0}, the space of initial values y(0) for y T , is equal to <e_{1}, ..., e_{m}>. According to Lemma 1 each pair (UB, B^{1}y) for y T is a solution of (2) and = S^{0}. Hence, by varying S^{0} over all subspaces of K^{n} we determine all solutions (U, y) of (2).
[1] H. Fripertinger and J. Schwaiger. Some applications of functional equations in astronomy. Grazer Mathematische Berichte, 344 (2001), 16.
[2] R. Lidl and H. Niederreiter. Finite Fields, volume 20 of Encyclopedia of Mathematics and its Applications. AddisonWesley Publishing Company, London, Amsterdam, Don Mills  Ontario, Sydney, Tokyo, 1983. ISBN 0201135191.
[3] M.A. McKiernan. The matrix equation a(xoy) = a(x)+a(x)a(y)+a(y). Aequationes Mathematicae, 15 (1977), 213223.
[4] J. Schwaiger. Some applications of functional equations in astronomy. Aequationes Mathematicae, 60 (2000), p. 185. In Report of the meeting, The Thirtyseventh International Symposium on Functional Equations, May 1623, 1999, Huntington, WV.
HARALD FRIPERTINGER
