2016-05-12

My book, "The Foundations of Physical Reality" (see http://foundationsofphysics.blogspot.com), is actually little more than a logical proof that modern physics is essentially nothing more than the consequences of requiring internal consistency within our explanations. That proof is based upon the fact that explanations must be expressed with a language: essentially, a finite collection of concepts designed to provide a representation of one's experiences. The problem of creating an internally consistent explanation of our experiences, without constraining the opening presumptions in any way, is examined in detail. The method makes use of a totally new way of representing one's experiences.

Our knowledge of the universe is built entirely on our personal perceptions. That these perceptions arise from our interpretations of earlier experiences is an issue seldom considered by the scientific community. Fundamentally, we must be able to identify what it is that we perceive before we can make use of those perceptions to build a mental model of them. What I am trying to point out is that we are not born knowing what our experiences signify. That is a subject we must learn as children long before we build any real knowledge of the universe.

It is interesting to note that "language" is something inherently associated with identifying our perceptions, and children are not born knowing a language. Clearly we cannot communicate our understanding without first possessing some aspects of a language. For that same reason I contend that we cannot even think about our experiences without some understanding of what one could call "language associations" with the relevant perceptions. There is an absolute necessity to identify those experiences before making any associations between them.

From that perspective, learning a language and comprehending our perceptions of reality are intimately bound to one another. That idea led me to a very interesting aspect of the phenomenon. Humanity has managed to develop many thousands of different languages to express its thoughts. Translating one language into another is not a trivial process. Even now, there exist a number of historical languages which have not yet been translated into any "modern" language.

What I am trying to point out is the fact that languages are themselves arbitrary constructs not constrained by the actual experiences themselves. The existence of secret codes is an excellent example of that inherent freedom. A secret code can represent all the meanings required to communicate any idea without allowing translation unless one has some information about the concepts being represented by that secret code.

A subtle means of avoiding the logical problems of actual "language" creation.

Clearly every human (including the most brilliant scientist who ever lived) can be seen as beginning life as a child born without a language. During his life he will experience many interactions with reality, including the many experiences central to learning the language which he will eventually use to express his understanding of reality. The total number of experiences standing behind his knowledge may be unbelievably large, but it is nonetheless finite. That means that the entire collection of his experiences (expressed via whatever language he has learned to use) can be seen as a finite collection of known facts.

The total collection of "concepts" expressible via that language could also be listed. Once such a list of concepts was constructed, each and every one could be given a specific numerical index which could be used to refer to that concept (think of that index as a secret means of referring to a specific concept). Using that collection of numerical indices, any experience could be specified via the notation [math](x_1,x_2,\cdots,x_n)[/math] where each [math]x_i[/math] is the specific numerical index of a required concept.

It should be clear to the reader that [math](x_1,x_2,\cdots,x_n)[/math] is essentially an abstract representation of a thought (or a collection of thoughts) in the scientist's personal language. If you wish, you can see it as a secret code for those thoughts understood only by the creator of that "list" of concepts. The important issue here is that there can exist no thought conceivable by that scientist which cannot be expressed by the simple notation [math](x_1,x_2,\cdots,x_n)[/math]. This trivial expression could be a single comment, a sentence, a book or an entire library, as "n", the number of actual elements, has not been specified and is thus an entirely open issue.

Given the above notation, the scientist's explanation of his experiences (essentially his explanation of reality itself) can be represented by [math]P(x_1,x_2,\cdots,x_n)[/math], where P stands for the probability that he holds the specific thought represented by [math](x_1,x_2,\cdots,x_n)[/math] to be true. Note that "internal consistency" is a very simple aspect under this representation. Under the explanation being given, the truth of the specified thought is a function of the explanation and cannot change except by changing either the "thought" being represented by [math](x_1,x_2,\cdots,x_n)[/math] or the "explanation" itself.
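To make that representation concrete, here is a minimal sketch in Python (my own illustration; the concept list and the toy probability rule are entirely hypothetical and are not taken from the book):

[code]
# A toy version of the representation described above: every concept in the
# learned language gets an arbitrary numerical index, a thought is a tuple of
# such indices, and an "explanation" assigns a probability P to each thought.

concepts = ["red", "ball", "falls", "upward", "downward"]  # hypothetical list
index_of = {concept: i for i, concept in enumerate(concepts, start=1)}

# the thought "ball falls downward", expressed as (x_1, x_2, ..., x_n)
thought = (index_of["ball"], index_of["falls"], index_of["downward"])

def P(xs):
    """A toy 'explanation': the probability it assigns to the thought xs."""
    # any internally consistent assignment would do; this arbitrary rule
    # simply declares every thought involving "upward" to be false
    return 0.0 if index_of["upward"] in xs else 1.0

print(thought, P(thought))  # (2, 3, 5) 1.0
[/code]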

A rather obvious next step.

Suppose we have the list of concepts represented by the relevant language used by the individual of interest and have created the required collection of numerical indices [math]x_i[/math]. If we also know his explanation of reality, we then know the value (or the possible range of values) his explanation would assign to [math]P(x_1,x_2,\cdots,x_n)[/math] for each and every representable thought. Notice also that this expression of a "fact" expands the collection of possible thoughts to include invalid expressions (where [math]P(x_1,x_2,\cdots,x_n)[/math] would be zero) as well as thoughts which may or may not be true (where [math]P(x_1,x_2,\cdots,x_n)[/math] could have any value between zero and one).

Notice also that the order of the concepts in the original list of concepts is totally immaterial. For the sake of argument, suppose one were to take exactly that same set of concepts and arbitrarily reorder that list. We could thus assign a totally different collection of numerical indices to exactly that same set of concepts. In this case, I will refer to the new index assigned to a specific concept as "[math]z[/math]". Each and every thought of interest can then be expressed via [math](z_1,z_2,\cdots,z_n)[/math]. If, within any specific expressed thought, each and every [math]z_i[/math] refers to exactly the same concept previously referenced by [math]x_i[/math], the probability expressed by [math]P(x_1,x_2,\cdots,x_n)[/math] must be absolutely identical to [math]P(z_1,z_2,\cdots,z_n)[/math]. This fact can be seen as a direct consequence of the absolutely arbitrary nature of language itself.

There is a very interesting relationship embedded in the above observation. Since adding the same constant "a" to every index (that is, setting [math]z_i=x_i+a[/math]) is just one such arbitrary relabeling, the probability assigned to any thought cannot change under that shift. These numerical indices could thus easily be seen from a graphical perspective as points on a specified line, in which case that simple additive shift amounts to nothing more than a shift in the origin of that graphic representation:

[math]P(x_1+a,x_2+a,\cdots,x_n+a)\equiv P(x_1,x_2,\cdots,x_n)[/math].

If the two cases, [math]a=c+\Delta c[/math] and [math]a=c[/math], are examined, the above yields exactly the underlying relationship fundamental to calculus:

[math]\lim_{\Delta c\rightarrow 0}\frac{P(x_1+c+\Delta c,x_2+c+\Delta c,\cdots,x_n+c+\Delta c)-P(x_1+c,x_2+c,\cdots,x_n+c)}{\Delta c}[/math].

Note that, in the specific case being examined, the numerator must always be zero and the denominator is never zero (the limit may be zero but, in classical calculus, that limit is never actually reached). Anyone familiar with calculus will recognize that the above expression (which vanishes identically here) is the definition of the derivative of P with respect to c, the shift in the position of the origin.
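Spelling that step out (my own restatement, under the assumption that P can be treated as a differentiable function of its arguments): because the shifted and unshifted probabilities are identical for every value of the shift, the derivative with respect to c must vanish, and expanding it by the chain rule gives

[math]\frac{d}{dc}P(x_1+c,x_2+c,\cdots,x_n+c)\bigg|_{c=0}=\sum_{i=1}^{n}\frac{\partial}{\partial x_i}P(x_1,x_2,\cdots,x_n)=0[/math];

i.e., the partial derivatives of P with respect to its arguments must sum to zero.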

In fact, if [math](x_1,x_2,\cdots,x_n)[/math] is seen as a collection of points graphically plotted on an "x" axis, the above represents exactly what is commonly referred to, in a graphic representation of a set of points, as shift symmetry: a simple movement of the origin of that coordinate axis by the distance "c". This suggests that the information represented by [math](x_1,x_2,\cdots,x_n)[/math] could possibly be seen as equivalent to a graphical pattern of plotted points.

Such an interpretation should not be seen as unreasonable, as any written language can generally be seen as a collection of specific inked points on a sheet of paper (which amounts to little more than a more complex graphic representation in two dimensions).

Shifting the representation from a list of numerical indices to points on a line.

Nonetheless, in this particular case we are speaking of an arbitrary collection of numerical indices and that fact introduces at least two very specific difficulties inherent in the proposed conversion.

The first problem arises because almost all expressions used in any language use "order" as a significant aspect of the representation. In the suggested graphic representation, that order is completely lost. The "i" attached to each "[math]x[/math]" specifies the position of that particular concept in the represented thought itself and has nothing to do with the value of the specific numerical index "x": i.e., the actual point in the proposed graphical representation is the value of that numerical index, so the "order" of the elements in any given language representation of that fact is totally lost.

That problem is handled in Chapter 2 of my book by introducing a hypothetical parameter "t" to the representation. Each and every expression [math](x_1,x_2,\cdots,x_n)[/math] is replaced by a collection of expressions [math](x_1,x_2,\cdots,x_r,t)[/math] where the value of "t" specifies the order assigned to the various expressions. All concepts represented by the original fact are now represented by a collection of expressions, each of which is totally free of order information. The final result is that the parameter "t" ends up becoming totally analogous to the ordinary concept of time (in fact, that is why I call it "t"). It should be clear that exactly the same issue is embedded in all known languages: i.e., order is actually no more than the order in which the concepts are considered (they are observed at different times).

One must still handle the possibility of multiple simultaneous references to the same concept in the proposed graphical representation. If the concept referred to in the (j)th position is identical to the concept referred to in the (k)th position, the specific numerical indices [math](x_j=x_k)[/math] will plot to exactly the same point, and information will again be lost regarding the number of occurrences of that specific concept within the represented fact.

I handle this difficulty by adding a new axis orthogonal to the proposed "x" axis which I define to be the "tau" axis. Any number of occurrences of a specific x index can then be plotted to a different [math](x,\tau)[/math] point in the created graphic representation. It should be obvious that the tau position of any reference point cannot be a knowable thing, as it is not set by any interpretation of the concept index nor by the position in the represented fact. Thus my final representation contains an axis orthogonal to x where position is one hundred percent unknowable. This specific uncertainty turns out to generate exactly the same phenomena brought up in quantum mechanics.
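As a minimal sketch of that device (my own illustration, not code from the book), one can picture each occurrence of an index being handed an arbitrary tau value; the random numbers below simply stand in for a coordinate that is taken to be unknowable:

[code]
import random

def plot_fact(fact):
    """Map a fact (a tuple of concept indices) to a list of (x, tau) points."""
    # each occurrence of an index gets its own arbitrary tau value, so
    # repeated indices no longer collapse onto a single point of the x axis
    return [(x, random.random()) for x in fact]

# the fact (7, 3, 7, 5) contains the index 7 twice; without tau both
# occurrences of 7 would plot to exactly the same point on the x axis
print(plot_fact((7, 3, 7, 5)))
[/code]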

Carrying the differential relationship to a fundamental equation all explanations must obey.

Chapter 2 also includes definitions of a number of other concepts introduced in order to cover all possibilities and problems. The introduction of time above ends up yielding the concept of momentum (changing position over time is related to the partial derivative with respect to time). The absolute uncertainty of tau thus suggests quantization of momentum in the tau direction. The final result is that quantization of momentum in the tau direction yields a component in my geometric representation which has exactly the properties of "mass" as defined in classical mechanics.

Another issue introduced in chapter 2 is an open-ended collection of "hypothetical" concepts (to open up the possibility of "new knowledge") and a definition of [math]\Psi[/math] which guarantees that the product of [math]\Psi[/math] with its complex conjugate, [math]{\Psi} ^{\dagger} \Psi =P[/math], is bounded by zero and one. In addition, a "rules" term, constructed via use of a "delta" function, is introduced and explained. The net result of these definitions is that the following equation must be universally valid for all possible internally consistent explanations of absolutely any collection of relevant facts.

[math]\left \{ \sum_{i} {\vec{\alpha}}_{i}\cdot {\vec{\nabla}}_{i}  +\sum_{i\neq j} \beta_{ij}\delta({\vec{x}}_i-{\vec{x}}_j) \right \} \Psi=\frac{\partial }{\partial t}\Psi = im\Psi[/math].

Note that the definition of time in this presentation is not "what clocks measure"; it is instead the constraint that objects cannot interact except when they exist at the same time and place: i.e., the old-fashioned definition of time which existed prior to the invention of clocks. Clocks are complex entities and what they actually measure is a question which is handled in Chapter 4.

Chapter 3 is concerned with reducing the above equation to something which is soluble.

It should be noted that my result consists of a representation of the entire universe. At this point, I transform that representation into an equation constraining only a single element while directly including the influence of the rest of the universe via an interaction which I represent symbolically as [math]\overset{\Leftrightarrow}{F}[/math]. What I presume is that, whatever the solution to the rest of the universe may be, it must be included in the correct solution of the local problem. Exact representations of elements within [math]\overset{\Leftrightarrow}{F}[/math] are presented in chapter 5.

This is totally counter to the common physics procedure of simply dealing with the abstract behavior of a system by totally ignoring the rest of the universe. My procedure directly considers impacts which are not even considered in common physics: i.e., the direct impact of the rest of the universe is implicitly contained in [math]\overset{\Leftrightarrow}{F}[/math].

I also present a rather simple and straightforward means of converting the solution of the universe into an expression which consists of a solution to the "local" problem plus a specific solution to the rest of the universe. The combination yields some very interesting relationships involving the consequences of that important factor.

Also, in chapter 3, the single x axis is converted into a three dimensional space represented by (x,y,z) axes. The reasons why this is necessary revolve around allowing our own existence within the conceptual picture, and the arguments are relatively simple.

It turns out that, if we are presumed to exist in a one dimensional universe, the interactions available to an intelligent entity are severely constrained. A two dimensional representation turns out to be insufficient to relieve those constraints; however, it turns out that the problem discussed in chapter 3 can be solved by dividing those concept indices into triplets and replacing each triplet set, [math]x_i[/math], [math]x_j[/math] and [math]x_k[/math], with [math](x_i,y_i,z_i)[/math]. This simply changes our graphic representation from points on a line to points in a three dimensional space: i.e., there is absolutely no change in the information being represented in the original definition of "facts".
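A minimal sketch of that regrouping (again my own illustration, not code from the book):

[code]
def to_three_dimensions(fact):
    """Group a flat tuple of concept indices into (x, y, z) points."""
    # the regrouping neither adds nor removes information; it only changes
    # the graphic representation from points on a line to points in space
    assert len(fact) % 3 == 0, "pad the fact so its length is a multiple of 3"
    return [tuple(fact[i:i + 3]) for i in range(0, len(fact), 3)]

print(to_three_dimensions((4, 9, 2, 7, 7, 1)))  # [(4, 9, 2), (7, 7, 1)]
[/code]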

It should also be noted that the resulting space of interest, defined as (x,y,z,[math]\tau[/math]), is an ordinary four dimensional Euclidean space and curved coordinates are never required. It turns out that the curved space presumed by Einstein's theories is simply not required in the presentation I deduce.

Chapter 4, a macroscopic analysis of relativistic effects.

In chapter 4, I consider macroscopic interactions. The analysis leads directly to relativistic effects which I work out in detail. The tau axis cannot be directly observable, as positions of the relevant points in the tau direction are totally unknowable. However, movement in the tau direction is nonetheless a real solution of my fundamental equation. As already mentioned, since position in the tau direction is totally unknowable, that movement together with the uncertainty in tau requires that momentum in the tau direction be quantized (my fundamental equation has exactly the structure of a wave equation). As a consequence, dynamic momentum in the tau direction turns out to be absolutely equivalent to mass in the standard modern physics presentation.

My results are identical to the known results for special relativity and yield only some very small changes in general relativity. In my analysis, general relativistic results end up differing by an extremely small factor (my results contain a radial term which does not exist in Einstein's general relativity). Anyone familiar with general relativity should be aware of the fact that both orbital times and photon deflection are very small effects only observable in close gravitational interactions where the impact of a small radial term would be extremely difficult to resolve.

For many years I presumed that the radial term I had deduced was far too small to generate any realistic measures; however, I now suspect that this term could very well explain some of the subtle measurements obtained recently through satellite tracking data. The radial position of a satellite (through the use of the fixed speed of light) can be exactly calculated, and the velocity can be obtained from Doppler effects on the signals originating on the satellite. Apparently such measurements today indicate a minor inconsistency with modern physics. This result could be no more than a failure to include that radial term I deduce. It could also resolve some of those "missing mass" issues which have apparently appeared in modern studies of the universe and are generally referred to as "dark matter".

Chapter 5, a careful analysis of microscopic interactions.

I will leave chapter 5 to the interest of the community. What I do is use the analysis of the impact of [math]\overset{\Leftrightarrow}{F}[/math] developed in chapter 3 in order to calculate the consequences of microscopic interactions. The result of that work is to show that Maxwell's equations are approximations to the fundamental equation developed in Chapter 3. My deductions lead to a few additional terms which should be included in Maxwell's equations. The net effect is to include massive exchange elements which yield behavior quite analogous to common nuclear interactions.

Chapter 6, an analysis of the consequences of additional dimensions.

Additional dimensions were introduced in chapter 3 to solve some issues which arose. Clearly there is no reason not to introduce further "additional dimensions". In chapter 6, I look at my fundamental equation from the perspective of n dimensions, where n is equal to the number of points in the original expression of the universe with an identical "t" parameter. A careful analysis of that perspective ends up allowing a closed form solution of the fundamental equation.

The final conclusion is that absolutely any arbitrary universe representable by a pattern of fundamental concepts may be seen as exactly representable by the orientation of an n-dimensional equilateral polygon. Furthermore, evolution may be represented by rotation of that polygon within that n-dimensional universe, and such rotations must be representable as quantized pseudo angular momentum.

I also consider that n-dimensional equilateral polygon projected onto an arbitrary three dimensional space and essentially obtain exactly the results presented earlier.

My presentation is not a theory but is rather a careful tautological proof. If errors exist in that proof, I would like to see them pointed out.

Or, if anyone finds the above to be difficult to understand, I am open to questions.

Thanks -- Dick
