At the Strings 2000 conference in July 2000, theorists were asked what mysteries remain to be revealed in the 21st century. Participants were invited to help formulate the ten most important unresolved problems in fundamental physics, which were then selected and ranked by a prominent panel consisting of David Gross, Edward Witten and Michael Duff. No questions were judged more important than the first two, posed by Gross and Witten respectively:

**#1:** *Are all the (measurable) dimensionless parameters that characterize the physical universe calculable in principle, or are some merely determined by historical or quantum mechanical accident and incalculable?*

**#2:** *How can quantum gravity help explain the origin of the universe?*

A newspaper article about these millennial mysteries offered some interesting commentary on question #1, noting that Einstein had perhaps posed it more sharply: *Did God have a choice in creating the universe?*, which also subsumes question #2. Though the Eternal may indeed have had a "choice" in creation, the arguments that follow will conclude that the answer to Einstein's question is an emphatic "No": a spectrum of unprecedentedly precise fundamental physical parameters can be calculated within one *single dimensionless universal system*, which naturally contains a literal "*Monolith*."

Similarly, the article asked whether the speed of light, Planck's constant and the electric charge are unconditionally determined, "or do the values have to be what they are because of some deep, hidden logic? These kinds of questions come to a head with a conundrum involving a mysterious number called alpha. If you square the charge of the electron and then divide it by the speed of light times Planck's ('reduced') constant (multiplied by 4π times the vacuum permittivity), all the (metric) dimensions (of mass, time and distance) cancel out, giving a so-called 'pure number': alpha, which is just over 1/137. But why is it not precisely 1/137, or some other value altogether? Physicists and even mystics have tried in vain to explain why."
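The arithmetic in that quoted passage is easy to verify. Below is a minimal Python sketch (not from the article; the CODATA 2006 central values for e and ħ are assumptions of this illustration) showing how the metric dimensions cancel to leave a pure number just over 1/137:

```python
import math

# Assumed CODATA 2006 central values (for illustration only)
e = 1.602176487e-19     # elementary charge, C
hbar = 1.054571628e-34  # reduced Planck constant, J*s
c = 299792458.0         # speed of light, m/s (exact by the 1983 SI definition)
eps0 = 1.0 / (4e-7 * math.pi * c**2)  # vacuum permittivity, F/m

# "Square the charge of the electron, then divide by the reduced Planck
# constant times the speed of light, times 4*pi times the vacuum
# permittivity": every metric unit cancels, leaving the pure number alpha.
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)

print(alpha)      # just over 1/137
print(1 / alpha)  # approximately 137.036
```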

Which is to say that although a constant such as a fundamental particle mass can be expressed as a dimensionless ratio to the Planck scale, or as a ratio to some more precisely known or available mass unit, the inverse electromagnetic coupling constant alpha is uniquely dimensionless as a pure *'fine-structure number'* α⁻¹ ≈ 137.036. On the other hand, even granting that a unique, ever-discrete or *exact* fine-structure number exists as a "literal constant," its value must still be empirically confirmed as a relation between two *inexactly* determinable 'metric constants,' h-bar and the electric charge e (the speed of light c has been exactly *defined*, since the 1983 adoption of the SI convention, as an integer number of meters per second).

So while this relation had been deeply puzzling almost from its inception, my impression upon reading this article one morning was utter astonishment that a "numerological" question should be deemed worthy of such distinction by eminent modern authorities. For I had been obliquely obsessed with the fs number in the context of my colleague A. J. Meyer's model for a number of years, but had come to accept its experimental determination in practice, periodically reconsidering the dimensionless question to no avail. Gross's question thus served as a catalyst out of my complacency; I recognized my unique position as the only person who could provide a categorically complete and consistent answer in the context of Meyer's most important fundamental parameter. Still, my pretentious instincts led to two months of frenzied intellectual posturing, until I at last repeated a simple procedure I had examined a few years earlier. I simply **looked** at the result using the 1998–2000 CODATA value of *alpha*, and the following solution immediately struck with full heuristic force.

For the fine-structure relation (via h-bar) efficiently quantifies the electromagnetic coupling between a discrete unit of electric charge (e) and a photon of light; in the same sense, the *integer 241 is discretely 'quantized'* compared to the 'fractional continuum' between it and 240 or 242. One can see what this means by considering another integer, 203, from which we subtract the base-2 exponential of the square of 2π; then add the inverse of 241 to the resulting number, multiplying that sum by the natural log of 2. This pure calculation of the fine-structure number then comes out exactly equal to

**137.0359996502301...**

which is given here to 13 decimal places, but can be calculated to any number of places.

In comparison, given the experimental uncertainty in h-bar and e, the NIST evaluation drifts up or down around the middle digits of the invariant sequence defined above. The following table shows the values of h-bar, e, their calculated ratio, and the actual NIST selection for α⁻¹ in each year of the NIST archives, as well as from CODATA in 1973, where the standard two-digit ± experimental uncertainty is given in bold type in parentheses.

| Year | *ħ* = N(h) × 10^-34 J·s | e = N(e) × 10^-19 C | α⁻¹ calculated from *ħ*/e² | NIST value of α⁻¹ ± (**SD**) |
|------|--------------------------|----------------------|-----------------------------|------------------------------|
| 2006 | 1.054 571 628(**53**) | 1.602 176 487(**40**) | 137.035 999 **6**61 | 137.035 999 679(**94**) |
| 2002 | 1.054 571 68(**18**)  | 1.602 176 53(**14**)  | 137.035 999 **0**62 | 137.035 999 11(**46**)  |
| 1998 | 1.054 571 596(**82**) | 1.602 176 462(**63**) | 137.035 999 **7**79 | 137.035 999 76(**50**)  |
| 1986 | 1.054 572 66(**63**)  | 1.602 177 33(**49**)  | 137.035 9**8**9 558 | 137.035 989 5(**61**)   |
| 1973 | 1.054 588 7(**57**)   | 1.602 189 2(**46**)   | 137.036 **0**04 333 5 | 137.036 04(**11**)    |
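The table's calculated column can be reproduced directly from the first two columns. Here is a short Python sketch (the numeric values are transcribed from the table above; the 1973 row is omitted because c was itself a measured quantity before the 1983 SI definition):

```python
import math

C = 299792458.0  # speed of light, m/s, exact by definition since 1983
EPS0 = 1.0 / (4e-7 * math.pi * C**2)  # vacuum permittivity from mu0 = 4*pi*1e-7

def inverse_alpha(hbar: float, e: float) -> float:
    """Inverse fine-structure constant implied by h-bar [J*s] and e [C]."""
    return 4 * math.pi * EPS0 * hbar * C / e**2

# Central values of h-bar and e from the table above
CODATA = {
    2006: (1.054571628e-34, 1.602176487e-19),
    2002: (1.05457168e-34,  1.60217653e-19),
    1998: (1.054571596e-34, 1.602176462e-19),
    1986: (1.05457266e-34,  1.60217733e-19),
}

for year in sorted(CODATA, reverse=True):
    hbar, e = CODATA[year]
    print(year, f"{inverse_alpha(hbar, e):.6f}")
```

Each printed value lands on the corresponding entry of the calculated column to within rounding, which is the point of the table: the NIST α⁻¹ tracks whatever ħ and e imply.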

So it seems that the NIST selection is largely determined by the measured values of *ħ* and e alone. As explained at http://physics.nist.gov/cuu/Constants/alpha.html, by the 1980s interest had shifted to a new approach providing a direct determination of α⁻¹ by exploiting the quantum Hall effect, independently corroborated by both theory and experiment on the electron magnetic moment anomaly, reducing the already finely tuned uncertainty. Still, it took twenty years before an improved measurement of the magnetic moment *g*/2 factor was published in mid-2006, when that group (led by Gabrielse at Harvard) first estimated (A) α⁻¹ = 137.035999710(**96**), which explains the much-reduced uncertainty of the new NIST listing compared to that implied by *ħ* and e. Recently, however, a numerical error was discovered in the initial QED calculation (A), which in a second paper (B) moved the value to (B) α⁻¹ = 137.035999070(**98**).

Although it carries an almost identically small uncertainty, this assessment falls clearly outside the NIST value that is consistent with the h-bar and elementary charge estimates, which are determined independently by different experiments. NIST has three years to sort this out, but meanwhile faces an embarrassing irony, since at least the 2006 choices for h-bar and e appear slightly skewed toward the expected fit to α⁻¹! For example, adjusting the last three digits of the 2006 data for ħ and e to match our pure fs number yields an impossible adjustment of e alone in the h628/e487.065 ratio. Had the QED error been corrected before the actual NIST publication in 2007, the figures could fairly easily have been adjusted to h626/e489; but that begs questions about the last three digits of α⁻¹ in light of the comparable 2002 and 1998 data. In any case, much greater improvements across multiple experimental designs will be required for a comparable reduction of the error in ħ and e to settle this issue for good.
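The tension described here is just interval arithmetic on the quoted one-standard-deviation uncertainties. A sketch using the values cited above (the helper names are mine, not the article's):

```python
def interval(central, sd):
    """(low, high) band for a value quoted as central +/- one standard deviation."""
    return (central - sd, central + sd)

def overlaps(a, b):
    """True if two (low, high) bands intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

# Values as quoted in this article
nist_2006   = interval(137.035999679, 0.000000094)  # NIST 2006 recommendation
gabrielse_a = interval(137.035999710, 0.000000096)  # (A) original 2006 result
gabrielse_b = interval(137.035999070, 0.000000098)  # (B) after the QED correction
pure_number = 137.0359996502301                     # the article's pure fs number

print(overlaps(nist_2006, gabrielse_a))             # True: A is consistent
print(overlaps(nist_2006, gabrielse_b))             # False: B falls clearly outside
print(nist_2006[0] <= pure_number <= nist_2006[1])  # True: the pure number fits
```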

But again, even where no 'exact' metric measure obtains, an empirical value is still infinitely short of 'literal exactness,' whereas our pure fs number fits the current values of h628/e487 exactly. In the former respect, I recently discovered that a mathematician named James Gilson (see http://www.maths.qmul.ac.uk/%7Ejgg/page5.html) has likewise devised a pure numeric, α⁻¹ = 137.0359997867..., closer to the revised 1998–2002 standard. Gilson further claims to have calculated several parameters of the standard model, such as the dimensionless mass ratio between the Z and W weak gauge bosons. But I know of no way he could construct a single proof using such equivalences capable of *deriving the Z and/or W masses themselves* exactly, let alone the confirmed quark masses or *Higgs fields* (see the essay referenced in the resource box), which here follow from a single dimensionless tautology. For the numerical discreteness of the fraction 1/241 allows one to *construct physically meaningful dimensional equations*. If one instead took Gilson's numerology, or the refined empirical value of Gabrielse et al., for the fs number, it would destroy this discreteness, exact self-consistency, and the very ability to even *write* a meaningful dimensionless equation! By contrast, it is perhaps not surprising that after I literally 'found' the integer 241 and deduced the exact fine-structure number from the resulting 'Monolith Number,' it took only about two weeks to calculate all six quark masses using real dimensionless analysis and various fine-structure relations.

But since we are talking not so much about the fine-structure constant itself as about the integer 137, the result *definitively answers* Gross's question. For the "(measurable) dimensionless parameters that characterize the physical universe" (including alpha) are relations between selected metric parameters within a single unified dimensionless mapping system, from which metric parameters such as particle masses are calculated by set equations. The 'standard model,' by contrast, provides a simple parameterization but **no** means to calculate or __predict__ any, or all, of them within a single system; thus its experimental parameters are inserted by hand.

A final irony: I am doomed to be tarred as a "numerologist" by "experimentalists" who persistently fail to acknowledge hard empirical evidence that quark, Higgs or hadron masses can be used to accurately calculate the current standard for the most precisely known and heaviest mass in high-energy physics (the Z). So, contrary to the naysayers: empirical confirmation is just the last cherry the chef puts on top before presenting the proverbial "proof of the pudding," which no true connoisseur could resist merely because he did not whip it up himself, settling instead for a mimicked mess that bears no resemblance to the real thing. For the bottom of this pudding is made of numbers I call "Mumbers," which are really just numbers, pure and simple!