Topics of current personal interest

Mitchell Porter (mitch@thehub.com.au)
Wed, 28 Jan 1998 19:43:14 +1000 (EST)


1. Design and construction of an assembler. There will
soon be a web page where you can go and describe a nanosystem
by shape and substance, and a Java applet will generate an
atomic blueprint to those specifications. I base this
prediction on discussions at nanocad@world.std.com.
The core of the applet will be based on
ftp://ftp.std.com/pub/wware/alg_gen.tgz.
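To give the flavor of such a generator, here is a toy sketch of my own
(nothing like the actual alg_gen code - the diamond lattice constant is
real, everything else is simplified, with no chemistry in it): given a
shape, fill it with sites from the diamond lattice.

```python
import itertools

DIAMOND_A = 3.567  # angstroms, lattice constant of diamond

# The eight-atom basis of the diamond cubic unit cell,
# in fractions of the lattice constant.
BASIS = [(0.0, 0.0, 0.0), (0.0, 0.5, 0.5), (0.5, 0.0, 0.5), (0.5, 0.5, 0.0),
         (0.25, 0.25, 0.25), (0.25, 0.75, 0.75),
         (0.75, 0.25, 0.75), (0.75, 0.75, 0.25)]

def carbon_sphere(radius):
    """Return coordinates (in angstroms) of diamond-lattice carbons
    inside a sphere of the given radius centred on the origin."""
    n = int(radius / DIAMOND_A) + 1
    atoms = []
    for i, j, k in itertools.product(range(-n, n + 1), repeat=3):
        for bx, by, bz in BASIS:
            x = (i + bx) * DIAMOND_A
            y = (j + by) * DIAMOND_A
            z = (k + bz) * DIAMOND_A
            if x * x + y * y + z * z <= radius * radius:
                atoms.append((x, y, z))
    return atoms

atoms = carbon_sphere(10.0)  # a sphere of ~1 nm radius
print(len(atoms), "atoms")
```

A real generator would of course have to terminate the surface
sensibly; this just shows the shape-to-coordinates step.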

Similar programs could be used to generate complete
nonsense (from a chemical POV), but as time passes more
and more chemical knowledge will be encoded in them,
and the designs will become more robust. So we will
shortly have a flood of DIY nanomachine designs. Maybe
it'll be a standard feature on some homepages - "Here
is my CV, here is my cat, here are my nanodesigns."

The question then becomes, how do you actually build an
assembler? The most promising path I've thought of would
involve what I call a "buckybot", a nanobot built of
fullerenes. This appears promising because we already
have fullerenes, and the guru, Richard Smalley, expects
nanotubes "to become available by the megaton, and to become
available with precisely controlled terminating groups."
(http://www.foresight.org/Updates/Update31/Update31.1.html)

Perhaps fullerenes with appropriate terminating groups can
implement the Merkle toolset for mechanosynthesis of diamond
(http://nano.xerox.com/nanotech/hydroCarbonMetabolism.html);
then "all" we need is a way to position and move the tools
with nanometer precision. (We don't initially need an onboard
nanocomputer; that's only for *programmable* assemblers, as
opposed to directly-controlled.)

2. Developing useful applications of NT. I recently made
a post to sci.nanotech, "Priority uses for nanotechnology",
which says what I'd say here, namely that it's time serious
design work began on the "utopian" applications of NT, so
that we're ready to use the technology the moment it
arrives. (Yes, I know "arrival" is not a literally instantaneous
thing, but I think it will be a *very* fast climb from
clunky limited assemblers to scary powerful world-class
assemblers, thanks to the rapidity with which one will be
able to build, test, and redesign. If nanotube mechanisms
aren't the start of this process, it might come from
biotech, through the use of ribosomes, as de novo protein
design improves.)

3. Picotechnology and spacetime engineering through the
use of "Q-balls". Q-balls are (wholly hypothetical)
non-topological solitons in scalar fields such as the
Higgs, inside which these fields take values different
from those outside.
To the extent that low-energy physics is governed by the
values of scalar fields (and in some models all particle
masses are set by the values of Higgs fields), physics can
be different inside a Q-ball. For example, a proton entering
a Q-ball might decay more quickly. Alexander Kusenko
(http://insti.physics.sunysb.edu/~sasha/) suggests
using Q-balls as an energy source: drop them in water,
tap the heat generated by proton decay
(http://www.newscientist.com/ns/970830/nqballs.html).
He also says (in his papers) that Q-balls are a feature
of the Minimal Supersymmetric Standard Model, which is a
conservative guess at physics beyond the Standard Model.
Kusenko seems to be the main theorist of Q-balls, although
they are originally the invention of Sidney Coleman.

An example of the advantages one might accrue by
modulating the values of the physical "constants":
http://xxx.lanl.gov/abs/gr-qc/9703017 tells us that
the probability of a wormhole tunneling into existence
depends on the value of the fine-structure constant
(i.e. on the coupling constant of QED).

See http://xxx.lanl.gov/abs/hep-ph/9707423 for a more
detailed exposition. "A Q-ball is essentially a new
universe in a nutshell." This is the sort of remark
that set me off. Q-balls may be an enabling technology
for the creation of baby universes. Alan Guth
(co-discoverer of inflation) says somewhere (reference
not to hand, sorry) that it would take only a few kg
of mass-energy to create a new region of inflation.
Of course, it needs to be concentrated into a *very*
small area, and to do that might require huge forces,
but do we really need to go as far as, say, stellar
collapse? Perhaps we can do it using massive Q-balls,
or mini black holes created through Bose-Einstein
condensation of asteroidal quantities of rubidium,
or some such thing.

Gregory Benford apparently has a new novel, _Cosm_,
in which there's a baby universe created; it will
be interesting to see what mechanism he proposes.

Something else we need: a map of the thousands of
superstring vacua - what sort of structures can
form in each, and how difficult it is to get from one
such state to another. Such transitions have been
modelled, to a limited extent (hep-th/9504145).

Another tangent: Planck-scale computers. Perhaps
one could use Bekenstein-bound-saturating states (e.g.
"BPS saturated states") as info-processors with
physically maximal storage density and 10^43 Hz speeds.
Greg Egan's "The Planck Dive" (Feb 98, "Asimov's
SF magazine", not yet seen by me) apparently describes
a related scenario.
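For scale, a back-of-envelope calculation with the standard constants
(nothing exotic here - the Bekenstein bound gives the maximum bits in a
region of radius R containing energy E, and the 10^43 Hz figure is
roughly the inverse Planck time):

```python
import math

hbar = 1.0546e-34  # J s
c = 2.998e8        # m/s
G = 6.674e-11      # m^3 kg^-1 s^-2

def bekenstein_bits(radius_m, mass_kg):
    """Bekenstein bound: I = 2*pi*R*E / (hbar*c*ln 2) bits."""
    energy = mass_kg * c ** 2
    return 2 * math.pi * radius_m * energy / (hbar * c * math.log(2))

# One kilogram confined within a one-metre radius:
print(f"{bekenstein_bits(1.0, 1.0):.2e} bits")  # ~2.6e43

# Inverse Planck time, the natural "clock rate" ceiling:
f_planck = math.sqrt(c ** 5 / (hbar * G))
print(f"{f_planck:.2e} Hz")  # ~1.9e43
```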

4. Boundless energy and computational speedup from
networks of baby universes. People have been hazily
aware of these ideas for a while, but I'm not aware
of any quantitative exploration, probably because there's
no exact theory available.

If Tipler can get a boundless energy yield from the
collapsing universe in his Omega Point theory,
surely we can do the same from a boundlessly
growing array of baby universes. [The nature of energy
in GR still puzzles me; John Baez even says (in a recent
post on sci.physics.research, 21/01/98 for those who want
to look it up) it's not always conserved in GR!]

5. Conceptualizing and planning for "infinite futures".
For example:

One can imagine a very fast computer connected to
sensors and effectors throughout a baby universe.
The computer maintains a model of the state of the rest
of the universe, constantly updated via the sensors.
There is always a particular finite range of actions
open to it through the effectors. What it does is model
the effects of each possible action, some way
into the future, and select an action on the basis of
some criterion ('value system'?) applied to the various
projected outcomes.

Is such a system feasible? Is it desirable? I find it
plausible that some such system is a feature of all
infinite futures worth discussing, and that the
interesting questions concern implementation and the
'value system' to be used.
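The loop described above can be sketched in miniature. Everything here
is illustrative - the "universe" is a single 1-D position, the value
system rewards being near a target - but the sense/model/simulate/select
structure is the one I mean:

```python
ACTIONS = [-1, 0, +1]  # the finite range of effector actions
TARGET = 5

def model_step(state, action):
    """The controller's internal model of how the world responds."""
    return state + action

def value(state):
    """The 'value system': prefer states near the target."""
    return -abs(state - TARGET)

def rollout(state, depth):
    """Total value of the best action sequence of length `depth`
    starting from `state`, under the internal model."""
    if depth == 0:
        return value(state)
    return value(state) + max(rollout(model_step(state, a), depth - 1)
                              for a in ACTIONS)

def choose_action(state, horizon):
    """Project each action `horizon` steps ahead, pick the best."""
    return max(ACTIONS, key=lambda a: rollout(model_step(state, a),
                                              horizon - 1))

state = 0
for _ in range(10):
    state = model_step(state, choose_action(state, horizon=3))
print(state)  # settles at the target, 5
```

The hard questions (a model of a whole universe, a defensible value
function) are of course exactly the parts this toy assumes away.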

6. Physics applets. I'd like to have pages where you
could go and see elementary physics in action. Imagine
applets implementing different physical models, for which
you could select initial conditions, and witness some of
the calculations involved. One might have each of the types
of approximate model listed in _Nanosystems_, Table 1.4, and
each candidate for fundamental theory from Newton through to
QED (QCD might be intractable for a web browser).
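As a sample of the simplest such model - a Newtonian point mass on a
spring, integrated with velocity Verlet, the initial conditions being
the user-selectable part:

```python
def simulate(x0, v0, k=1.0, m=1.0, dt=0.01, steps=1000):
    """Integrate a harmonic oscillator with velocity Verlet."""
    x, v = x0, v0
    a = -k * x / m
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt
        a_new = -k * x / m
        v += 0.5 * (a + a_new) * dt
        a = a_new
    return x, v

x, v = simulate(1.0, 0.0)
energy = 0.5 * v * v + 0.5 * x * x
print(x, v, energy)  # energy stays close to the initial 0.5
```

A web version would just redraw the mass each step; the physics content
is the few lines inside the loop.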

7. World models on the web.

First, I'd like to see Club-of-Rome-style world models
on the web. They should be incredibly easy to do, say,
in JavaScript; the problem lies in finding a model to use.
I suspect that the most valuable ones are either proprietary
or languish in dusty academic obscurity.

Then, I'd like to see models which factor in things like:
space colonization and development, the existence of matter
compilers, the growth of a new "humanly incomprehensible"
sector of the economy, and so on.
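Here is a toy of the basic (unextended) kind - three stocks and made-up
coefficients, nothing like a calibrated World2/World3 run, but the same
system-dynamics shape:

```python
def run_world(years=200, dt=1.0):
    """Toy world model: population, resources, pollution.
    All coefficients are invented for illustration."""
    population, resources, pollution = 1.0, 10.0, 0.0
    history = []
    for _ in range(int(years / dt)):
        growth = 0.03 * population * (resources / 10.0)
        deaths = 0.01 * population * (1.0 + pollution)
        depletion = 0.02 * population
        emission = 0.005 * population
        absorption = 0.01 * pollution
        population += (growth - deaths) * dt
        resources = max(0.0, resources - depletion * dt)
        pollution = max(0.0, pollution + (emission - absorption) * dt)
        history.append((population, resources, pollution))
    return history

history = run_world()
peak = max(p for p, r, q in history)
final_pop, final_res, _ = history[-1]
print(f"peak population {peak:.2f}, final population {final_pop:.2f}, "
      f"resources left {final_res:.2f}")
```

The extensions I have in mind would just be extra stocks and flows in
the same loop - which is why a real, validated model matters more than
the code.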

9. Net as a single distributed computational system.

I think here of projects like www.distributed.net.
I hope that it will eventually become a standard thing for
a networked computer to be part of a number of distributed
processing systems, to which it will donate computational
power when idle.
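The pattern in miniature (the workers run one after another here; in a
real system each work unit would go to a different idle machine, and the
"secret key" search standing in for the actual computation):

```python
def make_work_units(space_size, unit_size):
    """Coordinator: split the search space into (lo, hi) chunks."""
    return [(lo, min(lo + unit_size, space_size))
            for lo in range(0, space_size, unit_size)]

def worker(unit, predicate):
    """Donated machine: scan one chunk, return a match if any."""
    lo, hi = unit
    for key in range(lo, hi):
        if predicate(key):
            return key
    return None

SECRET = 123457
units = make_work_units(space_size=1000000, unit_size=100000)

found = None
for unit in units:
    found = worker(unit, lambda k: k == SECRET)
    if found is not None:
        break
print(found)  # 123457
```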

I also want to see intelligent 'query engines', to which
one can pose a natural-language question, which the engine
would answer by consulting knowledge databases scattered
across the net. This could be helped along if there were
a knowledge-representation equivalent of HTML: a language
for the representation of facts, in a form suitable for
computer manipulation. CycL (www.cyc.com) might be one
candidate; others can be seen at
http://mnemosyne.itc.it:1024/ontology.html.
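The shape of such a fact language can be suggested with
subject-predicate-object triples and pattern matching (a toy of my own;
CycL and its relatives are far richer, and the sample facts are just
illustrations):

```python
# A small knowledge base of (subject, predicate, object) triples.
FACTS = {
    ("fullerene", "is-a", "molecule"),
    ("nanotube", "is-a", "fullerene"),
    ("nanotube", "discovered-by", "Iijima"),
}

def query(pattern):
    """Match a triple pattern against the facts; None is a wildcard."""
    s, p, o = pattern
    return [t for t in FACTS
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

print(query(("nanotube", None, None)))  # everything known about nanotubes
```

A query engine would sit on top of many such stores, plus a
natural-language front end - the hard parts, naturally.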

The QED Project (http://www.mcs.anl.gov:80/qed/) is an
attempt to do this sort of thing just within mathematics -
it could later be the nucleus of a more general-purpose
effort.

10. Quantum compiler. We need a way to turn programs
into something that can be run on a quantum computer.
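A miniature of what the compiler's output might look like - a gate list
lowered to operations on a state vector (textbook two-qubit material,
not a real compiler):

```python
import math

def apply_gate(state, gate, qubit, n_qubits):
    """Apply a 1-qubit gate (2x2 nested list) to the given qubit
    of an n-qubit state vector."""
    new = [0.0] * len(state)
    for i, amp in enumerate(state):
        bit = (i >> qubit) & 1
        for new_bit in (0, 1):
            j = i ^ ((bit ^ new_bit) << qubit)
            new[j] += gate[new_bit][bit] * amp
    return new

def apply_cnot(state, control, target):
    """Swap amplitude pairs whose control bit is set."""
    new = list(state)
    for i in range(len(state)):
        if (i >> control) & 1:
            new[i ^ (1 << target)] = state[i]
    return new

H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

# "Compile and run" the two-gate program that makes a Bell state:
state = [1.0, 0.0, 0.0, 0.0]          # |00>
state = apply_gate(state, H, 0, 2)    # H on qubit 0
state = apply_cnot(state, 0, 1)       # CNOT, control 0, target 1
print([round(a, 3) for a in state])   # [0.707, 0.0, 0.0, 0.707]
```

The compiler problem proper is the step before this: turning an
arbitrary program into such a gate list.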

11. Brain models. Data, maps, computational models.

12. What it's like to be 'auto-omniscient' (knowing
everything about oneself) and 'autopotent' (having
the capacity to change any aspect of oneself - term
due to Nick Bostrom).

-mitch
http://www.thehub.com.au/~mitch