From: Anders Sandberg (asa@nada.kth.se)
Date: Wed Apr 09 2003 - 06:41:25 MDT
On Wed, Apr 09, 2003 at 12:59:50PM +1000, Damien Broderick wrote:
> At 05:13 PM 4/8/03 -0700, Robert wrote:
>
> >the problem does arise that the atomic structure of the
> >human body is a *large* amount of information -- I've
> >never done the calculation
Freitas has of course done it:
http://www.foresight.org/Nanomedicine/Ch03_1.html
"The human body consists of ~7 x 10^27 atoms arranged in a highly
aperiodic physical structure"
If we demand that each atom be located with a precision of 100 pm
(a bit less than most bond lengths), then a meter-scale bounding box
means each coordinate is an integer between 1 and 10^10. That takes
34 bits per coordinate, i.e. 3*34 = 102 bits per position (I doubt we
need velocities), plus 5 bits for atom type.
All in all, 7.49e29 bits. This is actually far more than Egan's
exabyte (by a factor of 8.12e10), but I guess that is because there
is no packing of the data.
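As a sanity check, here is that arithmetic as a small Python sketch
(the 7e27 atoms and the 34+5 bit budget are the assumptions above,
nothing more):

  import math

  atoms = 7e27
  bits_per_coord = math.ceil(math.log2(1e10))  # positions 1..10^10 -> 34 bits
  bits_per_atom = 3 * bits_per_coord + 5       # x, y, z plus 5-bit atom type = 107
  total_bits = atoms * bits_per_atom           # ~7.49e29 bits

  exabyte_bits = 8 * 2**60                     # one exabyte, in bits
  print(total_bits / exabyte_bits)             # ~8.1e10 exabytes' worth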
Atom types are very unevenly distributed (~2/3 hydrogen, ~1/4
oxygen, ~1/10 carbon, ~1% everything else), so if we Huffman-code the
atoms as H=0, O=10, C=110 and the rest as 111... we will gain a lot
of bits.
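To put a rough number on that gain, a quick sketch of the expected
code length (the abundances are the fractions above; the seven extra
ID bits for the rare elements are just my assumption, enough for
~100 species):

  # probability of each atom class and its code length in bits
  code = [(2/3, 1),       # H = 0
          (1/4, 2),       # O = 10
          (1/10, 3),      # C = 110
          (0.01, 3 + 7)]  # 111 prefix + ~7-bit rare-element ID
  expected = sum(p * length for p, length in code)
  print(expected)         # ~1.6 bits/atom instead of a flat 5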
Just a simple spatial zip-like algorithm could compress this data
even better: most oxygen atoms are found in a fixed relation to two
hydrogen atoms (i.e. in water), so the 321 bits of each water
molecule could be replaced with just a marker of molecule type,
location and orientation. There are ~5000 molecular types, and these
can also be Huffman-coded. The win: if we assume the average molecule
has ten atoms, we go from 1070 bits per molecule to 134 (assuming 256
possible orientations for each Euler angle and on average around 8
bits for the compressed molecule ID), a saving of almost 90%.
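The per-molecule numbers, spelled out (ten atoms per molecule is the
assumption above):

  raw_atom     = 3 * 34 + 5           # 107 bits per uncompressed atom
  raw_molecule = 10 * raw_atom        # 1070 bits for a ten-atom molecule
  compressed   = 3 * 34 + 3 * 8 + 8   # position + three 8-bit Euler angles
                                      # + ~8-bit molecule ID = 134 bits
  print(1 - compressed / raw_molecule)  # ~0.87, i.e. almost 90% saved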
Larger molecules are probably best divided into smaller modules
like amino acids or functional groups. If we could repeat this kind
of chunking on larger scales (proteins, DNA, organelles) we could
probably win a lot more - collagen and actin fibers are eminently
compressible (IMHO).
Still, I don't think Egan meant that his polis citizens lug around
models of the glycine molecules in each cell of their little toes;
most likely they relied on standardized cell models or even
higher-level models, as described in Permutation City. If we model
the body with a millimeter-precision voxel model we get around a
billion voxels, and if each voxel has a kilobyte of data we get
just 10^12 bytes, or about 9e-7 exabytes.
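In sketch form (the kilobyte per voxel is of course just a guess at
what a standardized cell description might need):

  voxels = 1e9                 # ~(1 m / 1 mm)^3 for a meter-scale body
  per_voxel = 1e3              # bytes of model data per voxel
  total = voxels * per_voxel   # 1e12 bytes
  print(total / 2**60)         # ~9e-7 exabytes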
Assuming a spatial subdivision scale of r, we have (1/r)^3 voxels,
each with I bytes of info. So the condition for an exabyte-sized
description is I/r^3 = 2^60. Using one-micron voxels, each voxel
would have to contain about one byte. Maybe that is the basis for
Egan's guess.
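Or as a one-liner: to hit exactly one exabyte, each voxel of side r
(in meters, with the body normalized to roughly a cubic meter) must
carry I = 2^60 * r^3 bytes:

  def bytes_per_voxel_for_exabyte(r):
      return 2**60 * r**3

  print(bytes_per_voxel_for_exabyte(1e-3))   # mm voxels:     ~1.2e9 bytes each
  print(bytes_per_voxel_for_exabyte(1e-6))   # micron voxels: ~1.15 bytes each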
-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y