On Sun, 17 Sep 2000, Eugene wrote:
> Samantha Atkins writes:
>
> > closer. And a pretty good argument can be made that without
> > concentrations of wealth in private hands there is no energy/resources
> > to enable many types of innovation. Of course there are also
>
> Good point. Robbing the rich to feed the poor makes us collectively
> poorer. We'll never make it into space if there's not enough loose
> cash in the pockets of wealthy geeks. I think there *is* a window of
> opportunity, and we have already let a lot of know-how (there is a
> reason it's called "rocket science") trapped in the heads of
> individuals and groups drift to /dev/null.
This is my biggest objection to oppressive taxation; it damages my ability
to invest time and money in necessary technological R&D. Of course,
the government that doesn't trust me with my money (ahem, Gore,
Nader, et al.) is the same government that I would hopefully obsolete
through the development of fundamentally important new technology.
> A nominal 9-5 (or part time) job will pay for the rent, and leave time
> for the hobbies. If I can't do truly interesting things in my job, I
> could as well let the nominal day job pay for my rent, and do
> interesting things in my spare time.
It depends on what your hobbies are, as some can be rather expensive.
Or maybe I just try to do too many things at once...
> > Don't bet on it. What changes between then and now is the raw power of
> > the hardware. As it becomes more powerful it becomes more tractable to
> > automate large segments of the work programmers currently do. Of
>
> Sure, now it's easier to make a GUI, by just painting it on the
> screen. Program generators are not exactly new. Apart from wizards
> like our very own James Rogers (and sci things like ATLAS,
> automatically juggling source to optimize for a given architecture) we
> don't see automatic programming hitting the streets any time soon. And
> then people would still have to write specs in a formal language. Even
> if you don't write a protocol stack explicitly, you still have to
> codify its behaviour.
I should mention that I am able to do what I did because the domain
is severely constrained. While I am working on the generalized
problem, it is useful to both validate design components in a
real-world environment and fund general development by selling limited
implementations of the general technology to solve hard problems that can
be effectively attacked with limited but relatively "smart" software. It is
also a rich space for coming across interesting problems that need to be
solved, which keeps me happy. :^)
For the application I mentioned previously, I don't need a formal spec
per se. Rather, I have a big goal (maximize profits) that doesn't
change, and a small set of sub-goals created by suits that are subject
to occasional change. The dataspace is large (typically around 25 GB),
quite complex, and subject to some rather sudden and dynamic changes
both in content and nature. Given basic code on how to navigate the
dataspace, I let the system figure out the best way to achieve the
stated goals and to take advantage of emerging patterns. In short, I
have limited the code generation to working with the business problem
dataspace rather than to the framework itself; it puts the flesh on
the skeleton. A set of algorithms that observe the system's runtime
behavior determine when code gets re-written. One could easily write a
book on this topic (though I certainly won't). It really only deviates
significantly from the general problem of intelligent code generation in
one aspect, but the difference is an important one.
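To make the shape of that concrete, here's a toy sketch in Python. Everything here (names, the scoring function, the "observed fields") is hypothetical and wildly simplified compared to the system described above; it only illustrates the idea of regenerating code as the dataspace shifts, while a fixed framework supplies the skeleton:

```python
# Minimal, hypothetical sketch of data-driven runtime code generation.
# A fixed framework observes which fields currently matter in the
# dataspace, emits specialized source for them, and compiles it.

def generate_scorer(observed_fields):
    """Emit source for a scoring function specialized to the fields
    actually observed in the dataspace, then compile it with exec."""
    terms = " + ".join(f"record.get('{f}', 0)" for f in observed_fields)
    source = f"def score(record):\n    return {terms}\n"
    namespace = {}
    exec(source, namespace)  # compile the generated code at runtime
    return namespace["score"]

# The framework observes which fields currently matter...
score = generate_scorer(["margin", "volume"])

# ...and the generated code "puts the flesh on the skeleton".
print(score({"margin": 3, "volume": 4}))   # -> 7

# When runtime observation says the dataspace has shifted, regenerate:
score = generate_scorer(["margin", "volume", "churn"])
print(score({"margin": 3, "volume": 4, "churn": 2}))  # -> 9
```

The real system would of course regenerate based on observed runtime behavior rather than an explicit call, but the division of labor (fixed skeleton, regenerated flesh) is the same.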
The biggest problem with runtime code generators is debugging the
resulting mess. However, it has allowed me to work on some of the
many interesting problems of self-observation. Designing methods to
resolve issues such as detecting complex and non-procedural
infinite loops (e.g. infinite loops caused by how the data interacts with
the code at runtime, without compile-time knowledge of what the data can
look like) has been fun.
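A toy example of what "data-driven infinite loop" means (hypothetical code, not from the actual system): the traversal below is correct in isolation, and only certain data makes it loop forever. One simple runtime defense is to fingerprint each state already visited:

```python
# Hedged sketch: detecting an infinite loop caused by how the data
# interacts with the code at runtime, with no compile-time knowledge
# of what the data can look like. All names are hypothetical.

def follow_chain(links, start):
    """Follow next-pointers through a data structure, detecting
    cycles by remembering every state already visited."""
    seen = set()
    node = start
    path = []
    while node is not None:
        if node in seen:              # runtime self-observation:
            raise RuntimeError(       # the data, not the code, loops
                f"data-driven cycle detected at {node!r}")
        seen.add(node)
        path.append(node)
        node = links.get(node)
    return path

# Well-formed data terminates:
print(follow_chain({"a": "b", "b": "c"}, "a"))   # -> ['a', 'b', 'c']

# The same code on cyclic data would spin forever without the check:
try:
    follow_chain({"a": "b", "b": "a"}, "a")
except RuntimeError as e:
    print(e)
```

The hard part in practice is that real cycles are rarely this crisp -- state may never repeat exactly, so "progress" has to be inferred rather than checked against a set.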
> Jeez. If you think C++ is an improvement upon C you really have a
> strange mind.
To my mind, useable C++ looks a lot like well-organized C. The only
real improvement of C++ is that it formalized a syntax for things that
good C programmers had been doing for a long time. For that reason,
very few applications actually justify using C++ over C, if your C
programmers are competent enough. And if they aren't competent with
C, you certainly don't want them working in C++. :^)
> I do not see anything new cropping up since
> Lisp. Because I can't have Lisp machine in current technology (and am
> too dumb/poor to afford a DSP cluster running Forth), I've settled on
> Python/C in a OpenSource *nix environment (currently Linux).
C/Python/Java on Unix is my environment of choice, both because I am
extremely comfortable with them (C on Unix is like comfortable old
shoes), and because I can use one of them well in every design space I
am likely to come across.
I have noticed that a lot of good programmers with broad platform
experience tend to settle on the same tools with time. Despite coming
from wildly different backgrounds, most of the programmers I know who
have been working with computers for a long time seem to have very
similar notions of what "optimal environments" actually are.
Coincidence? I doubt it.
> > We are beginning to address problems of programming in the large but
> > frankly many of the solutions are giant kludges that are severely
> > over-hyped and over-sold. I have gotten quite disgruntled with this
> > industry. We spend more time trying to lock up "intellectual property"
>
> Amen, verily, etc. etc.
There is a lot of pressure (business and lawyerly) to turn
everything into an intellectual property action. There are a lot of
benefits, both material and immaterial, to doing so, *particularly*
if your organization is poorly funded.
I actually started the patent process for a few different things
(largely software technologies of the type mentioned above), but
dropped it after deciding that I would probably have an easier time
keeping the technology out from under the thumb of big business by
*not* patenting the tech. That, and I personally object to the
concept of software patents. It makes it a little harder to get
funding, though.
Of course, Yudkowskian technologies have an *enormous* first-to-market
advantage that is hard to ignore. :^)
> > My greatest expertise is in object persistence. Persistence is far, far
> > from "automated". Persistence cross-cuts applications and products but
> > is often done as a series of hacks within a particular project
> > life-cycle. Or a product is bought that promises to take the worries
>
> OO is far from being the silver bullet, i.e. code reuse by inheritance
> from former projects does not seem to scale.
To me, OO attempts to enforce "organized" coding practices in syntax.
Except for very limited cases, code re-use through inheritance is a
nearly worthless feature in practice, and many times the headaches it
can create offset any benefits. The biggest niceties of OO (IMO) are
the typing models and interfacing conventions.
Again, if you write really good procedural code, you are not likely to
benefit much from OO. Perhaps it would be better just to teach people
good code design.
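A toy illustration of the point (hypothetical example): reuse through inheritance silently couples the new code to the parent's internal contract, while plain procedural composition passes the varying behaviour in explicitly:

```python
# Toy, hypothetical example: reuse by inheritance vs. plain
# procedural composition.

class Report:
    def render(self, rows):
        return "\n".join(self.format_row(r) for r in rows)
    def format_row(self, row):
        return ",".join(map(str, row))

# Reuse via inheritance: the subclass depends on the parent's internal
# contract (render happens to call format_row). Change that detail in
# the parent and every subclass breaks.
class TabReport(Report):
    def format_row(self, row):
        return "\t".join(map(str, row))

# Procedural alternative: the varying behaviour is an explicit
# parameter, with no hidden coupling to anyone's internals.
def render(rows, format_row=lambda r: ",".join(map(str, r))):
    return "\n".join(format_row(r) for r in rows)

rows = [(1, 2), (3, 4)]
print(TabReport().render(rows))                        # tab-separated
print(render(rows, lambda r: "\t".join(map(str, r))))  # same output
```

Which is essentially the "well-organized C" point again: the second version is just a function taking a function pointer.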
> > We have automated certain classes of GUIs but the GUI story is far from
> > complete or adequate. Many projects are now being seriously perverted
> > to use only a Web Browser as their GUI! It is as if we are stepping
>
> Well, a web browser is a ubiquitous, easy way to control a system,
> and you can put it within a few hundred bytes of assembly. A remote GUI is
> not something too bad.
The web browser is great for ubiquity, but has too many limitations at
both the low- and high-level to make a good GUI. Usable for simple
things (such as remote control), but very difficult to work with for
applications with complex interfaces (imagine running an application
like GIMP or Photoshop inside a web browser). However, there are a lot
of advantages to having a network-aware UI, a capability where MS
Windows really blows goats.
I would really like to see the development of a capable, clean,
platform-independent, network-aware GUI system. Maybe something that
falls somewhere in between X (too big and crufty) and the Web.
Current widely available "cross-platform" GUI implementations such as
the Java AWT are an abomination -- all of the downside and none of
the upside IMHO.
> We do have an interest in peer to peer information sharing (though not
> yet collaboration) rising recently. Parallel applications (based on
> PVM/MPI message passing libraries), including cluster file systems are
> fairly widespread in science, and soon commerce. High-availability and
> high-performance clusters for commerce are the hottest topics right
> now. Things are not hopeless.
I think it will be five years or more before clusters become
really ubiquitous. One of the biggest difficulties is that most
programmers have no clue how to develop "cluster-aware" software, and
aren't likely to get one any time soon. When you consider that most
programmers still can't figure out how to design even moderately
complex locking schemes for multithreaded applications on a
single-processor system or to ensure something resembling
transactional integrity of complex shared data, both of which have
been around for years, I doubt that cluster-aware applications will be
quick to penetrate most market spaces. It is easier just to throw
bigger hardware at the problem.
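For readers who haven't hit this: even the *simplest* locking scheme -- one mutex around one shared counter -- is something many programmers get wrong, because an innocent-looking `+= 1` on shared state is not atomic. A minimal Python sketch (hypothetical, and far simpler than the "moderately complex" schemes mentioned above):

```python
# Minimal sketch of the simplest possible locking scheme: a single
# mutex protecting a shared counter. Without the lock, concurrent
# read-modify-write updates can be silently lost.
import threading

counter = 0
lock = threading.Lock()

def deposit(n):
    global counter
    for _ in range(n):
        with lock:        # remove this and updates can be lost
            counter += 1

threads = [threading.Thread(target=deposit, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)   # 400000, deterministically, because of the lock
```

Real applications need many locks with a consistent acquisition order (or transactions), which is where "moderately complex" begins and most programmers check out.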
The people who "get" cluster design now, because they find it
fascinating, will be the majority of those who "get" cluster design
later. Most programmers will implement clustering badly,
which will essentially negate the benefits and slow adoption in many
organizations.
A majority of programmers quickly lose competence when they move past
the "one user, one CPU, one process, one machine" design space. In my
experience, as more of these variables become >1, the more difficulty
many programmers have producing efficient code. Only a
relative handful of working software engineers will produce efficient
code design when you have multi-user, multi-process, SMP, and
clustering as explicit design optimization considerations. Too many
variables for some minds to wrap around I guess.
> > good tools for finding the right components and simulating their
> > interaction. Much of our code base is still language and OS dependent
> > and not componentized at all. Most of code is still application
> > stovepipes with little/no reuse or reuseability. In short, almost no
> > automation or next-level engineering applied to our own work. It had
> > better not continue like this.
Part of the problem with components is that the components as designed
today are only data aware in the grossest declarative sense. As a
result, components are only useful for very narrowly defined problems
that rarely translate into something that can be re-used for all but
the most narrow domains. The brittleness and rigidity of current
component design methodologies is part of the reason that "cut, paste,
and modify" is a viable software development methodology.
But components don't really matter; I have strong doubts as to
whether components are a correct solution in the long-term anyway.
Components exist more to solve a human weakness in software
implementation than to solve any particular weakness intrinsic
to software design itself. I can't imagine why a smart machine would
use components to write software.
-James Rogers
jamesr@best.com
This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:38:49 MDT