Re: Mostly stuff about software (was Homeless + Jobs, Lots of stuff about Software world)

From: Samantha Atkins (samantha@objectent.com)
Date: Sun Sep 24 2000 - 18:37:52 MDT


James Rogers wrote:
>
> This is my biggest objection to oppressive taxation; it damages my ability
> to invest time and money in necessary technological R&D. Of course,
> the government that doesn't trust me with my money (ahem, Gore,
> Nader, et al.), is the same government that I would hopefully obsolete
> through the development of fundamentally important new technology.
>

I would be very curious how you would obsolete, with technology alone, the
attitudes and beliefs that big government grows out of. Maybe
something that raises the general IQ across the board?
 
> > A nominal 9-5 (or part time) job will pay for the rent, and leave time
> > for the hobbies. If I can't do truly interesting things in my job, I
> > could as well let the nominal day job pay for my rent, and do
> > interesting things in my spare time.
>
> It depends on what your hobbies are, as some can be rather expensive.
> Or maybe I just try to do too many things at once...
>

Also, too much of what I love to do is bound up in my 9-5, and
vice-versa. I am not worth a salary high enough to support myself, my
dependents and my hobbies unless some of what I really love is part of
my 9-5. That said, much too much time/energy is wasted fighting the
warping of the things I really care about by most business
environments. And not many high-end software jobs today can be easily
restricted to 9-5.

 
> > > Don't bet on it. What changes between then and now is the raw power of
> > > the hardware. As it becomes more powerful it becomes more tractable to
> > > automate large segments of the work programmers currently do. Of
> >
> > Sure, now it's easier to make a GUI, by just painting it on the
> > screen. Program generators are not exactly new. Apart from wizards
> > like our very own James Rogers (and sci things like ATLAS,
> > automatically juggling source to optimize for a given architecture) we
> > don't see automatic programming hitting the streets any time soon. And
> > then people would still have to write specs in a formal language. Even
> > if you don't write a protocol stack explicitly, you still have to
> > codify its behaviour.
>

Of course we are still in the early days of hardware improvement and of
human/machine interaction. I believe the day will come when you don't
explain a complex design requirement to the machine at any deeper level
than you would to a skilled human. Getting there will require a lot of
work across multiple disciplines.
 
> I should mention that I am able to do what I did because the domain
> is severely constrained. While I am working on the generalized
> problem, it is useful to both validate design components in a
> real-world environment and fund general development by selling limited
> implementations of the general technology to solve hard problems that can
> be effectively attacked with limited but relatively "smart" software. It is
> also a rich space for coming across interesting problems that need to be
> solved, which keeps me happy. :^)
>
> For the application I mentioned previously, I don't need a formal spec
> per se. Rather, I have a big goal (maximize profits) that doesn't
> change, and a small set of sub goals created by suits that are subject
> to occasional change. The dataspace is large (typically around 25Gb),
> quite complex, and subject to some rather sudden and dynamic changes
> both in content and nature. Given basic code on how to navigate the
> dataspace, I let the system figure out the best way to achieve the
> stated goals and to take advantage of emerging patterns. In short, I
> have limited the code generation to working with the business problem
> dataspace rather than to the framework itself; it puts the flesh on
> the skeleton. A set of algorithms that observe the system's runtime
> behavior determine when code gets re-written. One could easily write a
> book on this topic (though I certainly won't). It really only deviates
> significantly from the general problem of intelligent code generation in
> one aspect, but the difference is an important one.
>
> The biggest problem with runtime code generators is debugging the
> resulting mess. However, it has allowed me to work on some
> of the many interesting problems of self-observation. Designing methods to
> resolve issues such as detecting complex and non-procedural
> infinite loops (e.g. infinite loops caused by how the data interacts with
> the code at runtime, without compile-time knowledge of what the data can
> look like) has been fun.
>

If the code generator is well designed and well tested, and builds run-time
checks into what it emits, debugging is not quite so onerous a problem,
is it? Some of the things you mention sound like good fun indeed.
 
> > Jeez. If you think C++ is an improvement upon C you really have a
> > strange mind.
>
> To my mind, useable C++ looks a lot like well-organized C. The only
> real improvement of C++ is that it formalized a syntax for things that
> good C programmers had been doing for a long time. For that reason,
> very few applications actually justify using C++ over C, if your C
> programmers are competent enough. And if they aren't competent with
> C, you certainly don't want them working in C++. :^)
>

Good C programmers do not generally encapsulate data structures and the
functions that manipulate them well, or at least not in a way that
enforces the encapsulation by anything but convention. Good C
programmers do not generally think through polymorphism/genericity. And
good C and C++ programmers generally believe a lot of mystical claptrap
about their ability to manage memory without a formal GC. And no, using
reference counting and "discipline" is NOT an acceptable solution.
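The classic failure of reference counting plus "discipline" is cyclic
garbage, and CPython itself is a nice witness: it pairs refcounting with a
tracing cycle collector for exactly this reason. A minimal demonstration:

```python
# Two objects that point at each other never hit refcount zero on their
# own.  Pure reference counting would leak them; the tracing collector
# (gc module) reclaims them.
import gc

class Node:
    def __init__(self):
        self.other = None

def make_cycle():
    a, b = Node(), Node()
    a.other, b.other = b, a
    # both locals go out of scope here, but the cycle keeps the pair
    # alive under reference counting alone

gc.disable()          # take refcounting on its own
gc.collect()          # start from a clean slate
make_cycle()
found = gc.collect()  # the tracing collector finds the orphaned cycle
gc.enable()
print(found > 0)      # True: these objects were unreachable garbage
```

No amount of programmer discipline at the call site fixes this; the cycle
is created by the data shape, not by sloppiness.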
 
> > I do not see anything new cropping up since
> > Lisp. Because I can't have Lisp machine in current technology (and am
> > too dumb/poor to afford a DSP cluster running Forth), I've settled on
> > Python/C in a OpenSource *nix environment (currently Linux).
>
> C/Python/Java on Unix is my environment of choice, both because I am
> extremely comfortable with them (C on Unix is like comfortable old
> shoes), and because I can use one of them well in every design space I
> am likely to come across.
>

I pick Python/C++ (for delimited tight things)/Java (mostly because of
the size of the Java-savvy population, and prejudice, although it has a
few worthwhile features)/Lisp/Scheme/Smalltalk. All else being equal I
will reach for Python or Lisp first when attempting to model a problem,
with Smalltalk running a close second. If I am coding some tight data
structure I will go to C/C++ as a sort of universal assembler.
 
> I have noticed that a lot of good programmers with broad platform
> experience tend to settle on the same tools with time. Despite coming
> from wildly different backgrounds, most of the programmers I know who
> have been working with computers for a long time seem to have very
> similar notions of what "optimal environments" actually are.
> Coincidence? I doubt it.

Most of the really good long-time programmers I know tend to bemoan that
Lisp and Smalltalk aren't used more often and haven't penetrated the
market more. :-)

>
> > > We are beginning to address problems of programming in the large but
> > > frankly many of the solutions are giant kludges that are severely
> > > over-hyped and over-sold. I have gotten quite disgruntled with this
> > > industry. We spend more time trying to lock up "intellectual property"
> >
> > Amen, verily, etc. etc.
>
> There is a lot of pressure (business and lawyerly) to turn
> everything into an intellectual property action. There are a lot of
> benefits, both material and immaterial, to doing so, *particularly*
> if your organization is poorly funded.
>

How is that? If you lock your software to "your organization" then that
software's survival and useability is limited to the context of "your
organization". But most organizations can fire you for any reason
whatsoever at any time, so the "your" is rather pointless. And most
organizations have many conflicting pressures that determine whether and
to what extent a piece of locally developed software will reach its
technological maturity or see the light of day at all. Those of us who
work at the system level (or at a more generic/abstract level) are
commonly working on things that crosscut any particular organization and
its politics, and are enabling of but orthogonal to the organization's
line of business (unless it is a specialized software house). Thus it is
even more unpalatable and grossly inefficient for our work to be deemed
the exclusive property of the organization.

> I actually started the patent process for a few different things
> (largely software technologies of the type mentioned above), but
> dropped it after deciding that I would probably have an easier time
> keeping the technology out from under the thumb of big business by
> *not* patenting the tech. That and I object personally to the
> software patent concept. Makes it a little harder to get funding
> though.
>

Agreed and YAY! Funding is an issue.

> Of course, Yudkowskian technologies have an *enormous* first-to-market
> advantage that is hard to ignore. :^)
>

> To me, OO attempts to enforce "organized" coding practices in syntax.
> Except for very limited cases, code re-use through inheritance is a
> nearly worthless feature in practice, and many times the headaches it
> can create offset any benefits. The biggest niceties of OO (IMO) are
> the typing models and interfacing conventions.
>

Code reuse through inheritance is the least important aspect of OO.
Encapsulation and polymorphism (genericity) are much more important.
Modeling and coding closer to the actual problem space is more
important. Granted, many people using OO don't understand that.
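To illustrate the distinction (a toy Python sketch, names mine): the
polymorphism that matters is that a caller depends only on the messages an
object answers, which requires no inheritance relationship at all. The
code-sharing that inheritance buys is incidental.

```python
# Two classes with no common base class, yet both model an "account"
# in the problem space because each answers balance().

class CashDrawer:
    def balance(self):
        return 200.0

class BrokerageAccount:
    def __init__(self, positions):
        self.positions = positions   # symbol -> market value
    def balance(self):
        return sum(self.positions.values())

def net_worth(accounts):
    # Generic over anything that answers balance(); the caller never
    # inspects the class or its ancestry.
    return sum(a.balance() for a in accounts)

print(net_worth([CashDrawer(), BrokerageAccount({"XYZ": 300.0})]))  # 500.0
```

The genericity lives in `net_worth`, not in any inheritance hierarchy.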

> Again, if you write really good procedural code, you are not likely to
> benefit much from OO. Perhaps it would be better just to teach people
> good code design.
>
> > > We have automated certain classes of GUIs but the GUI story is far from
> > > complete or adequate. Many projects are now being seriously perverted
> > > to use only a Web Browser as their GUI! It is as if we are stepping
> >
> > Well, a web browser is an ubiquitous, easy way to control a system,
> > and you can put it within few 100 Bytes of assembly. A remote GUI is
> > not something too bad.
>
> The web browser is great for ubiquity, but has too many limitations at
> both the low- and high-level to make a good GUI. Usable for simple
> things (such as remote control), but very difficult to work with for
> applications with complex interfaces (imagine running an application
> like GIMP or Photoshop inside a web browser). However, there are a lot
> of advantages to having a network aware UI, a capability where MS
> Windows really blows goats.
>
> I would really like to see the development of a capable, clean,
> platform independent, network aware GUI system. Maybe something that
> falls somewhere in between X (too big and crufty) and the Web.
> Current widely available "cross-platform" GUI implementations such as
> the Java AWT are an abomination -- all of the downside and none of
> the upside IMHO.
>

I would like to see a "cloud of objects" world where it is none of the
programmer's business that there is not one gigantic machine
environment. As long as we conflate programs and software architecture
with network and OS topologies and vagaries, we will imho be in the dark
ages of computing.
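In miniature (a Python sketch, all names illustrative), the idea is that a
caller holds a handle and sends messages, and whether the target lives
in-process or across the network is the handle's problem, never the
caller's:

```python
# Location transparency in a toy: the caller's code is identical
# whether the reference is local or a stand-in for a network stub.

class LocalRef:
    def __init__(self, obj):
        self._obj = obj
    def send(self, message, *args):
        return getattr(self._obj, message)(*args)

class RemoteRef:
    # Simulates a remote stub; a real one would marshal over the wire.
    def __init__(self, registry, oid):
        self._registry, self._oid = registry, oid
    def send(self, message, *args):
        target = self._registry[self._oid]   # "lookup at the far end"
        return getattr(target, message)(*args)

class Counter:
    def __init__(self):
        self.n = 0
    def bump(self):
        self.n += 1
        return self.n

registry = {"counter-1": Counter()}
for ref in (LocalRef(Counter()), RemoteRef(registry, "counter-1")):
    print(ref.send("bump"))   # caller code is the same either way
```

Everything CORBA-ish and its descendants get wrong is in how much of the
topology still leaks through that `send`.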

 
> > We do have an interest in peer to peer information sharing (though not
> > yet collaboration) rising recently. Parallel applications (based on
> > PVM/MPI message passing libraries), including cluster file systems are
> > fairly widespread in science, and soon commerce. High-availability and
> > high-performance clusters for commerce are the hottest topics right
> > now. Things are not hopeless.
>
> I think it will be five years or more before clusters become
> really ubiquitous. One of the biggest difficulties is that most
> programmers have no clue how to develop "cluster-aware" software, and
> aren't likely to get one any time soon. When you consider that most
> programmers still can't figure out how to design even moderately
> complex locking schemes for multithreaded applications on a
> single-processor system or to ensure something resembling
> transactional integrity of complex shared data, both of which have
> been around for years, I doubt that cluster-aware applications will be
> quick to penetrate most market spaces. It is easier just to throw
> bigger hardware at the problem.
>

Clusters are a tool for network topology and dependable service
delivery. Again, they should not be visible (except at the lowest
levels) to the programming/application space. If they are then
something is very wrong.

One of the things that makes multi-threading and transactional integrity
hard is that most language environments give only a few blunt tools for
really addressing concurrency issues, and most of the tools given have
gross impedance mismatches with the language. We do not to this day
have good long-transaction models or tools.
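One of the standard workarounds for the missing long-transaction model is
optimistic versioning: read a version stamp, compute for as long as you
like holding no locks, and commit only if nobody else committed in the
meantime. A minimal Python sketch (my names; a real system needs a retry
policy and durable state):

```python
# Optimistic long transaction over a single cell: the lock is held only
# for the instant of read or commit, never across the computation.
import threading

class VersionedCell:
    def __init__(self, value):
        self._lock = threading.Lock()
        self.value, self.version = value, 0

    def read(self):
        with self._lock:
            return self.value, self.version

    def commit(self, new_value, read_version):
        with self._lock:
            if self.version != read_version:
                return False   # someone committed under us; caller retries
            self.value, self.version = new_value, self.version + 1
            return True

cell = VersionedCell(100)
v, ver = cell.read()
# ... arbitrarily long computation happens here, no locks held ...
print(cell.commit(v + 1, ver))   # True: first commit wins
print(cell.commit(v + 2, ver))   # False: stale read version, must retry
```

Note how much of the burden (retry, merge, livelock) is simply pushed back
onto the programmer, which is exactly the impedance mismatch complaint.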

 
> The people that "get" cluster design now because they find it
> fascinating are going to be the majority of people who "get" cluster
> design later. Most programmers will implement clustering badly,
> which will essentially negate the benefits and slow adoption in many
> organizations.
>

Again, most programmers should have no need to "get" cluster design.
 
> A majority of programmers quickly lose competence when they move past
> the "one user, one CPU, one process, one machine" design space. In my
> experience, as more of these variables become >1, the greater
> difficulty many programmers have producing efficient code. Only a
> relative handful of working software engineers will produce efficient
> code design when you have multi-user, multi-process, SMP, and
> clustering as explicit design optimization considerations. Too many
> variables for some minds to wrap around I guess.
>

In a well-designed system there is little need for most application
programmers, and even many system programmers, to worry about all of these
issues much of the time. A large part of successful systems programming
is keeping these issues out of the face of application programmers.
The human mind does in fact have a limited ability to handle multiple
concerns at once, which is one reason it is good to restrict concerns to
appropriate levels of a design/application/implementation space.

 
> > > good tools for finding the right components and simulating their
> > > interaction. Much of our code base is still language and OS dependent
> > > and not componentized at all. Most of code is still application
> > > stovepipes with little/no reuse or reuseability. In short, almost no
> > > automation or next-level engineering applied to our own work. It had
> > > better not continue like this.
>
> Part of the problem with components is that the components as designed
> today are only data aware in the grossest declarative sense. As a
> result, components are only useful for very narrowly defined problems
> that rarely translate into something that can be re-used for all but
> the most narrow domains. The brittleness and rigidity of current
> component design methodologies is part of the reason that "cut, paste,
> and modify" is a viable software development methodology.
>

Cut, paste, and modify sucks big-time and produces krap systems. Components
are more difficult to write, and can't be written well in broken toy
languages. Doing components well also requires some advances in our
ability to model aspects of the semantics that we cannot model well
today. I do not understand your comment about data awareness re
components. From outside the component its data awareness is irrelevant
to the user, as it presents a message/capability/functional interface
only. Would you want more than that?
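By a "functional interface only" I mean something like this Python toy
(names mine): two implementations with entirely different internal data
representations, and client code that can never tell them apart because it
only ever sends messages.

```python
# Two interchangeable components: dense vs. sparse internals, identical
# message contract (add / count).

class DenseHistogram:
    def __init__(self, size):
        self._counts = [0] * size          # array representation
    def add(self, i):
        self._counts[i] += 1
    def count(self, i):
        return self._counts[i]

class SparseHistogram:
    def __init__(self, size):
        self._counts = {}                  # dict representation
    def add(self, i):
        self._counts[i] = self._counts.get(i, 0) + 1
    def count(self, i):
        return self._counts.get(i, 0)

def tally(component, events):
    # The client never learns which representation it is talking to.
    for e in events:
        component.add(e)
    return component.count(3)

print(tally(DenseHistogram(10), [3, 3, 7]))   # 2
print(tally(SparseHistogram(10), [3, 3, 7]))  # 2
```

Whatever "data awareness" the component has is its own affair.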
 
> But components don't really matter; I have strong doubts as to
> whether components are a correct solution in the long-term anyway.
> Components exist more to solve a human weakness in software
> implementation than to solve any particular weakness intrinsic
> to software design itself. I can't imagine why a smart machine would
> use components to write software.
>

That is like saying that standardized reuseable parts in hardware exist
only to solve particular weaknesses intrinsic to hardware design. A
smart machine would use components in order not to reinvent/reimplement
the wheel every time it is needed, much as reasonably intelligent human
software designers also attempt to do. It is not possible to
think/design at increasingly complex levels without reusing levels
of parts (components) already sufficiently mastered, generalized and
packaged.

- samantha

 
> -James Rogers
> jamesr@best.com



This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:38:49 MDT