Interviews: Alexander Stepanov and Daniel E. Rose Answer Your Questions

samzenpus (5) writes "Alexander Stepanov is an award-winning programmer who designed the C++ Standard Template Library. Daniel E. Rose is a programmer, research scientist, and the Chief Scientist for Search at A9.com. In addition to working together, the duo have recently written a new book titled From Mathematics to Generic Programming. Earlier this month you had a chance to ask the pair about their book, their work, or programming in general. Below you'll find the answers to those questions."
Early Soviet Computing?
by eldavojohn

Alexander Stepanov, I have never had a chance to ask someone as qualified as you about this topic. I grew up on the opposite side of the Iron Curtain and have constantly wondered if (surely there must have been) alternative computing solutions were developed in the USSR prior to Elbrus and SPARC. So my question is whether or not you know of any hardware or instruction set alternatives that died on the vine or were never mass-fabricated in Soviet times? I don't expect you to reveal some super-advanced or future-predicting instruction set, but it has always disturbed me that these things aren't documented somewhere -- as you likely know, failures can provide more fruit than successes. Failing that, could you offer us any tales of early computing that only seem to circulate in Russian circles?

If you can suggest references (preferably in English) I would be most appreciative. I know of only one book and it seems to be a singular point of view.


Alex: I'm not sure I have any unique knowledge, but can only describe my own experience. The first computer I used was a mainframe called M-20 (or one of its derivatives). My first programming exam was pass/fail, but I had to take it several times before I passed. I had no idea how to write code -- I didn't attend lectures or participate in labs, and thought I could just study the book and take the test. But programming isn't like that; the only way to learn is to do it.

Then in my first job, I participated in the design of a minicomputer called TA-100, used to control hydroelectric power stations. I was one of the key designers of the real-time operating system, a contributor to the instruction set, and the lead designer of the programming tools (debugger, assembler, linker, etc.) -- all written in assembly language. The fact that I started at a very low level -- gates and instructions -- continues to be useful to my work even today. About that time, the Soviet Union started copying American designs, but I was very fortunate to be able to design something original from scratch. The head designer of the TA-100, Aleksandr Gurevich, was a great mentor to me. Two of my senior colleagues, Ilya Neistadt and Natalya Davydovskaya, also spent a lot of time trying to teach me all the things I didn't know.

Despite my personal experience (many details of which I've forgotten), I'm not actually an expert on the history of Soviet computing. But there is a good website containing many articles in English about early Soviet computers. One radically different approach was the "Setun" ternary computer. Unfortunately, there is no treatment of Soviet-era computing at the level of detail and insight found in the second ("Computer Zoo") volume of Computer Architecture by Blaauw and Brooks, which provides an exhaustive treatment of Western designs. In general, computer history is an important field and requires great dedication. My friend Paul McJones does a fabulous job on the history of programming languages and other software artifacts. See, for example, his history of FORTRAN site. (He also created sites for Lisp and ALGOL.) Sadly, there is no comparable effort on Soviet computing.



Successor to C++
by Anonymous Coward

I remember you wrote that STL has nothing to do with C++: it was a framework for generic programming, and C++ was chosen for its first implementation because it was less deficient for that purpose than other commercial programming languages. That implies you'd like to develop a programming language from scratch. Is that so? If so, how is it going?

Alex: In my first experiments in building a component architecture, I tried to design a language from scratch, called Tecton, with Deepak Kapur and Dave Musser. Tecton was my second system (i.e., it suffered from The Mythical Man-Month's "Second System Effect"). It was an extremely high-level language indeed, and had concepts, but was unusable for anything practical. Then I implemented a version of the library in Scheme (together with Aaron Kerschenbaum and Dave Musser), and then another version in Ada (with Dave Musser). I was hired by Bell Labs to join their C++ library team in 1987, which was my first exposure to C++. My C and C++ mentor was Andy Koenig, who helped me understand the overall logic of the language. Unfortunately, at that time C++ was not ready for STL.

I returned to the library work in 1993 at HP Labs, together with Meng Lee. C++ had just gotten templates, and we were able to create a large generic library. At Andy Koenig's suggestion, we submitted a version of it for inclusion in the C++ language standard. This became STL.

After STL was accepted into the standard in 1994, I started thinking about designing a minimal programming language that would allow even more intimate access to hardware than C/C++ and also provide support for concepts and generic programming. I was hoping that somebody would fund such an activity. I interviewed with several companies, proposing such a design, but there was no interest. A senior VP at Microsoft told me: "We are not interested in innovating in the direction you suggest." They were "innovating" in the direction of C#, trying to displace Java. The situation now is not any better. There might be some interest eventually, but it will happen after my retirement: I am no longer in the game.



STL
by serviscope_minor

I'm a huge fan of the STL, and I think the design has stood the test of time amazingly well. That said, you now have a bunch of hindsight. What would you do differently, knowing what you know now? Also, if you were doing it today, using today's languages, how do you think it would differ?

Alex: STL is the result of many compromises. There was a tension between my research goals for generic programming and getting something approved by the different constituents of the standards committee with diverse technical, business, and personal agendas. Such compromises are inevitable in real life.

Having said that, here are some of the things I would have preferred were different:

As we discuss in From Mathematics to Generic Programming, my original name for iterators was "coordinates" (or more specifically, "linear coordinates"). The standards committee people told me that there was already a name for this concept, "iterator," so I should use that term. They were wrong -- they were confusing my coordinates with heavyweight stateful iterators found in languages such as CLU and Alphard. This unfortunate terminology still often leads to misunderstanding about the concept of iterator in generic programming. Furthermore, as far back as 1987 I knew that linear coordinates (i.e. iterators) were only one kind of coordinate structures, data types that allow one to navigate through data structures. There are coordinate structures that deal with multidimensional arrays, trees, graphs, etc. (See, for example, chapters 7 and 8 of Elements of Programming.)
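
For illustration, here is a loose sketch of a non-linear coordinate: a coordinate into a binary tree that is navigated with left and right successors rather than with ++. The names below are illustrative only, in the spirit of the coordinate structures described above.

    #include <cstddef>

    // A binary tree node and a coordinate over it. The coordinate knows how
    // to move through the structure; it is not the structure itself.
    template <typename T>
    struct TreeNode {
        T value;
        TreeNode* left = nullptr;
        TreeNode* right = nullptr;
    };

    template <typename T>
    struct TreeCoordinate {
        TreeNode<T>* node;
        bool empty() const { return node == nullptr; }
        TreeCoordinate left_successor() const { return {node->left}; }
        TreeCoordinate right_successor() const { return {node->right}; }
        T& source() const { return node->value; }
    };

    // A generic algorithm written against the coordinate, not the container:
    // count the nodes reachable from c.
    template <typename C>
    std::size_t weight(C c) {
        if (c.empty()) return 0;
        return 1 + weight(c.left_successor()) + weight(c.right_successor());
    }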

Also, there are many different types of containers, and STL provided only a rudimentary classification. Moreover, containers with ownership semantics constitute only one way of dealing with data structures. There are others. A properly designed library would be based on a far larger set of data structures than what I could include in STL. There are also simple mistakes in algorithmic interfaces. Partition should place non-satisfying elements before satisfying elements. Copy_n should return a pair. I should have included algorithms dealing with integer concepts. I should have resisted the pressure to include allocators.
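
For example, a minimal sketch of the pair-returning copy_n interface mentioned above (std::copy_n in today's standard library returns only the output iterator):

    #include <utility>

    // Copy n elements and return both advanced iterators, so the caller can
    // continue from exactly where the copy stopped on either sequence.
    template <typename InputIt, typename Size, typename OutputIt>
    std::pair<InputIt, OutputIt> copy_n_pair(InputIt first, Size n, OutputIt result) {
        while (n > 0) {
            *result = *first;
            ++first;
            ++result;
            --n;
        }
        return {first, result};
    }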

It would make perfect sense to redesign STL from scratch when they put concepts into C++. I would recommend that a person who decides to do it carefully study both Elements of Programming and From Mathematics to Generic Programming. Both of these books expand on these issues.



Re:STL
by Pseudonym

Related question: C++ was originally conceived as "C + Simula", but something that is interesting about the STL is how non-object-oriented it is, in particular using no inheritance. If we were designing a new "better C" today, one that you'd be happy to implement an STL-like system in, knowing what we know now, would we bother with Simula-style objects at all?

Alex: I am still convinced that Simula/C++/Java style inheritance is unsound. I do believe, however, that there is sometimes a need for run-time dispatch. But run-time dispatch should be done as run-time concept dispatch. Imagine, say, writing code in terms of a pointer to a forward iterator. One should be able to obtain affiliated types at run time. Eventually languages will unify object-orientation and generic programming, but nobody seems to be working on it now.
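
As a rough sketch of what run-time concept dispatch could look like in today's C++ (a type-erased forward iterator over int; the names are illustrative, and this is only one possible encoding of the idea):

    #include <memory>

    // The concrete iterator type is erased behind a small virtual interface,
    // so what is dispatched at run time is the concept (forward iterator),
    // not a class hierarchy baked into the containers.
    struct AnyForwardIterator {
        struct Interface {
            virtual ~Interface() = default;
            virtual int& deref() = 0;
            virtual void next() = 0;
            virtual bool equals(const Interface&) const = 0;
        };

        template <typename It>
        struct Model final : Interface {
            It it;
            explicit Model(It i) : it(i) {}
            int& deref() override { return *it; }
            void next() override { ++it; }
            bool equals(const Interface& other) const override {
                auto p = dynamic_cast<const Model*>(&other);
                return p != nullptr && p->it == it;
            }
        };

        std::unique_ptr<Interface> impl;

        template <typename It>
        AnyForwardIterator(It it) : impl(std::make_unique<Model<It>>(it)) {}

        int& operator*() { return impl->deref(); }
        AnyForwardIterator& operator++() { impl->next(); return *this; }
        bool operator!=(const AnyForwardIterator& other) const {
            return !impl->equals(*other.impl);
        }
    };

    // One compiled function works over any conforming range, whether the
    // underlying iterator came from a vector, a list, or something else.
    int sum(AnyForwardIterator& first, const AnyForwardIterator& last) {
        int total = 0;
        for (; first != last; ++first) total += *first;
        return total;
    }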

Dan: Bjarne Stroustrup describes C++ as a multi-paradigm language. The features that support object-oriented programming and the features that support generic programming are, for the most part, independent. That doesn't mean that both sets of features are not useful. Could Alex have designed STL for a language that doesn't have object-oriented features? Sure. But as a programmer, I'm happy that both sets of features are available. Just because object-oriented features are not needed to implement STL doesn't mean they provide no value in the language.

Alex: C++ has evolved over many years, and many of its features (inheritance, templates, exceptions, namespaces, etc.) were incorporated based on other work. As a result, they don't always work well together, and even when they do, it's in a baroque way. Now that we as a community have many years of experience with these features, we could design a minimal language from scratch that incorporates these features in a more concise and elegant way.



Hardware evolution
by jonkalb

The STL is about three decades old. In that time, we've seen both OS and hardware evolution. What is the impact of these changes on how the STL should be used? How would the STL be different if it were implemented targeting modern environments?

Alex: STL is "only" two decades old, but yes, there have been important changes during that period that would lead to some different decisions. STL was actually designed on a Leading Edge PC with no cache and 640K memory. (Our group at HP Labs didn't have enough money in the budget for an HP PC. When HP CEO Lew Platt came to visit me, HP Labs' director rushed in beforehand to hide the Leading Edge PC.)

One of the biggest changes since then has been the growth of caches. Cache misses are very costly, so locality of reference is much more important now. Node-based data structures, which have low locality of reference, make much less sense. If I were designing STL today, I would have a different set of containers. For example, an in-memory B*-tree is a far better choice than a red-black tree for implementing an associative container.
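
To illustrate the locality point (this is not a B*-tree, just a minimal sketch contrasting a node-based container with a contiguous one):

    #include <algorithm>
    #include <map>
    #include <utility>
    #include <vector>

    // Node-based associative container: every step of the lookup chases a
    // pointer to a separately allocated node, so almost every step can miss cache.
    bool contains_map(const std::map<int, int>& m, int key) {
        return m.find(key) != m.end();
    }

    // Flat, contiguous alternative: binary search over a sorted vector of pairs.
    bool contains_flat(const std::vector<std::pair<int, int>>& v, int key) {
        auto it = std::lower_bound(v.begin(), v.end(), key,
            [](const std::pair<int, int>& p, int k) { return p.first < k; });
        return it != v.end() && it->first == key;
    }

A B*-tree goes further by packing many keys into each cache-line-sized node, so a lookup touches far fewer cache lines than in a red-black tree, where every comparison is a pointer chase to a new node.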

Another change is the increase in pipeline depth and support for unaligned reads. Today it is cheaper to read extra data rather than to have a branch.
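
A small, hedged illustration of that trade-off, assuming a little-endian machine and buffers padded so that a full 8 bytes can always be read safely:

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    // Compare the first len bytes (len <= 8) of two keys that are guaranteed
    // to be readable for a full 8 bytes. One unaligned load per key plus a
    // mask replaces a byte-by-byte loop full of data-dependent branches.
    bool prefix_equal(const char* a, const char* b, std::size_t len) {
        std::uint64_t wa, wb;
        std::memcpy(&wa, a, 8);  // deliberately reads "extra" bytes past len
        std::memcpy(&wb, b, 8);
        std::uint64_t mask = (len == 8) ? ~0ull : ((1ull << (8 * len)) - 1);
        return ((wa ^ wb) & mask) == 0;
    }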

Most processors today also support SIMD instructions. Libraries should take advantage of them whenever they can.
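
A minimal sketch, assuming an x86 target with SSE (the same idea applies to NEON, AVX, or a portable SIMD wrapper):

    #include <immintrin.h>
    #include <cstddef>

    // Sum a float array four elements at a time; the scalar loop handles
    // the last (n % 4) elements.
    float simd_sum(const float* data, std::size_t n) {
        __m128 acc = _mm_setzero_ps();
        std::size_t i = 0;
        for (; i + 4 <= n; i += 4)
            acc = _mm_add_ps(acc, _mm_loadu_ps(data + i));
        float lanes[4];
        _mm_storeu_ps(lanes, acc);
        float sum = lanes[0] + lanes[1] + lanes[2] + lanes[3];
        for (; i < n; ++i) sum += data[i];
        return sum;
    }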

Modern applications such as search engines and databases also use lots of collections of very small data items that can be stored compactly without an extra level of indirection by using variable-sized encodings. It is essential that the libraries provide support for these variable-size entities. Dan and I, together with colleagues at A9, worked on this. Sadly enough, we were not able to finish our work, although you can see some relevant code snippets using variable-sized types and a new data structure called "tape" here.
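
The linked snippets are the authoritative version of that work; as a generic, hedged sketch of the underlying idea, here is a LEB128-style variable-length integer encoding, in which small values occupy a single byte:

    #include <cstdint>
    #include <vector>

    // Encode: emit 7 payload bits per byte, with the high bit set on every
    // byte except the last.
    void encode_varint(std::uint64_t value, std::vector<std::uint8_t>& out) {
        while (value >= 0x80) {
            out.push_back(static_cast<std::uint8_t>(value) | 0x80);
            value >>= 7;
        }
        out.push_back(static_cast<std::uint8_t>(value));
    }

    // Decode: accumulate 7 bits per byte until a byte with the high bit clear.
    // (No bounds or overflow checking; this is only a sketch.)
    std::uint64_t decode_varint(const std::uint8_t*& p) {
        std::uint64_t value = 0;
        int shift = 0;
        while (*p & 0x80) {
            value |= static_cast<std::uint64_t>(*p++ & 0x7F) << shift;
            shift += 7;
        }
        value |= static_cast<std::uint64_t>(*p++) << shift;
        return value;
    }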



Search seemingly getting worse over time
by TWX

This is more for Daniel Rose, but to what do you attribute the seeming decline in the quality of search results? I used Digital's AltaVista search engine when it was fairly new; it seemed revolutionary and seemed to provide me with exactly what I wanted. Over time that declined, and AltaVista as it was ceased to be, and Google initially also seemed to provide me with exactly what I wanted. Now it seems like I have to put a whole lot of thought into tricking Google into performing a somewhat boolean-style search for me, and normal boolean expressions themselves no longer seem to work.

Is this the result of attempting to dumb down the interface for tailored results, or is it something else, or something more insidious? Obviously the amount of content on the Internet is growing, but the computing power to process all of it is growing too, so I would expect it wouldn't be getting this much worse, this quickly.


Dan: This is a huge question, which could be the subject of a whole book by itself. But the short answer is that there are several factors that have made the search experience worse (or at least seem worse). Here are a few:

1. Size of the problem. In the early days of AltaVista, there were around 100,000 web sites. Today there are around a billion. Assuming the number of web pages has grown proportionally to the number of sites, that's a factor of 10^4. Search ranking algorithms have actually been improved a lot -- they might even be 10x better than they were in 1995. But they haven't improved by 10^4x.

2. Complexity of the problem. Originally, web search engines dealt with static HTML pages. Now they are expected to work with many different types of documents in different formats, with users having a much wider variety of search goals. At A9, which provides the search engine for Amazon.com, we optimized the system specifically for product search. Web search engines have to work for all kinds of search.

3. Adversarial relationship between sites and engines. In the early days of web search engines, most web sites were purely informational, and many were run by nonprofit organizations like universities. Even when for-profit companies put up web sites, most were offered as an informational service to their customers -- it was a cost to the company, not a source of revenue. Obviously, all that has changed. Now it's in the interest of most web site providers to drive traffic to their sites. To do that they want to rank higher in search results -- often even for queries where their content is not relevant. So there is basically an arms race between search companies, which want to accurately rank results by relevance, and so-called search engine optimizers, who want their clients' pages to rank higher regardless of relevance. This leads to all kinds of spam.

4. Business model. The invention of search advertising by Overture, and its adoption by Google and others, meant that it was more profitable to show an ad than an organic search result. I know of specific instances where search engine companies chose not to deploy relevance improvements, because that would reduce revenue (more people would click on the results, and fewer on the ads). Even if a company tries to have a separation between their relevance and advertising teams, it is very hard to serve two masters.

Regarding the use of boolean expressions, there is evidence both from cognitive psychology and from information retrieval research that most people don't understand boolean logic, and that this misunderstanding leads to worse results. So I claim that Google's decision to stop interpreting certain words and symbols as boolean operators is good user-centered design, not "dumbing down" -- but I wish they still provided an advanced search option for those who want it.

If you're interested in learning more about search user issues in particular, here's a lecture I gave at UC Berkeley about 10 years ago on that topic.



Re:Search seemingly getting worse over time
by MouseTheLuckyDog

I was wondering something similar. Oftentimes, recent news tends to overshadow search results.

Let me give a practical example. Grand Jury proceedings have undergone serious reform since the '70s. In some states a target can demand to appear before the Grand Jury. In some states a No Bill precludes the State from re-presenting the case. In others there must be clear new evidence before a case can be re-presented. I know one state has a three-strikes rule for GJ proceedings (sorry, I don't remember which).

The day before the Michael Brown shooting, a search on "Grand Jury Missouri" would have found several articles on the specific laws governing Grand Jury proceedings in Missouri. The day the DA announced he would present the case to a Grand Jury, the same search got hundreds of news stories about Michael Brown and Grand Jury proceedings, but it became impossible to find those same scholarly articles about the peculiarities of Missouri Grand Jury proceedings. Not even the relevant statutes from the state website. What can be done to mitigate this effect?


Dan: This is a good illustration of what a complex problem search is. There are two issues here: First, should search results change in response to news events? I think so; in your example, it's almost certainly what the majority of users are looking for.

Second, how can the search engine make sure that *other* relevant results are also findable? One way is to make sure that the results address a diversity of user intents. When I was at AltaVista and Yahoo, we did some research on how to identify different user intents and how to make sure the results were not dominated by just one. The query "grand jury Missouri" has at least two obvious intents: "give me information about the grand jury system in Missouri" and "tell me what is going on with the particular grand jury investigating Michael Brown's death."

There are techniques that can do this diversification, and some search engines use some of them. But perhaps a better approach is to recognize that information-seeking is a process, not a magic oracle. Search engines should be designed to facilitate a kind of dialogue with the user. At AltaVista, we had a feature called "Prisma" that would show 12 related queries right below the search box -- not just queries that shared substrings with what the user had typed (like autocomplete), but queries that were about the range of different topics discussed in the actual pages. So for the query "grand jury Missouri," one suggestion might be "Michael Brown news" and another might be "grand jury statute Missouri".

My advice to intelligent search users is to imagine what terminology would be used in a hypothetical good result on your intended topic, and use those words. If you want to find information about the legal basis of the grand jury system in Missouri, don't just type "grand jury Missouri," type "grand jury statute Missouri." When I do that query today on Google, I get a 2009 publication from the state of Missouri explaining the grand jury process, and the text of the actual statute, both on the first page of results.



What's your time like?
by mlheur

How much of your time do you dedicate to computing vs. doing other things? What are your other hobbies, or is the work you do also your play time?

Alex: Over the course of my life I have gradually narrowed my focus to spending time on the few items that, to me, are the essential examples of their category. These are things that have stood the test of time; I re-read the books I have already read; I listen to music I have listened to many times before; and so on. Yes, there is a chance that I will miss a new Mozart or Euclid, but it is a chance I am willing to take. Also, like the Pythagoreans of old, I view these as part of a unity: music reflects mathematics, literature is connected with history, etc. My work and my play and my life are inseparable. This unity is also reflected in From Mathematics to Generic Programming, which blends math, programming, history, and sometimes philosophy and art. Here are some of my favorites:

Literature: Greek and Roman classics: Homer, Plato, Ovid, Seneca; the Bible; "modern" novels from Swift and Sterne to Dickens and Anthony Trollope. Math and science classics: Euclid, Euler, Gauss, Poincaré. I still use printed books, not e-books.

Music: Bach, Mozart, Beethoven, Schubert, Wagner, and Mahler. I tend to listen to many different interpretations of the same piece. I do not use MP3s or streaming music, but CDs and, recently, SACDs.

Movies and TV: Chaplin, the Marx Brothers, Kurosawa, Satyajit Ray, Kenneth Clark's Civilisation, Peter Brook's Mahabharata, Brideshead Revisited, the Royal Shakespeare Company's production of Nicholas Nickleby, Maigret with Bruno Cremer. I am a Blu-ray enthusiast. I do not use Netflix or Amazon Instant Video.

I love dogs, especially Welsh corgis; I spend 1-2 hours a day walking my dog Maxwell. I no longer eat meat or milk. I have been very happily married for 45 years; my wife Helen is my closest friend. We are practicing Roman Catholics, go to church on Sundays and holidays of obligation and try to keep the commandments. Our political views are in line with Pope Francis: we believe in having an economically just society.

Dan: Ironically, during much of my career as a researcher and engineering manager, I had fairly little time for programming. But now that I am not working full time, one of the things I've been doing for fun is programming -- learning iOS development and writing a musical iPhone app. I also enjoy playing very basic guitar and piano, reading, and lately, writing fiction. I try to alternate between reading nonfiction and fiction. The last books I read were The Swerve: How the World Became Modern by Stephen Greenblatt (about how the rediscovery of an ancient Roman poem helped spur the Renaissance) and Dave Eggers' novel The Circle (a cautionary tale about social networking and privacy, which should be required reading for everyone who works at Facebook, Google, and Apple).



Re:ack-nak
by blue trane

When will programming evolve to use subject-predicate syntax, rather than function-argument? Function-argument goes back (at least) to Frege, and his prejudices against subject-predicate syntax (which dominates natural languages). But isn't changePassword(a,b) more ambiguous than "change the password from a to b"? Don't we get an "information gain" effect from using a syntax we are familiar with outside of programming? When you first come to a function-argument command such as (in Oz, which is used in the Paradigms of Computer Programming MOOC) {Push S X}, there is maximum entropy as to whether S is pushed, or pushed onto. "Push X onto S" has no entropy; you know immediately, from the syntax alone, what is pushed onto what.

Dan: I think you need to decouple your argument about entropy from your argument about subject-predicate syntax. IIRC, stack-based languages like Forth and PostScript (and old HP scientific calculators) had completely unambiguous syntax. You either push something onto the stack or perform an operation on the required number of arguments at the top of the stack. But these are not subject-predicate syntax languages. So there is more than one way to have what you call no-entropy syntax. Another way to avoid ambiguity is to require that argument names be part of the function name, as Smalltalk and Objective-C do. Then instead of your function call being changePassword(a, b), it's [foo changePasswordFrom:a to:b] (where foo is the object getting the message).
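
In C++, one way to get a similarly self-describing call site is to pass a small aggregate and use C++20 designated initializers (a sketch only; changePassword and its field names come from the question, not from a real API):

    #include <string>

    struct PasswordChange {
        std::string from;
        std::string to;
    };

    // The call site names its arguments, so their order carries no ambiguity.
    void change_password(const PasswordChange& change) {
        // ... validate change.from, store change.to ...
    }

    int main() {
        change_password({.from = "hunter2", .to = "correct horse battery staple"});
    }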

Separate from the entropy issue, is there a cognitive benefit from having programming languages use syntax familiar from natural languages? Perhaps, but which natural language's syntax will you use? Many languages (e.g. Japanese) use a subject-object-verb syntax, while English uses subject-verb-object. Romance languages use SOV some of the time (e.g. with pronouns) and SVO the rest of the time. Talk about ambiguous argument order!

Furthermore, natural languages have evolved to convey all kinds of nuances and deliberate ambiguities that make it hard to specify anything precisely. As a small example, the English meaning of "and" and "or" is quite different from their Boolean interpretation. (If the waiter says that breakfast comes with juice or coffee, getting both is not an option.)

The business application language COBOL (the most popular language of the 1970s) was supposed to have "English-like" syntax, with expressions like "add 1 to x." I'm skeptical that this syntax made programming any easier, but it did lead to this old joke: "Did you hear about the new version of COBOL? It's called ADD 1 TO COBOL."

My opinion is that we will always have different languages with different styles of syntax, to meet the needs of different communities of programmers.



Why is Generic Programming often second class?
by Anonymous Coward

We see many programming languages with at least some support for generics, but usually as a second-class citizen, often added as an afterthought in later releases, and subordinate to some other programming paradigm. Java is primarily OO, with generics added later. C# is also primarily OO, though with generic support. It took C++ several iterations to get generics, and C++ is "multi-paradigm". Go doesn't have generics, and it doesn't seem like it will for a while.

It seems to me like generic programming is sufficiently powerful as a paradigm to not need other paradigms like OO in the same language. In fact, in many ways, OO, which ties together data and algorithms, seems antithetical to generic programming. So, do you see a possibility of a programming language whose primary paradigm is generic programming? Why do language designers not get generics into the first releases of their languages, even now, when the issues would seem to be well known? What would such a language look like?


Alex: To design a language for generic programming, one needs to learn to program generically. One has to write lots of code before things become clear. In Appendix B of Elements of Programming, Sean Parent and Bjarne Stroustrup outlined a minimal language needed for programming. The appendix is about 8 pages long. To make it real, it probably needs to grow by a factor of 3. So, something like 25 pages should be sufficient. I am too old to do it, but I wish that someone would try.

A more difficult problem is not to design the language: C++, after all, contains most of the things needed. The problem is to teach programmers to think abstractly. And that is a very difficult task. I do not know a single university where one could even learn the preliminaries: understanding the machine, and understanding abstract mathematics. Our new book, From Mathematics to Generic Programming, is an attempt to sketch what is needed. Hopefully some school will try to teach both assembly level programming and abstract algebra.

An even harder problem is to convince the software industry to build software out of carefully designed components. What I see, however, is movement in the opposite direction. Hand-crafted, one-off, undisciplined code is impossible to replicate. Adobe did a fabulous job specifying PostScript; that allowed Peter Deutsch to single-handedly produce Ghostscript. Now Adobe is not going to specify Photoshop's behavior. Let the Gimp guys try to replicate it. While Linus Torvalds was able to replicate Unix from the carefully written System V interface definitions, no one could replicate Windows: being nonstandard creates barriers to entry. There are grave economic reasons making any progress unlikely while undisciplined programmers generate huge amounts of capital. It's analogous to the programmer whose terrible spaghetti code gives him job security, since no one else can understand it.

Dan: The idea that object-oriented programming and generic programming are competing paradigms is, in my opinion, mistaken. They are really orthogonal approaches. As we discuss in our book, generic programming is really an attitude. This attitude is useful whether you are using an object-oriented approach or not.

I would love to have a real-world, efficient, popular language that supports generic programming -- including concepts, in particular -- as first-class features. But I see no reason why this language shouldn't also support OOP.
  • by ThePhilips ( 752041 ) on Monday January 19, 2015 @02:39PM (#48851481)

    I think generic programming is destined to be a second-class citizen. IMO the human side of the problem is not the biggest one.

    The biggest one is that there is no compiler which can untangle such code and generate something efficient out of it.

    This is basically the attic where all the "everything is an object" languages have gotten stuck. On one side, development is made easier in many places because everything is an object. On the other side, performance and memory consumption degrade, to the point where developers end up counting and optimizing object usage of every code line and function. Because there are no compilers capable of deducing from the human-readable code what the hell the developer actually wanted to accomplish.

    That brings me to the other, bigger issue. Most concepts and paradigms, including generic programming, that occupy the minds of researchers do NOT help solve the ultimate problem of computer programming: efficiently communicating with the CPU, efficiently telling it what needs to be done.

    If the developer is a writer, the CPU is a reader, and assembler is the spoken language, then most simple programs, at 10-50K instructions, are close to novel size. Think of it: to a CPU, the usual "Hello World" program is close in size to a novel! And if it's in an interpreted language, the CPU might end up reading a whole frigging novel just to deduce that all the developer wanted was to print "Hello World" on screen.

    • by Beached ( 52204 )

      Generic programming in many languages uses virtual dispatch, since everything is an Object, in contrast to C++ templates, which create a class/function tailored to the type requested. So, it isn't all bad.

      With C++'s templates, the compiler often produces better code for generic vs. non-generic (C++ std::sort vs. C qsort). Generic is faster because the compiler can reason better about the data type being sorted and doesn't have to use a type-erased comparison function (C's qsort uses an int(void*, void*) type functi

  • by maestroX ( 1061960 ) on Monday January 19, 2015 @02:44PM (#48851509)
    the cat got my tongue
  • I had forgotten about the "Second System Effect" until Alex mentioned it above, but Wikipedia's description of it [wikipedia.org] as

    the tendency of small, elegant, and successful systems to have elephantine, feature-laden monstrosities as their successors due to inflated expectations.

    sounds like an apt description of C++ and the STL. And I say that as a long-time fan of C++ (less so of the STL...)

    • STL is fine.. but boost? ouch.
    • Some C++ quotes: [cat-v.org]

      "C++ is the best example of second-system effect since OS/360." – Henry Spencer
      "All new features added to C++ are intended to fix previously new features added to C++" – David Jameson
      • Meh, heard them all before.

        The thing is, the C versus C++ ones now take on an extra ironic twist. All the top-performing C compilers, LLVM, GCC, ICC (and even VS), are now written in C++. I wonder, if it is so superior to C, why do people who know both languages inside out prefer writing C++ to C?

        Also, despite being so "crap" and "full of bloat", it's used to write much of the world's highest performing code---along with FORTRAN.

        • The thing is, the C versus C++ ones now take on an extra ironic twist. All the top-performing C compilers, LLVM, GCC, ICC (and even VS), are now written in C++. I wonder, if it is so superior to C, why do people who know both languages inside out prefer writing C++ to C?

          I also prefer C++ to C. C++ has obvious advantages in terms of easing the creation of complex systems such as the compilers you mention. Conversely, the primary advantage of C over C++, in my experience, is that it helps prevent the creation of overly complex systems. I once worked with someone who created layer upon layer of classes in C++, where each class did almost nothing. In fact, it was sometimes hard to determine where the stuff that actually did something was in some of those classes. Although o

          • It's just that C++ gives you more of this sort of rope to hang yourself with.

            I've heard this argument, but if you were active in programming between the mid-'80s and late '90s, when C was really king of the hill, you might have noticed that there were legions upon legions of C programmers industriously manufacturing rope as fast as they could.

            See, e.g. Motif :)

        • All the top-performing C compilers, LLVM, GCC, ICC (and even VS), are now written in C++. I wonder, if it is so superior to C, why do people who know both languages inside out prefer writing C++ to C?

          Because they are also C++ compilers. Best to eat your own dog food.

        • It is perfectly consistent to say that C++ is an "elephantine, feature-laden monstrosit[y]" compared to C, but still prefer C++ to C. Nobody is disputing that it is feature-laden, after all.

          Stepanov said it himself: Simula-style inheritance is unsound.

    • How so? The STL was the first such library of its kind.

      Anyway, you are a fan of C++, but not the STL. To me this is surprising: I love C++ because it is one of the few languages that allow things with the elegance of the STL.

      So in your ideal C++ library, what would containers look like? Can you describe a variable-sized array, and an associative container? What about algorithms? How would the interface to nth_element, sort and heap differ?

      • Perhaps I should have left STL out of that. In any case, although I like C++, it feels overly complicated, bloated, and ugly in many ways. I have that same feeling with the STL. For example, I much prefer MFC's "CString" class to std::string. To be fair, though, maybe C++ and the STL are the best that can be done within the constraints they were designed for: notably, that C++ must be backwards-compatible with C, and the STL must run on C++.

        If you imagine C++ without templates (as I first learned it), it wou

        • C++ had its problems in development. The worst problem is that it had to keep near-perfect C compatibility, but there are others. I haven't seen anybody defending std::string as a good implementation. I have seen C++ experts saying that it works OK, and that you should normally use it.

          Unlike C++ per se, the STL parts are conceptually pretty clean. You have containers, iterators, and algorithms, which are fairly simple and separable concepts. They're also usually lightweight in practice. You can of

  • by phantomfive ( 622387 ) on Tuesday January 20, 2015 @01:10AM (#48854287)

    I am still convinced that Simula/C++/Java style inheritance is unsound.

    Does anyone understand what he meant by this? I wish he had expounded or expanded on the topic.
