Brian Kernighan Interview

Posted 4 Sep 2000 at 22:27 UTC by advogato

And the interviews just keep on coming. Slashdot today posted Mihai Budiu's interview with Brian Kernighan, famous for his role in both C and Unix. There are a lot of interesting issues raised; it's well worth reading. There is also some relevant discussion in the /. article.


Slashdot comments, posted 5 Sep 2000 at 12:10 UTC by mettw » (Observer)

Posting a message on Slashdot rarely seems useful to me, since the threads only last an hour, so I'll put my comments up here instead.

On their C vs. C++ debate, C++ seems to me to have the stick of both Java and C without any carrot. That is, you get the bloat of Java and the pain in the arse debugging of C. I really can't understand why people would use C++. To me the more sensible way to do OOP is to write your code in Java and then redo the bottlenecks in C through JNI.

On the Slashdot thread about Scheme, I think people need to know more about the theoretical foundations of a language before criticising it. The criticisms of Scheme on /. are mostly about its unusual syntax and its strict adherence to function name precedence. Someone even described Scheme as being too unlike how people think! Declarative languages are unlike how programmers are trained to think, but they are more like how we think about a problem specification. For someone like myself, who has studied a lot of mathematics, the suggestion that any imperative language could be more like normal thought than functional programming is absurd.

Maybe my head is just still showing the scars of having studied complex analysis, but the mathematical syntactic sugar in Haskell is the closest I've come to a cyber-orgasm.
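To give one throwaway example of the kind of syntactic sugar I mean: list comprehensions read almost like set-builder notation straight off the page.

> -- the Pythagorean triples with c <= 20, written much as you would write the set itself
> triples :: [(Int, Int, Int)]
> triples = [ (a, b, c) | c <- [1..20], b <- [1..c], a <- [1..b], a*a + b*b == c*c ]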

Interesting Slashdot comment, posted 5 Sep 2000 at 17:15 UTC by Raphael » (Master)

Among the comments posted on Slashdot, I think that this one from "devphil" is the most interesting:

I agree in a big way. Ever notice how the really important people, when asked about their favorite language/editor/IDE/window-manager/etc, usually answer along the lines of, "Oh, I'm comfortable with about all of them; I can switch languages as needed; I use more than one of <whatever>."

I was re-reading a 1995 issue of IEEE-CS "Computer" magazine, and one of the articles was pointing out that bigots and advocates of a single method or a single approach or a single tool (e.g., language, editor, what have you) were invariably beginners and novices with little experience or useful education. Skilled programmers and designers know how to be flexible.

We should always try to use the right tool for the right task, instead of trying to do everything with a single tool. I admit that I have sometimes used a screwdriver as a can opener, but I will never claim that it is the best tool for that task. We should not be afraid of learning new tools and new techniques if they could help us solve a problem faster.

For example, I am an experienced programmer in C (and Perl, Java, C++, Scheme, ...), having written several thousand lines of (working) code in the last 10 years. But last week I decided to learn PHP and SQL for implementing a prototype of a dynamic web site. I think that it was the right decision. The time that I invested in learning PHP and setting up MySQL paid off because it allowed me to create a prototype quickly. I could have done the same in a CGI script or module using C or Perl, but PHP is easier to use for fast prototyping of small web applications that have to look pretty and use a database or access some external resources. I wouldn't use it for anything more complex, though, because you cannot easily separate the application logic from the presentation logic. But this was the right tool for that job.

That being said, I think that C is a very good compromise for most tasks...

Component-oriented approach as an alternative to C, posted 6 Sep 2000 at 08:30 UTC by ReCoN » (Journeyer)

How about a more component-oriented approach as an alternative to C (= modular) and Objective-C, C++ or Java (= object-oriented)? Component-oriented languages are much more flexible in the way they send messages, and they also tend to have a much coarser granularity than pure object-oriented languages. (If objects are like electronic parts, then components are like PCI cards containing those electronic parts.)

Read:

http://www.ics.uci.edu/~franz/publications/C0009%20StandAloneMsg.pdf

Is component-based design a property of languages?, posted 6 Sep 2000 at 13:46 UTC by eskimoses » (Journeyer)

ReCoN, it's my impression that component-based design isn't an intrinsic property of a language, but rather a matter of how one chooses to use a language. Both COM and CORBA are relatively language-independent. QNX is another excellent example of a component-based system: the entire operating system is constructed of components (mostly written in C) that interact using the microkernel's messaging facility. (IIRC, the only other function of the microkernel is to provide memory and process control.)

Object-oriented languages are particularly well suited to a component architecture, since there is a very intuitive mapping of components to objects. The difference is that component architectures have typically been object-based, i.e., eschewing subclassing. Both CORBA and COM promote inheritance of interfaces; I cannot recall off the top of my head where each of them stands with respect to the object-oriented notion of inheritance of behavior. In the end, though, just about any language will support a component architecture fairly well.

Haskell?, posted 6 Sep 2000 at 18:41 UTC by sab39 » (Master)

mettw: After reading your comment I took a look at Haskell. My first thought was "Hey, it's ML"... my second was "Hey, it's even better than ML", and my third was "Hey, it still doesn't seem to support inheritance of types". To me that was always one of the greatest shortcomings of ML (and the reason why fn x y => x + y is untypeable in ML, because x and y could be integers or floats and there's no "number" supertype).

I'm also curious about what kinds of tasks are well suited to functional languages. While I would never suggest that functional languages can't be used for "real work", my own "real job", for example, consists of writing web applications. When I try to express a web application as a function, all I can think of is that each page is a function of type request -> string, where "request" is some kind of composite type. Even then, I can't imagine how I'd actually write a real-world web app that way.
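Concretely, the best I can picture is something like this (an untested sketch; the Request fields are just placeholders, not any real library's API):

> data Request = Request
>   { reqPath  :: String
>   , reqQuery :: [(String, String)]
>   }
>
> helloPage :: Request -> String
> helloPage req =
>   case lookup "name" (reqQuery req) of
>     Just name -> "<html><body>Hello, " ++ name ++ "!</body></html>"
>     Nothing   -> "<html><body>Hello, whoever you are.</body></html>"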

Maybe the place for functional languages is embedded inside imperative languages, or maybe some kind of hybrid language should be developed that supports both models. Functional languages are great for expressing complex calculations, while imperative languages are better at expressing the execution of tasks. So maybe we need to be able to write the sequence of execution in an imperative language, but express the calculations themselves functionally.

Re: Haskell?, posted 6 Sep 2000 at 22:29 UTC by mettw » (Observer)

sab39:
With Haskell you can write

> fn :: Num a => a -> a -> a
> fn x y = x + y

which says that fn works on any type that is an instance of the `Num' class (any type that supports the `+' operator).
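For example (just to spell out the polymorphism), the same fn can then be used at Int and at Double:

> main :: IO ()
> main = do
>   print (fn (2 :: Int) 3)          -- 5
>   print (fn (1.5 :: Double) 2.5)   -- 4.0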

On what tasks are best for functional languages: functional languages can compute anything that imperative languages can, although I do admit that they require you to unlearn a lot of the methods used in imperative languages. I think that learning hurdle is the main reason declarative languages (excluding spreadsheets, of course) haven't gained wider acceptance.

For a relatively short programme you would probably do best to stick with what you are most familiar with and write it in an imperative language. For data mining, natural language processing and expert systems you would be best off with a logic language. But for large, complicated projects, writing the first version in a functional language and then rewriting the bottlenecks in an imperative language is the most efficient way to go.

More Haskell thoughts..., posted 7 Sep 2000 at 14:44 UTC by sab39 » (Master)

mettw: That's cool, but can you implement your own types that are "subtypes"? Could I implement a type "Complex" that was a subtype of "Num"? How about taking a constructed type (e.g. data Point = Pt Int Int) and extending it into a ColoredPoint which has a "color" as well as x and y?
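For concreteness, what I'm wondering is whether something along these lines would be legal (I haven't actually tried it, so treat it as a sketch, and it leaves the ColoredPoint question aside):

> data Complex = Complex Double Double deriving (Eq, Show)
>
> instance Num Complex where
>   (Complex a b) + (Complex c d) = Complex (a + c) (b + d)
>   (Complex a b) * (Complex c d) = Complex (a*c - b*d) (a*d + b*c)
>   negate (Complex a b)          = Complex (negate a) (negate b)
>   abs (Complex a b)             = Complex (sqrt (a*a + b*b)) 0
>   signum (Complex 0 0)          = Complex 0 0
>   signum (Complex a b)          = let m = sqrt (a*a + b*b) in Complex (a/m) (b/m)
>   fromInteger n                 = Complex (fromInteger n) 0
>
> -- if so, then mettw's fn should work at this type too:
> -- fn (Complex 1 2) (Complex 3 4)  ==> Complex 4.0 6.0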

Regarding what programs should be implemented in functional languages, I do have some familiarity with them (albeit only two courses on ML in university). My question isn't about whether you can use them for big programs, but whether there are problem domains where they really don't fit. My example of server-side web applications is one - the only way it can be expressed functionally is request -> response, and defining the request object to be usefully pattern-matchable would be quite hard. It could be done, but it doesn't seem like a "natural fit", even thinking mathematically about it.

Then there are things like running SQL queries and performing database updates. It can be done, but it seems somewhat opposed to the whole principle of functional programming if your function evaluation has a side effect.

Similarly, GUI programming doesn't work without at least some degree of actions having side effects and stored state. (I've wondered in the past how emacs handles this, using a functional language to express what amounts to a GUI...)

I guess it depends on whether the complexity of your application is "calculation-bound" or "process-bound" (in the same way that running programs can be CPU- or IO-bound). If your application is about calculating something, you should probably use a functional language (DeCSS in Haskell?). On the other hand, if it's about carrying out a process, you should probably use something imperative.

Since most applications are both (eg taking DeCSS and actually using it to play a DVD, which is a process), I wonder if some hybrid approach really would be better.

Functional languages, posted 7 Sep 2000 at 20:47 UTC by inf » (Journeyer)

sab39: In ML you can create subtypes by creating a signature for a structure of operators over that type and then doing most of your work in functors. I've written a few medium-sized programs like this and find it nice from the "build everything up from scratch" point of view. If you look at the SML Basis library, it has signatures for INTEGER and REAL, although they still keep the two separated. This is so that you can have Int32, Int64, and varying precisions of Real. With a little work you could make up a NUM signature that fits both of them (+, -, > and so on; REAL has many functions such as floor and ceil that are useless on ints, so you need to decide how to deal with those).

I've always found it interesting to create, say, a numeric computation package in the following way. You conjure up a Real arithmetic structure, usually by borrowing the default, then from that build a Complex arithmetic functor and a Polygon functor, which you use for determining intersections. Add in any other working types you need, and so on.

Your core module will be a functor which takes in the Real arithmetic module, the Complex arithmetic module and the Polygon module, stating that Complex.real_t = Real.t. This lets you enforce the fact that you want your Complex module to be using the same reals you're using for your other arithmetic. You can also then change your entire underlying representation of Reals so long as it conforms to the signature, and have both implementations floating around (very similar to passing around subtyped objects). Subtyping comes from adhering to the same signature rather than from being a descendant of an object or interface. This only works at the module level, though. In reality, you're working by passing in a type and all the functions needed to manipulate it.

So in that way you can create a function which raises numbers to arbitrary powers, so long as you also pass in the function to do the multiplication.
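In Haskell syntax (only because that's what has been quoted upthread; the ML version would take the operations in through a functor instead), the idea looks roughly like this:

> -- raise x to a power, given the multiplication and the unit explicitly
> power :: (a -> a -> a) -> a -> a -> Int -> a
> power _   one _ 0 = one
> power mul one x n = mul x (power mul one x (n - 1))
>
> -- power (*) 1 (3 :: Int) 4   ==> 81
> -- power (++) "" "ab" 3       ==> "ababab"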

Side effects are still an ongoing philosophical issue for me. ML, at least, has all the usual imperative features; whether this is a good or bad thing I haven't decided. Haskell's monads seem like an interesting solution, although I'm not sure how they deal with, say, handling a Unix signal. A full discussion of monads and side-effect-free programming would be very interesting indeed.

The usual ML program seems to consist of a largely functional computation section and a small imperative 'driver', usually something that opens the input file and passes the descriptor to the functional section.
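The Haskell-with-monads version of that split would look something like this (a sketch, with readFile standing in for whatever the real input source is):

> -- pure, side-effect-free core
> wordTotal :: String -> Int
> wordTotal = length . words
>
> -- small imperative driver; all the I/O is confined to this IO action
> main :: IO ()
> main = do
>   contents <- readFile "input.txt"
>   print (wordTotal contents)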

The usual domain split between functional and imperative programming, at least philosophically, falls at machine control. If you don't need instruction-by-instruction control of the CPU, control over the organization of memory, and so on, functional languages fit the domain. Prewritten libraries and well-known programming techniques, though, are another matter.

trying new tools, posted 13 Sep 2000 at 19:39 UTC by brlewis » (Journeyer)

Raphael, for building a dynamic web site with SQL, I think you'd find my BRL provides for even faster prototyping than PHP. There are other features PHP has that BRL doesn't, but BRL was designed especially with database-related web apps in mind.
