some technical questions

Posted 9 Aug 2000 at 23:50 UTC by graydon

just trying to liven things up. getting sick of top level articles about the socioeconomics of copyleft.

does digital watermarking work in any reliable way when you resample a watermarked signal? if so, how?

how does linux's new capabilities system differ from a "serious" one like EROS, and if it's significantly different, why?

does anyone know why the hell this claims to work?

has anyone had any experience with free radio FM microbroadcasting?

what are the barriers to doing active noise cancellation in software using stock sound cards and today's free software?

have there been any serious attempts at issue-by-issue national direct democracy, and if so what were the results like?

has anyone here built a large site using a common lisp httpd rather than, say, some zope or php thing? did it work out well?

what happened to nat's grope project?

are there sufficient species of edible perennials or permacultural plants that we can shift to less destructive agriculture? and if so, what is holding us back?

are there many (any) programming languages based on natural languages which are not english?


Human/Machine interaction, posted 10 Aug 2000 at 00:09 UTC by jmelesky » (Apprentice)

does anyone know why the hell this claims to work?

Well, it claims to work because of the evidence that it happens. To quote from one of their pages:

The observed effects are usually quite small, of the order of a few parts in ten thousand on average, but they are statistically repeatable and compound to highly significant deviations from chance expectations.

I imagine what you're really interested in is why it does work, which is (at this point) as much a theological question as anything else. It's complicated by the fact that such things can actually have an effect in reverse time. That is, you can theoretically affect an outcome that happened yesterday by thinking about it strongly enough today.

It ties into all sorts of basic scientific assumptions, like the nature of causality, the subject/object dichotomy, etc.

Sorry to address the least technical of your technical questions...

Not much relation with natural languages, posted 10 Aug 2000 at 00:15 UTC by Zaitcev » (Master)

Couple of factoids for you:

I saw programs written in "Russian C", with cpp hacked to accept "#OPR GLOBALY.S" containing a bunch of "#define bukv char", "#define dlin long", etc. Pretty long programs too.

The SAM system S-300 (NATO codename SA-10, I think...) is controlled by a computer called "5Ya-26", with the assembly done entirely in Russian. Some of the instruction mnemonics there have interesting connotations. ;-)

German programming languages, posted 10 Aug 2000 at 00:22 UTC by jhermann » (Master)

are there many (any) programming languages based on natural languages which are not english?

M I K R O N I A: a German programming language with a micro-compiler

And I guess Microsoft's attempt at localizing PostScript commands in some version of Word does not count (but had interesting effects when sent to the printer). :)

cl-http, pirate radio, posted 10 Aug 2000 at 00:25 UTC by dan » (Master)

has anyone here built a large site using a common lisp httpd rather than, say, some zope or php thing? did it work out well?

www.telent.net runs on Araneida, which is at least a common lisp httpd - but not the one you have a link to there. It seems to work, but I'm unconvinced as to how effective it is when hit with slow connections and/or multiple connections, so I put an apache in front of it to shield it from the big bad internet. When I get around to writing the date parsing stuff I can make it respond appropriately to conditional GETs too, and then the proxy can cache properly.
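
For what it's worth, the conditional GET logic itself is small once the date parsing exists. Here's a minimal sketch of it, in Python rather than Common Lisp just to keep it short (the function name is invented; header names are HTTP/1.1's; last_modified is assumed to be a timezone-aware UTC datetime):

    from email.utils import parsedate_to_datetime, format_datetime

    def maybe_not_modified(request_headers, last_modified, body):
        """Answer a GET, honouring If-Modified-Since so a front proxy can cache."""
        ims = request_headers.get("If-Modified-Since")
        if ims:
            try:
                since = parsedate_to_datetime(ims)
            except (TypeError, ValueError):
                since = None                       # garbled date: ignore the header
            if since is not None and last_modified <= since:
                # Unchanged: tell the proxy its cached copy is still good.
                return 304, {"Last-Modified": format_datetime(last_modified, usegmt=True)}, b""
        return 200, {"Last-Modified": format_datetime(last_modified, usegmt=True)}, body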

has anyone had any experience with free radio FM microbroadcasting?

Cripes! 50 watts is "low power"? <yorkshireman number="4"> When I were a lad we had a tenth of that, and we were grateful for it </yorkshireman>. Ahem. So I hear. I have never ever done anything illegal, and certainly I would never be posting about it on Advogato if I had.

Re: Human/Machine interaction, posted 10 Aug 2000 at 00:56 UTC by stefan » (Master)

It's complicated by the fact that such things can actually have an effect in reverse time. That is, you can theoretically affect an outcome that happened yesterday by thinking about it strongly enough today.

not so hasty: there are some serious problems with this. Assuming it were true, i.e. we were indeed able to modify yesterday's happenings with today's thoughts (however small the effect), isn't this per se completely incompatible with the very basics of the 'scientific approach', including what we call an 'experiment'?
The problem seems to be a vicious circle: to be 'provable' by any modern definition of the term, something has to have some very basic properties, such as being observable, repeatable, etc. Think of a typical experiment: you set up a controlled environment, including all the measuring devices, then you start the experiment. Now in your case above: you choose an effect, wait a couple of days, and then you prepare the environment to match the effect you observed.

No, I'm not trying to mock any attempt to find violations of causality. But you have to find a coherent approach to attack the problem; don't try to fit a square peg into a round hole.

As for its relation to theology: there can't be any. Theology is, by definition, a domain disjoint from science. There have been a couple of attempts to prove or disprove god. People still don't understand that anything provable is by definition objective, so if god ends up being proven, it isn't the same god Christian theology is speaking about any more.

Direct Democracy, posted 10 Aug 2000 at 01:15 UTC by Talin » (Journeyer)

First, you should read "The Outcasts of Heaven Belt" by Joan Vinge. One of the societies portrayed in the book is the Demarchy, a nightmarish dystopia where citizens vote on individual issues moment-by-moment. In this society, there is no reasoned discourse, no contemplation or reflection on the issues - whatever the mob's passion is at that moment, that's the law of the land. The leaders are little more than glitzy tabloid salesmen, pandering to the whims of the moment.

Alexis de Tocqueville, in his classic Democracy in America, points out a similar phenomenon. At that time (shortly before the civil war I think), the U.S. Senate was elected by the state legislatures, rather than by the populace. His comment was that when you went into the Senate, it was populated with famous writers, generals, historians; while the House, which was elected by the people, was filled with vulgar nobodies, opportunists, that sort of thing.

I highly recommend de Tocqueville; I really think of it as basic political science for hackers. Many of the problems we face on the 'net today (such as the tyranny of the majority in moderated discussion groups) were first identified by de Tocqueville. (Although clearly we know better about many things today - his argument about American Indians not being farmers was way off base.)

I think that the ancient Greeks may also have experimented with this, although a lot of their experiments ended in failure.

Note that the U.S. wasn't the first attempt at a federated republic, although all of the earlier attempts also failed as far as I know.

The lesson that I take from all this: Creating a stable political system is hard, and no-one really knows how to do it yet.

(Sorry if this isn't technical enough for you.)

Plants and Watermarks, posted 10 Aug 2000 at 01:17 UTC by darkewolf » (Journeyer)

does digital watermarking work in any reliable way when you resample a watermarked signal. if so, how?

The answer to this is generally no. Most digital watermarks are kept within the file itself (i.e. in the low-order bits of an image), although it is also possible to mark an image so that certain colour indexes (i.e. colour 2345, which might be blue or red or purple at the time) make up the watermark.

The reason this breaks under resampling is that low-order bits are often lost (or may vanish on file-type conversion), and particular colour indexes will be lost between conversions/sampling.
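
For the curious, the naive scheme and its fragility both fit in a few lines. A sketch (Python/NumPy, treating an array of 8-bit samples as the cover; all names made up):

    import numpy as np

    def embed_lsb(samples, bits):
        """Hide one watermark bit in the LSB of each of the first len(bits) samples."""
        marked = samples.copy()
        marked[:len(bits)] = (marked[:len(bits)] & 0xFE) | bits
        return marked

    def extract_lsb(samples, count):
        return samples[:count] & 1

    rng = np.random.default_rng(0)
    cover = rng.integers(0, 256, 1000, dtype=np.uint8)   # stand-in for 8-bit samples
    mark = rng.integers(0, 2, 64, dtype=np.uint8)

    marked = embed_lsb(cover, mark)
    print((extract_lsb(marked, 64) == mark).all())       # True: survives a straight copy

    resampled = marked[::2]                              # crude 2:1 decimation
    print((extract_lsb(resampled, 64) == mark).mean())   # ~0.5: back to coin-flipping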

are there sufficient species of edible perennials or permacultural plants that we can shift to less destructive agriculture? and if so, what is holding us back?

Yes there are. There are hundreds of plant types we could grow in a permaculture system that would not only be less destructive, but would start repairing the damage we have done.

Why don't we use them? Industry marketing. To use these foodstuffs properly we would have to have non-genetically-modified versions (which exist). The plant industry does not want us to use them, because it can't make a sustained profit off them. Plants produce seeds, and seeds can be given to your friends (yes, open source exists in the food industry).

The only other reason we don't use sustainable food culture is that culturally we expect certain food types and expect to be provided with food from shops. Personally, my household grows most of our food, but it's a lot of effort and takes a fair bit of space (we have 8 members in the household).

Re: Human/Machine interaction, posted 10 Aug 2000 at 04:20 UTC by jmelesky » (Apprentice)

stefan: The point that i was trying to make when using the word "theological" was that the results are entering realms that traditional science is incapable of dealing with, since traditional science relies on things like the subject/object dichotomy and the ever-forward direction of cause-effect reactions.

Assuming it was true, i.e. we were indeed able to modify yesterdays happenings by todays thought (however small), isn't this per se completely incompatible with the very basics of the 'scientific approach', inclusively what we call an 'experiment' ?

Yes, it is incompatible. Which is what makes it so fascinating (to me). The notion of science being used to find fundamental shortcomings in science is certainly not without irony.

you choose an effect, wait a couple of days, then you prepare the environment to match the effect you observed.

Well, in the experiments that i read about (from Colgate, i think, though it's been a few years), they were a bit better about it than that. The experiments were based around a mechanical device which would generate (unattended) an even distribution of ones and zeroes. They would ask a subject to influence the outcome in a given direction ("Subject A, please make the machine make ones."), and observe the results. When they actually came to the point where they were considering reverse causality (or whatever you want to call it), they would run the machine for the appointed amount of time, sealing the results without viewing them. Then, a day later, "Subject A, please make the machine make ones yesterday." Only once that process was complete would they actually view the results (very Schrodingerian) and determine whether influence was exerted.

When i read:

These anomalies can be demonstrated with the operators located up to thousands of miles from the laboratory, exerting their efforts hours before or after the actual operation of the devices. (ibid)
i assumed that the Princeton research had encountered similar phenomena. Perhaps i'm reading too much into that. Or perhaps my belief that they may have encountered those phenomena is actually somehow increasing the probability that they did. :-)

Russian BASIC, posted 10 Aug 2000 at 06:33 UTC by strlen » (Journeyer)

There are other Russian programming languages besides the one Zaitcev mentioned. The Iskra, a C-64-like personal computer, for example, ran a BASIC that was not only controlled in Russian but also used the Cyrillic alphabet. I have a 1989 book on computer gaming, somewhere on my bookshelf, in Russian. It lists a lot of code in the language.

Princeton Engineering Anomalies Research, posted 10 Aug 2000 at 07:06 UTC by rillian » (Master)

How does this work? Generally, it doesn't. Their "small but real" effects are misapplications of statistics, or wishfully low acceptance thresholds. Mulder's poster. To be fair, I didn't read their publications, but I didn't see a wealth of peer-reviewed papers (there have been some), and I've read books by similar outfits in the past.

If it does work as claimed, it's almost certainly through new physics. Certainly the smart and famous Roger Penrose is thoroughly convinced that consciousness (or more properly, the part of human consciousness that identifies mathematical truth) can't be explained in terms of deterministic physics. (And he's also been roundly criticized for it.)

Other chinks suggest themselves. There's the anomalous acceleration of the Pioneer space probes. The persistent rumours of a fifth force. If you like extrapolation by analogy, the anomalous precession of Mercury's perihelion was a major factor pointing to problems with Newtonian mechanics and one of the first experimental verifications of general relativity. I've never heard a convincing explanation of what happens when you look at Schrödinger's cat. More concretely, there's the gap between quantum mechanics and classical gravity, and everyone's general unhappiness with the empirical nature of the standard model. There's even been hope from recent research that quantum gravity may be something we can practically experiment with. (q.v. the Sept. 2000 Scientific American)

Many of us hope for new physics. But whether whatever comes next will produce souls, psionics, time travel or ansibles is anyone's guess.

q.v. (pedantic), posted 10 Aug 2000 at 08:08 UTC by schoen » (Master)

"q.v." ("quod vide") means "which see" (not "for which see") -- i.e. "which you should read". If you just want to tell someone to read something, you can use plain "v." ("vide"), which means "see". So you can say, e.g., "V. the article in Scientific American". An alternative is "cf." ("confer"), "compare".

If you can make "q.v." stand for "quoque vide" (see also), you'd be OK, but I don't think that's standard.

Of course, I'm the sort of person who complains when people say "Author: id." (because "idem" has grammatical gender of neuter, so I think it should be applied only to works, not authors) -- I would have everyone write "Author: isd." or "Author: ead." ("isdem", "eadem").

Never mind when people use "e.g." for "i.e." or "i.e." for "q.v." -- let alone the dreaded "ect.".

Um, I did have some comments on these actual questions, but I left them in my laptop, so I'll have to try to post them tomorrow.

French LOGO, posted 10 Aug 2000 at 09:02 UTC by hadess » (Master)

The Logo programming language was translated to French in the mid-'80s to run on the Thomson TO7 computers (the French education system bought shitloads of them; they were like a ZX81 with 128 KB of RAM and Microsoft Basic; my first computer, sniff).

An implementation for Gnome exists.

Capabilities in Linux, posted 10 Aug 2000 at 09:55 UTC by brother » (Journeyer)

The two kinds of capabilities try to solve two different problems. The capability system in Linux is built on the former POSIX.1e draft, which was withdrawn (and is therefore available (somewhere)).

POSIX capabilities try to solve the problem of an almighty God (aka root) and the concept of being either almighty or a luser. Instead of letting your httpd run as an almighty user, we give it only the capability to bind ports below 1024 (CAP_NET_BIND_SERVICE, I think). Then the only harm from an httpd security hole would be that your httpd installation could be (partly) destroyed and the evil cracker could block all your ports.

Implementing POSIX capabilities is ``just'' a matter of replacing all the if-root-p checks in the kernel with checks like ``does this process have capability 10''.
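
A toy model of that transformation (Python, with all the names invented for illustration; the real kernel check is of course C):

    CAP_NET_BIND_SERVICE = 10                  # pretend capability number

    class Process:
        def __init__(self, uid, caps=frozenset()):
            self.uid, self.caps = uid, caps

    def may_bind_low_port_old(proc):
        return proc.uid == 0                   # the old if-root-p check

    def may_bind_low_port_new(proc):
        return CAP_NET_BIND_SERVICE in proc.caps

    httpd = Process(uid=99, caps=frozenset({CAP_NET_BIND_SERVICE}))
    assert not may_bind_low_port_old(httpd)
    assert may_bind_low_port_new(httpd)        # can bind port 80, and nothing more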

Real capabilities try to solve a more general problem: access to resources of any kind. Instead of controlling access to files at the level of (u)ser, (g)roup and (o)thers, you can control exactly which users should have access to your files. This can be implemented by some sort of unlimited, dynamic and user-controlled groups.

But it gets even better. You can control it down to the level of processes. You can give your users the right to read /etc/passwd through login(1) and passwd(1), but not with any other program.

Capabilities plus persistence is cool. But I don't know anything about it. I would like to play around with it, but EROS seems to be at the kernel-hackers-only level.

When I was a child I played with a Danish programming language in school, and in the first Danish translation of K&R's C book the programming examples were translated too (never seen a compiler for that, though).

PEAR and new physics, posted 10 Aug 2000 at 10:08 UTC by mettw » (Observer)

The way you get the results PEAR has is to have a standard deviation of 7.075 and then claim significance for a score of +0.028 from chance. It also helps if you ignore the fact that when the subjects were trying to have no effect they scored +0.015 from random.
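
To see how a tiny effect turns into an impressive-looking significance score, here's a rough illustration (Python; the numbers are made up for the example, not PEAR's actual data):

    from math import sqrt, erfc

    # A binary RNG with a made-up bias of 2 parts in 10,000.
    for n in (1_000_000, 1_000_000_000):        # pooled trial counts
        k = 0.5002 * n                          # observed "hits"
        z = (k - n / 2) / sqrt(n / 4)           # binomial z-score
        p = erfc(z / sqrt(2))                   # two-sided tail probability
        print(f"n={n}: z={z:.2f}, p={p:.2g}")

    # n=1,000,000:     z=0.40, nothing to see
    # n=1,000,000,000: z=12.65, "highly significant" -- the same tiny effect

Pool enough trials and any systematic bias, however tiny, becomes "significant"; the question is whether the bias is psychokinesis or the equipment and procedure.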

On gravitational anomalies: It should be noted that Einstein did not come up with a single equation when he created general relativity; he came up with an infinite set of equations and picked the simplest one. There is no reason why the simplest one should match reality, and Einstein himself moved to the next simplest one when it was pointed out that his model would mean that the universe is expanding (his cosmological constant mistake). The anomalies with the simplest equation can all be accounted for by simply choosing a slightly more complicated equation.

On Schrödinger's cat: I wouldn't pay too much attention to Paul Davies et al.; just because you are a good physicist does not mean that you are a good philosopher. Trust me on that one: I did a degree in physics and have heard a lot of very intelligent men spout a complete load of gibberish when wandering into philosophy. Whenever you hear people talk about the philosophy behind quantum mechanics you need to keep a few things in mind:

  • With quantum mechanics, making an observation imparts an enormous amount of energy to what you are observing. So you are never observing just the electron; you always observe the interaction of the electron with the photon used to observe it. Actually this is true of all experiments, even classical ones, but with classical mechanics the interaction is so small relative to what is being observed that it can be ignored; not so with quantum mechanics.
  • The question `What happens when we are not observing it?' is inherently unanswerable.
  • When people claim that an electron takes every path to the screen they're talking out of their arses. There are three mathematically equivalent formulations of quantum mechanics and each one says something different about what happens between observations. You need to always keep in mind that these images dreamt up by physicists are just mental tools used to help them deal with the complicated mathematics. They are all inherently unverifiable (hence the existence of three different stories) and therefore do not qualify as science. Treat these stories as a tool to help you work out what the result is going to be - not as gospel about what is happening between observations.
  • Quantum particles are not both waves and particles at the same time; they are something else entirely that has some properties of a wave and some properties of a particle. That our minds have trouble getting a grip on such entities does not mean that they can't exist.

Grope..., posted 10 Aug 2000 at 16:06 UTC by cbbrowne » (Master)

The home page seems long dead...

I remember the ALS talk; I'm the one that suggested the idea of using Tabu Search, which regrettably got misspelled by everyone...

A recent query got the following negative answer which suggests Nat, as a HelixCode principal, may have gotten too busy.

The other reason I expect that Grope hasn't gone too far is that it is highly dependent on hacking some pretty serious instrumentation into GCC, which has, over the last couple of years, seen some pretty massive rearchitecting. That just plain makes it tough.

Why PEAR continues to "work", posted 10 Aug 2000 at 17:49 UTC by danwang » (Master)

PEAR "works" because the guy running the lab really used to be dean of the engineering departement at Princeton. Somehow got a big grant during the time that cold war fears made people think this kind of "research" was important cause the Russians were doing it. Most importantly because "academic freedom" is being used as an excuse at Princeton to not "make waves" and do something sensible with crackpots like this..... IMNHO

If Princeton got enough real bad press about this... (or some alumni got bothered by it) I'm sure PEAR would go away....

It's pretty goofy, but..., posted 10 Aug 2000 at 19:31 UTC by jrb » (Master)

are there many (any) programming languages based on natural languages which are not english?

There was a Forth-like programming language on freshmeat a few weeks ago called Var'aq. It is based on Klingon, and as a result should probably be viewed primarily as an exercise in linguistic creativity...

Digital watermarking and resampling, posted 10 Aug 2000 at 19:59 UTC by Raphael » (Master)

graydon asks:

does digital watermarking work in any reliable way when you resample a watermarked signal. if so, how?

The answer is: it depends... Digital watermarking, like steganography, can be applied to many types of content: still images, video, sound and even plain text. You did not specify which one(s) you are interested in, but the principles are similar. Resampling is not really a common practice for plain text (well, that can be discussed...) so I will focus on images and sounds.

Disclaimer: I'm not an expert in this domain. I'm just playing with some of these concepts from time to time, but that's it. So don't be surprised if my explanations are a bit vague or if I do not use the most appropriate terms when I describe something.

As explained above by darkewolf, some early attempts at watermarking images were a bit naive and consisted (for example) in replacing the color of some pixels in an image by a similar color that looked almost identical to the eye but had a different index in the color palette (e.g. for GIF). By extracting some bits from the color index of some pixels in the image, you got a stream of 1's and 0's that could contain the watermark, including some kind of standard tag and a CRC to avoid false positives. A similar and equally naive method of watermarking uncompressed sound files (e.g. WAV) consisted in changing the least significant bit of some samples so that you would not hear the difference. Of course, these methods do not survive any kind of resampling or even most lossless conversions to other file formats. I doubt that any of the current products is still using such a poor method for watermarking.

Most of the current methods of watermarking are based on small changes to the coefficients of a frequency decomposition of the signal (like a Fourier transform, or the DCT representation used for JPEG images). For images the transform is taken over the spatial domain, and for sound files over the temporal domain. By applying small changes to some coefficients, you can embed a watermark in an image or a sound file, even if they are compressed.

These methods can survive some kinds of resampling. For example, the low frequencies averaged over large blocks of the image or sound will be almost unaffected if someone takes every second pixel of the image or if they resample the sound at half of the original sampling rate. The watermark will still be there because it is spread all over the image or sound. On the other hand, if you hide your watermark in the medium frequencies and repeat it several times in smaller blocks of the image, then all copies would probably be destroyed by the previous resampling but would be preserved if someone crops the image (without rescaling it). The same applies to sound files if only a part of the sound is extracted: as long as this part contains at least one full block, you will be able to detect the watermark. I think that the current watermarking products are using a combination of both: modifications of the medium and low frequencies computed over small and large blocks, in order to survive most of the simple transformations that could be applied to the protected contents.
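
To make that concrete, here is a minimal sketch of the frequency-domain idea (Python/NumPy; 1-D signal, and non-blind in that the detector is given the original, which commercial schemes try hard to avoid; all parameters invented). It spreads a pseudo-random ±1 pattern over a band of low-frequency coefficients and detects it by correlation:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 4096
    original = np.cumsum(rng.normal(size=n))     # stand-in for smooth content

    m, delta = 64, 0.05                          # band width, embedding strength
    key = np.sign(rng.normal(size=m))            # pseudo-random +/-1 watermark

    spec = np.fft.rfft(original)
    spec[1:m + 1] *= 1 + delta * key             # nudge low-frequency magnitudes
    marked = np.fft.irfft(spec, n)

    def detect(suspect, reference):
        """Correlate the key with the magnitude difference in the low band."""
        a = np.abs(np.fft.rfft(suspect)[1:m + 1])
        b = np.abs(np.fft.rfft(reference)[1:m + 1])
        return float(np.mean(np.sign(a - b) * key))

    print(detect(marked, original))              # 1.0: watermark present
    print(detect(original, original))            # 0.0: no watermark
    print(detect(marked[::2], original[::2]))    # still well above 0 after 2:1 decimation

The naive LSB scheme dies instantly under the same decimation; spreading the mark over low frequencies is what buys the robustness.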

There is always a tradeoff between the robustness of the watermark and the quality of the results when compared to the original contents, but I think that some methods can be fairly resistant.

sustainable agriculture, posted 10 Aug 2000 at 23:17 UTC by Ricdude » (Journeyer)

are there sufficient species of edible perennials or permacultural plants that we can shift to less destructive agriculture? and if so, what is holding us back?

Monsanto, Inc., foremost provider of genetic mutant^H^H^H^H^H^H engineered food products. Also the company responsible for Agent Orange. Coincidence?

Re: q.v. (pedantic), posted 10 Aug 2000 at 23:19 UTC by rillian » (Master)

schoen, thanks for the correction, I think. I was trying to say "see also", which I thought was OK for q.v. I didn't follow your explanation of how "quod vide" differs from that.

But I'm all for proper usage and appreciate the correction.

Schrödinger's Cat

mettw, I was referring metaphorically to classical-quantum correspondence, measurement theory and what's generally referred to as "wave function collapse" in undergraduate physics courses. I've never heard a convincing explanation of how the quantum laws for probability distributions reduce to deterministic classical physics as ℏ goes to zero. Nor have I heard a convincing argument that the question doesn't mean anything.

Everything you say is true, but I don't think the Copenhagen Interpretation ("we're not going to worry about it") is the end of the road. There certainly has been a lot of philosophical nonsense around QM, but that doesn't mean we can't invent pictures to guide us. Surely you've heard the scaffolding argument? We need those little stories to get to the point where we can write the equations, and to help us understand what they mean, the way an arch needs external support until all the stones are in place. This is the part of reasoning that Penrose is arguing about, and I don't think we understand it very well.

More on QM, posted 11 Aug 2000 at 00:37 UTC by mettw » (Observer)

I agree with you on scaffolding. This is what the psychologist Hans Eysenck called a weak theory. It's weak in the sense that it's completely unverifiable because it doesn't really make any falsifiable predictions; the use of a weak theory is that it guides research into areas it might not otherwise have gone.

But I don't think all of this talk about multiple universes and so on even qualifies as a weak theory. With a weak theory in psychology there is a hope that it will eventually be replaced by a strong theory, but I don't see that in the QM interpretations. Because of the interaction problem you can't make an observation between the end points without changing the experiment entirely. This means that the question of what happens between experiments is unanswerable, and is therefore not even a logical question.

What happens in the shift from QM -> classical is that the degree to which we can ignore the interaction between the object and our observation apparatus increases until, around the chemistry level, we can for all practical purposes pretend that there is no interaction at all.

This is all a question of what we can know. With QM you can't know what is happening between the end points, because you change the experiment if you try. In fact you can't even properly distinguish between an electron and a photon, because we are observing the interaction between the two, not each individually. So you shouldn't see wavefunction collapse as a real phenomenon, as there is inherently no way to verify a wavefunction collapse over a Lagrangian path integral over a quantum leap over strings over ... These just aren't scientific questions, so we are forced to resign ourselves to the fact that with quantum mechanics we only have a statistical theory, and our conceptualisations of what is happening are useful, but unscientific. Indeed, a question that cannot be answered is not even a logical question.

Sorry.., posted 11 Aug 2000 at 00:44 UTC by mettw » (Observer)

I missed your question about how a probabilistic theory goes to a deterministic theory. This is akin to thermodynamics, in that there is an extremely small chance that all of the oxygen in the room will suddenly shift to one side and I'll suffocate. But the chance of this happening is so small that I can say that it will never happen.

Likewise, the probability that an electron will make a macro-sized jump is so small that you can assume it will never happen. Therefore, as you move into the macro world the random behaviour of quantum particles becomes relatively smaller and smaller until you can ignore it completely. At the macro level the random behaviour of quantum particles is so small that it usually has no observable effect.
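
For a sense of just how small, a back-of-the-envelope version of the suffocation example (the molecule count is a rough assumption for a small room):

    from math import log10

    n_molecules = 1e25                 # rough O2 count in a small room (assumption)
    # Probability that every molecule independently sits in one chosen half:
    log10_p = n_molecules * log10(0.5)
    print(f"p = 10^{log10_p:.3g}")     # about 10^(-3e24): "never" is a fair summary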

Quod, posted 11 Aug 2000 at 01:05 UTC by schoen » (Master)

Quod means "which" (there's another quod which means "because"). The word for "also" would be quoque.

Since "quod" can't be "also", "q.v." can't be "see also".

Now, the unasked question here is why we can't make up our own Latin acronyms. I've done that, but we have a problem when an acronym has a long-established expansion (at least since medieval times, in the case of scholarly/critical Latin) -- much in the same sense that people don't usually name their own programs "ls" or "cat", even though you could and Unix would let you.

Quantum Foresight, posted 11 Aug 2000 at 07:09 UTC by ncm » (Master)

In connexion with graydon's question about the Princeton Engineering Anomalies work...

It appears that there's nothing in quantum mechanix^Hcs that contradicts perfect foresight -- as long as you can't do anything about it. Dreaming that an airliner will crash, then not getting on and thereby surviving, is fine. But if it were a specific problem you could call and warn somebody about, you won't foresee it, because then there would be no "loud" future event to propagate back.

This ties in nicely to fatalistic Greek traditions. Cassandra can't foresee anything about anybody who will pay her the slightest attention, but everybody who ignores her is an open book.

POSIX "capabilities" are a terrible misnomer!, posted 12 Aug 2000 at 20:58 UTC by ping » (Master)

POSIX "capabilities" are not capabilities at all. In fact they used to be called privileges, which is a much better word to describe them, since they represent in effect "temporary permission to break the rules" (i.e. special privileges). It is extremely unfortunate that the POSIX people chose to co-opt this established and meaningful term, dooming the security community to everlasting confusion about perhaps the most important concept in security of all!

True capabilities, as in EROS, form a pure object-oriented security model that is at once simpler, more efficient, and more provably secure (precisely because it is simpler!) than the messy ACL (access-control list) style security models you see in place more commonly.

An ACL presumes that the designer of the system can foretell in advance all of the distinctions of authority that users will ever want to make, and can encode them in a fixed set of permission bits. It is hence doomed to failure: when you want to give away just a little authority, you are forced to give it away in huge chunks. Take, for instance, Unix: you cannot give a program just the capability to manipulate a particular file; rather, you must run it with the full authority of your userid, with which it can do untold damage.

In a capability system, you wouldn't hand the program the authority to be you; you would simply hand it the capability to write just the file you wanted it to write. Since these capabilities are implemented as object behaviours, this amounts to nothing more than handing the program an object reference. Further, there is no need for some sort of security manager to intervene and check the requested operation against a list of permitted operations (i.e. what Unix does), which is slow and error-prone; the possession of the object reference itself denotes the ability to perform the operation. Auditing a piece of code for its security properties is then just a matter of examining variable scope to see where object references are in scope and where they are passed.
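
A toy illustration of that last point (Python, with made-up names; Python's file objects model the idea nicely, though a real capability language such as E would also remove the ambient authority of things like the os module):

    def untrusted_plugin(log):
        # This code holds a reference to one writable file and nothing else.
        # Its authority is exactly the references in scope, so auditing it
        # means reading this function, not the whole system's ACLs.
        log.write("plugin ran\n")

    # ACL style: pass a *name* and run the plugin with all of your ambient
    # authority, hoping it only opens the file you meant:
    #     untrusted_plugin_by_name("/home/you/plugin.log")

    # Capability style: open exactly one file and hand over the reference.
    with open("/tmp/plugin.log", "a") as log:
        untrusted_plugin(log)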

radio free "BM" microbroadcasting, posted 15 Aug 2000 at 20:13 UTC by stripling » (Apprentice)

> free radio FM microbroadcasting?

This happens every year at Burning Man. A little late for you to go this year, but check http://www.burningman.com/on_the_playa/news_broadcasts/index.html for links and for the listserv.

Uh, it's Radio Free Burning Man, call sign RFBM, so I guess it's BM broadcasting. :->

Phil

re that PEAR consciousness thing..., posted 16 Aug 2000 at 07:56 UTC by phr » (Journeyer)

That's the reason lots of experiments are considered valid only if done double-blind. If you're writing down the output of an RNG, you're likely to make a mistake every now and then, just by chance. If you're expecting the output to have more odd numbers than even, the mistakes may tend to favor odds.
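
A quick simulation of the point (Python; the slip rate is made up): a perfectly fair source plus a slightly hopeful recorder produces exactly the size of deviation under discussion.

    import random

    random.seed(42)
    trials, slip_rate = 1_000_000, 0.005      # made-up numbers

    fair = [random.randint(0, 1) for _ in range(trials)]      # the actual RNG
    # A hopeful recorder: one time in 200, writes down a 1 no matter what.
    recorded = [1 if random.random() < slip_rate else bit for bit in fair]

    print(sum(fair) / trials)       # about 0.5000
    print(sum(recorded) / trials)   # about 0.5025: a small, systematic "anomaly"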

Resampling text, posted 19 Aug 2000 at 15:13 UTC by cactus » (Master)

Just a little something you might find interesting: there are legends that the postal offices in the Eastern European countries used to re-word suspicious messages, specifically to make steganography impossible (or very hard).
