awu is currently certified at Journeyer level.

Name: A. Wu
Member since: 2003-12-23 04:49:21
Last Login: N/A


Homepage: http://awu.livejournal.com

Notes:

Email: a.wu at acm.org

Grad. student in CS

Technical Interests:

Graphics, vision, data mining, machine learning, computational geometry, algorithmics, entertainment technology, XEmacs

Undergraduate projects: https://netfiles.uiuc.edu/awu/www/cluster/projects.html

Computer graphics work: http://graphics.cs.uiuc.edu/~awu/

Recent blog entries by awu

Syndication: RSS 2.0

I migrated some of the gfx links below from graphics.cs.uiuc.edu to a new host:

http://awu.textdriven.com/word/index.php?cat=3

New libre source for data mining on large data sets:

CloSpam -- closed frequent pattern mining, released under the University of Illinois/NCSA Open Source License.

Given a database of transactions, we would like to find frequent patterns in those transactions. For example, if we are running a large retailer, and maintain terabytes of customer transactions, we would like to find what items are bought together. In the data mining community, this is known as "frequent pattern mining".

There are exponentially many possible frequent patterns in a large database, so it becomes important to avoid enumerating and checking them all.
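To illustrate why pruning that exponential space matters, here is a minimal Apriori-style miner. This is a sketch of frequent itemset mining in general, not of CloSpam's closed-pattern algorithm; the example baskets are made up.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Mine frequent itemsets with Apriori-style pruning: a
    (k+1)-itemset can only be frequent if every one of its
    k-subsets is frequent, so we never enumerate the full
    exponential space of candidate patterns."""
    transactions = [frozenset(t) for t in transactions]

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t)

    # Frequent 1-itemsets seed the search.
    items = {i for t in transactions for i in t}
    current = {frozenset([i]) for i in items
               if support(frozenset([i])) >= min_support}
    result = {}
    k = 1
    while current:
        for s in current:
            result[s] = support(s)
        # Candidate (k+1)-itemsets: join frequent k-itemsets and
        # keep only those whose k-subsets are all frequent.
        candidates = set()
        for a in current:
            for b in current:
                u = a | b
                if len(u) == k + 1 and all(
                        frozenset(c) in current
                        for c in combinations(u, k)):
                    candidates.add(u)
        current = {c for c in candidates
                   if support(c) >= min_support}
        k += 1
    return result

baskets = [{"milk", "bread"}, {"milk", "bread", "beer"},
           {"bread", "beer"}, {"milk", "bread"}]
freq = frequent_itemsets(baskets, min_support=3)
# bread appears in 4 baskets, milk in 3, {milk, bread} in 3
```

Closed-pattern miners like CLOSET go further, reporting only itemsets with no superset of equal support, which shrinks the output without losing information.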

CloSpam is a fast implementation of CLOSET, an algorithm from a recent SIGMOD data mining paper by Jian Pei, Jiawei Han, and Runying Mao.

A proposal for trustworthy e-mail. I mentioned this idea when I (re?)thought of it, sending an e-mail to M-manda a short while ago (I guess I shouldn't really use M- here, since most people don't speak native (X)Emacs).

Sendmail creator EricA [1] suggests that "a small number of people are polluting a great medium", e-mail. He argues that spam makes economic sense, at least for the senders, and points out that current approaches put the burden on the shoulders of the recipients, rather than the senders.

Instead of arguing for direct permission-based mail, EricA believes that the only viable long-term solution is to "make spammers pay more than we do".

I suggest an alternative, which may or may not be feasible, or new, for that matter. A trustworthy e-mail can be sent between Bob and Carol iff there exists a connected path, on some social network, of length at most k for some small constant k.

That is, instead of having to give permission to every possible sender, you only have to give permission to a smaller 1-ring of people you trust. Trust is then transitive, to some degree, even though it may decay exponentially with path length.

All other e-mails may be received, but perhaps get filed under "shady".
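The rule above could be checked with a simple breadth-first search over a trust graph. This is a toy sketch of my proposal, not an implementation: the graph shape, the choice of k, and the decay factor are all assumptions.

```python
from collections import deque

def classify_mail(graph, recipient, sender, k=3, decay=0.5):
    """Accept mail iff the sender is within k hops of the recipient
    in the trust graph; trust decays exponentially with path length.
    Everything else gets filed under "shady"."""
    # BFS outward from the recipient along trust edges.
    dist = {recipient: 0}
    queue = deque([recipient])
    while queue:
        node = queue.popleft()
        if dist[node] == k:
            continue  # do not expand beyond the trust horizon
        for neighbor in graph.get(node, ()):
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    if sender in dist:
        return "inbox", decay ** dist[sender]
    return "shady", 0.0

# Bob trusts Carol directly; Carol trusts Dave; Eve is unknown.
trust = {"bob": ["carol"], "carol": ["dave"]}
classify_mail(trust, "bob", "dave")  # ("inbox", 0.25)
classify_mail(trust, "bob", "eve")   # ("shady", 0.0)
```

Per-edge trust levels (Paul as a sender but not as a gateway) would amount to weighting or cutting individual edges rather than using a uniform decay.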

I imagine that you can also specify different levels of trust. You could say, I trust Paul to send me e-mails, but not as a gateway from other people. We can also define greater aggregate entities worthy of trust. Communities, organizations, and countries, for example.

Implementation-wise, this requires an integration of social networking software and e-mail systems (or at least interop), which is no small feat for Internet-scale topologies, but maybe, just maybe, it could even work.

Back to finishing up conference paper to be submitted to SIGxxx on social networks.

--
1: Wikipedia quote: "There is some sort of perverse pleasure in knowing that it's basically impossible to send a piece of hate mail through the Internet without its being touched by a gay program. That's kind of funny."

24 Dec 2003 (updated 24 Dec 2003 at 04:46 UTC)

This post by raph on multidimensional interpolation points to a pretty good survey on scattered data interpolation, which is a problem I have tried to tackle in the past (and present, in a way). It would have been nice to see this survey back then :).

It's been a while since I thought about such issues, but there are interesting connections here to various problems in computational geometry and graphics.

raph also points out that the survey does not give clear guidance, one way or another, which is quite understandable given the range of problems people solve with scattered data interpolation.
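For the curious, one of the simplest scattered data interpolation schemes is Shepard's inverse distance weighting, sketched here in 1D. This is my own illustration, not something taken from the survey, and the sample points are invented.

```python
def shepard(samples, x, p=2):
    """Shepard's inverse-distance-weighted interpolation in 1D:
    the value at x is a weighted average of the sample values,
    with weights 1/d^p so that nearby samples dominate."""
    num, den = 0.0, 0.0
    for xi, yi in samples:
        d = abs(x - xi)
        if d == 0.0:
            return yi  # exact interpolation at the data sites
        w = d ** -p
        num += w * yi
        den += w
    return num / den

pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 4.0)]
shepard(pts, 1.0)   # 1.0, since it hits a sample exactly
shepard(pts, 0.5)   # a blend of all three samples
```

Shepard's method is easy but has well-known flat spots at the data sites; the survey-grade alternatives (radial basis functions, natural neighbor interpolation) trade simplicity for smoothness.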

--

I've been looking through some of raph's recent posts from the perspective of data mining. Data mining is a relatively new field that tries to extract knowledge from large amounts of data. It has roots in database technology, statistics, machine learning, visualization, and algorithmics.

I'm curious how work in data mining could be applied, if it hasn't been already, to work on trust metrics. In particular, what little I've seen of such work often seems to be concerned with directed attacks, trying to prove that a system can robustly survive in the face of would-be danger, or at least gracefully degrade.

I'm not sure whether work on outlier detection happens mainly in data mining or in networking, but there seem to be many difficult problems to solve. PayPal, for example, I have heard, tries to detect attacks that involve some notion of a clique or chain. Similarly, insurance companies want to detect whether a group of cars forms a path of destruction, literally: some nefarious gang of thieves arranging a chain of crashes to bilk the insurance companies.
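As a toy sketch of that crash-chain idea (my own illustration; whatever the real systems do is surely more sophisticated), one could flag connected components of a crash graph that form long chains:

```python
def suspicious_chains(crashes, min_length=3):
    """Given crash incidents as (car_a, car_b) pairs, flag connected
    groups of cars whose crash graph forms a long unbranched chain,
    a crude stand-in for the pattern an insurer might look for."""
    # Build an undirected adjacency map.
    adj = {}
    for a, b in crashes:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, flagged = set(), []
    for start in adj:
        if start in seen:
            continue
        # Collect the connected component with a simple DFS.
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        # A chain has exactly two endpoints (degree 1), no branching.
        degrees = [len(adj[n]) for n in comp]
        if (len(comp) >= min_length and degrees.count(1) == 2
                and all(d <= 2 for d in degrees)):
            flagged.append(comp)
    return flagged

crashes = [("car1", "car2"), ("car2", "car3"), ("car3", "car4"),
           ("car5", "car6")]
suspicious_chains(crashes)  # flags the car1-car2-car3-car4 chain
```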

I have my own intuitions about how problems might be solved, but nothing concrete yet.

I'm not quite sure what to think about "trust". As someone who is not familiar with the literature, when I see this word, I think of sensitivity and perturbation analysis from scientific computing (numerical analysis). Given some system, how much effect can small perturbations in the input have on the resultant output? I think of robust data streams and adversary arguments from theoretical computer science.
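As a toy illustration of that numerical notion of sensitivity (my own example, nothing to do with trust metrics per se), consider a nearly singular 2x2 linear system, where a tiny perturbation of the input swings the output wildly:

```python
def solve2(a, b, c, d, e, f):
    """Solve the 2x2 system [[a, b], [c, d]] @ [x, y] = [e, f]
    by Cramer's rule."""
    det = a * d - b * c
    return (e * d - b * f) / det, (a * f - e * c) / det

# An ill-conditioned system: the two rows are nearly parallel, so
# small perturbations in the right-hand side are hugely amplified
# in the solution. In that sense the answer is not "trustworthy".
x1, y1 = solve2(1.0, 1.0, 1.0, 1.0001, 2.0, 2.0001)  # about (1, 1)
x2, y2 = solve2(1.0, 1.0, 1.0, 1.0001, 2.0, 2.0002)  # about (0, 2)
```

A 0.005% change in one input component moved the solution by order 1; condition numbers quantify exactly this amplification factor.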

In particular, my first thought would be to employ hierarchies as a natural way of understanding how humans trust. Human trust is not so infallible a thing, but perhaps it is a start. Then again, I reiterate my lack of knowledge of the field. I merely give my first impressions.

My initial reaction would be to be wary of trust metric systems that are completely automated, but this is probably too vague a claim. In practice, there is some human intervention.

I do find recent work on finding ways to differentiate humans from computers to be an interesting slant on things. Even funnier, perhaps, is that again people just bypass such mechanisms, hiring pasty-faced teens to read scribbly numbers or, in general, recognize hidden patterns, something computers can't quite do well yet.

The joke that is often made here is that computers are not adapting to us; we are adapting to them. We modulate our credit behavior so that some computer algorithm thinks well of us, and here we are putting our teens to work, having them live up to their full human potential by clearing the path for spam meisters everywhere.


 

awu certified others as follows:

  • awu certified awu as Journeyer
  • awu certified raph as Master
  • awu certified ade as Journeyer
  • awu certified judge as Journeyer
  • awu certified MichaelCrawford as Master
  • awu certified brouhaha as Journeyer
  • awu certified jwz as Master
  • awu certified alan as Master
  • awu certified rms as Master
  • awu certified miguel as Master
  • awu certified whytheluckystiff as Journeyer

Others have certified awu as follows:

  • judge certified awu as Journeyer
  • awu certified awu as Journeyer
  • ade certified awu as Journeyer
  • brouhaha certified awu as Journeyer
  • whytheluckystiff certified awu as Journeyer


