How the internet is failing research, or is it the other way around?

Posted 8 Feb 2005 at 06:28 UTC by na

Last night I listened to a lecture from the dean of the College of Education at the University of Central Florida. Her lecture was about the grants the college has been winning, how they are being used, the restrictions that come with them, and how important research is.

She started talking about how research benefits K-12 education, as you might expect from a College of Education dean, but all I heard was the disconnect between grant research and information dissemination on the Internet. The Internet has certainly not failed at supporting secondary research for K-12 schools. It hasn't failed as a commerce medium or as a distance education tool.

The dean directed much of her lecture to Ph.D. students. In doing so, she described one of the unfortunate problems with grants, which also gave her an opportunity to brag about a new endowed chair by way of contrast. While describing how many millions of dollars the college has received for various projects and how much research that has allowed the university to execute, she encouraged Ph.D. students to head to the 4th floor of the new Teaching Academy at the university. The 4th floor of the UCF Teaching Academy is where grants for the College of Education are managed.

Dr. Robinson, the dean, cited politics, economics, competition, excessive spending restrictions, and other factors as reasons why grant research often stops abruptly. As a result, the grant floor in the Teaching Academy is full of quality, valid, and useful data sitting idle. I realize many grants involve confidentiality clauses; still, surely a worthwhile chunk of the half-baked research is public domain.

Last semester I worked on a semester-long research and implementation project. It involved following the Waterfall systems development lifecycle, followed by some evolutionary prototyping, testing, and polishing of a VB6 program under an overly optimistic schedule. We used a Yahoo! Group to organize our mailing list and file uploads, which was quite effective for record keeping. Version control, though, was an issue we just dealt with clumsily (a sketch of the minimum we were missing follows below).

The project was based on a variation of a research project at a different, more prestigious university. The original research came to a lackluster, incomplete conclusion, which inspired Dr. Richard Johnson at the University of Central Florida School of Business to continue it in more detail. Dr. Johnson was such a backstabbing, two-faced, unethical professor that I don't expect to see my name in the publication, but that's another tangent. Scores of hours were spent reinventing the wheel on much of the original project, because the original software and research had all been disposed of, with the exception of the final publication, which contained only qualitative content.

In a research utopia, all of the information would be freely shared after publication, so that others could cross-validate the results against other measurements and pursue whatever tangents interested them. I'm only slowly learning about the Ph.D. research club, but it seems that after researchers are done hoarding their results until publication, the research data and materials are supposed to be made available to society for the general welfare of research, especially in the case of university research.
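On the version-control point: a proper tool (CVS or Subversion were the obvious choices at the time) would have been the right answer, but even a trivial snapshot script would have beaten what we actually did. Here is a minimal sketch in Python; the directory names and log format are made up for illustration, not taken from the project:

```python
# A minimal sketch of the snapshotting we lacked: copy the project
# tree into a timestamped folder and append a one-line log message.
# The paths and log format here are hypothetical examples.
import shutil
import time
from pathlib import Path

def snapshot(project_dir: str, archive_dir: str, message: str) -> Path:
    """Copy project_dir into archive_dir under a timestamped name."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = Path(archive_dir) / stamp
    shutil.copytree(project_dir, dest)  # also creates archive_dir if needed
    with open(Path(archive_dir) / "log.txt", "a") as log:
        log.write(f"{stamp}\t{message}\n")
    return dest

if __name__ == "__main__":
    snapshot("vb6-project", "snapshots", "after prototype testing")
```

It is no substitute for real version control, but even this much would have preserved the intermediate states we threw away.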

I got the feeling that Dr. Robinson was genuinely remorseful to be heading a grant department full of data from incomplete grant projects that was not being used for the advancement of society. So I pose some questions, which may only have utopian answers. How can we, as instructional technology, information technology, computer science, and other related research, publication, and OSS professionals, minimize the amount of ``lost'' research? I know that during the research process researchers want to claim results as their own; therefore, posting progressive research updates to the Internet is unlikely. As I understand Internet history, one of its primary reasons for existence was to help solve this very information-sharing dilemma. Would it be necessary for university grant departments to join together in a kind of dead grant repository? Is such a goal just missing sponsors, is it a software need, or is it simply a natural fact of human reality that once a project is killed, no one spends any more time on it?
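To make the ``dead grant repository'' idea a little more concrete, here is a minimal sketch of what one record in such a repository might carry. The field names are my own invention; a real effort would need agreed-upon metadata standards, and legal review of each grant's confidentiality terms before anything was deposited:

```python
# A hypothetical record format for a "dead grant repository".
# All field names are invented for illustration; none of this
# reflects an existing standard.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DeadGrantRecord:
    title: str
    institution: str
    principal_investigator: str
    funding_agency: str
    reason_halted: str          # e.g. "funding expired", "PI departed"
    data_license: str           # must be cleared before deposit
    data_url: str               # where the raw data and materials live
    keywords: list = field(default_factory=list)

if __name__ == "__main__":
    record = DeadGrantRecord(
        title="Example halted K-12 study",
        institution="University of Central Florida",
        principal_investigator="(withheld)",
        funding_agency="(example)",
        reason_halted="funding expired",
        data_license="public domain",
        data_url="http://example.edu/dead-grants/123",
        keywords=["education", "K-12"],
    )
    print(json.dumps(asdict(record), indent=2))
```

Even something this simple, shared across grant departments, would let other researchers discover that idle data exists at all, which is the first obstacle.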


Version annotation and control, concurrent development, posted 8 Feb 2005 at 09:48 UTC by mirwin » (Master)

Wikimedia.org's GPL'ed MediaWiki software addresses many of the problems inherent in concurrent engineering. Perhaps, if you simply must keep your publicly funded research secret from the funding public and other unethical researchers, you can run a private installation of MediaWiki, and then, when the project is complete, unfunded, or temporarily dead, put the server, with its Internet-accessible database, outside the firewall onto the publicly accessible Internet for others to play with.
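To be concrete, "putting the database outside the firewall" could be as little as a periodically run script that dumps the wiki's database into a publicly readable web directory. A rough sketch in Python follows; the database name and paths are hypothetical, and it assumes MySQL credentials are already configured (e.g. in ~/.my.cnf):

```python
# A rough sketch of publishing a wiki's database dump. The database
# name, paths, and web directory are hypothetical; mysqldump is
# assumed to find its credentials in the usual config files.
import shutil
import subprocess
import time

def publish_wiki_dump(db_name: str, public_dir: str) -> str:
    """Dump db_name with mysqldump and copy the file to public_dir."""
    stamp = time.strftime("%Y%m%d")
    dump_path = f"/tmp/{db_name}-{stamp}.sql"
    with open(dump_path, "w") as out:
        subprocess.run(["mysqldump", db_name], stdout=out, check=True)
    return shutil.copy(dump_path, public_dir)

if __name__ == "__main__":
    print(publish_wiki_dump("wikidb", "/var/www/html/dumps"))
```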

Perhaps some of our security experts could coach you on how to timestamp, encrypt, and then pay an external agent to periodically certify your data dumps, to assure precedence when some independent researcher invents the next big thing by starting from your public results but refuses to share credit.
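A minimal sketch of the timestamping half of that idea: hash each data dump and record the digest with a timestamp. Publishing or escrowing only the hash reveals nothing about the data itself, and the certifying agent (hypothetical here) only ever needs the digest:

```python
# A minimal sketch of precedence-by-digest: compute a SHA-256 hash of
# a data dump and record it with a timestamp. A real scheme would
# submit this receipt to an independent timestamping/notary service,
# which is assumed rather than shown here.
import hashlib
import json
import time

def fingerprint(dump_path: str) -> dict:
    """Return a timestamped SHA-256 digest of a data dump file."""
    sha = hashlib.sha256()
    with open(dump_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha.update(chunk)
    return {
        "file": dump_path,
        "sha256": sha.hexdigest(),
        "recorded_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }

if __name__ == "__main__":
    receipt = fingerprint("research-dump.sql.gz")
    print(json.dumps(receipt, indent=2))
    # Mail or submit `receipt` to whatever external agent you trust.
```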

Good luck!

Personally, I perceive this as a serious problem that is holding back economic development and medical research worldwide, while we have major wars and totalitarian regimes dedicated to enslaving their own populations and attacking neighboring societies.

If President Bush were serious about our war on terrorism, he or his advisors would be seriously considering declassifying all U.S. publicly funded data and technologies, challenging the U.S. population and supporting allies to go all out, and demonstrating to totalitarian or criminal regimes, terrorists, and criminals worldwide that freedom-loving peoples and citizens are dangerous prey and enemies.

Unfortunately, this approach would require a lot of fat cats to get off their fat asses and actually earn their pay and self-awarded honors and congratulations, so it is definitely a hard sell.

Once again... good luck with your little piece of it. You might point out to academia that in certain quarters Ph.D.s are becoming as popular as lawyers, but that without the ever-present fear of big government, people can do their own research on the Internet in teams of various sizes. After all, the failures do not matter (in law they do); one simply keeps and exploits the successes to fund further failures in search of greater success.

In other words, they (ivory-towerphilic academia) had better research, learn, or get out of the way. The time is near when we (lesser beings) will no longer need their compilation and publication of the useful portions of human knowledge. We can publish it ourselves and use it freely, within responsible constraints imposed by the local community, or deal with the consequences.

MIT gets it. They are allegedly publishing all of their pedagogical materials (the OpenCourseWare project) for free Internet access, in an obvious attempt to remain relevant. I need to check back there; it has been a year or two since they announced this, and I have been reviewing some fundamentals in preparation for tackling some home-schooled graduate or post-bac studies. Perhaps your dean or academic advisor would benefit if you could find a useful link or two, relevant to your case, showing that even research information should be free, and provide it to them in an appropriately indirect way so they feel it is their own idea.

I'm not doing it, posted 8 Feb 2005 at 17:37 UTC by na » (Observer)

I was just told that I have been recommended for acceptance to the Ph.D. program I picked out, so I'm not trying to publish on this topic; I just thought it would be something interesting to discuss. I know my big paper will have to be something more manageable, not some save-the-world topic.

Well..., posted 10 Feb 2005 at 19:18 UTC by tk » (Observer)

...looks like there are no easy answers.

Or maybe there is one. 0MGZ V0TE B00SH W00T W00T W00T L0LZ!!!!!!!!!!111111

