Tuesday, July 15, 2008

Quite an Intelligent Fellow

I've been tinkering with OpenCyc yet again, along with putting together the framework for a chatbot system. My idea is as follows:

Get a word list that allows me to reference the parts of speech for any English word.

Create a system which parses words, sentences, paragraphs, pages, chapters, and books (conceptual; just a large number of related chapters).

Record the valid structures for each construct (sentences, paragraphs, etc.). Each type of sentence will be correlated to a particular Cyc microtheory, which will allow me to assign a metadata tag to any given sentence that is known.

A microtheory is a concept in the Cyc ontology that encapsulates particular ideas which may be factually or semantically different from other concepts. Think of inheritance and polymorphism, but used to create a hierarchical view of the world, built from hundreds of thousands of basic common-sense concepts and millions of assertions about those concepts. It allows an AI system to use the proper granularity for arbitrary contexts by drawing on inferences, hypotheses, and static data.

The problem with most chatbots is a lack of real intelligent flexibility. You can create a billion template/response structures and your bot will imitate intelligent conversation, but what you're really doing is running an if/then routine over and over again. It takes only three degrees of separation between the original input and the 'natural' train of thought before you hit the limitations of most chatbots built on this structure.

However, training a bot over time on actual conversation has some serious drawbacks as well. You end up with rules being created on the fly that may be entirely wrong, with no way to prevent it unless you edit the data manually.

My solution is to semantically tag valid English sentence structures and assign a particular microtheory to each tag, or class of tags. By using those tags in Cyc microtheories, telling the system what questions it has to answer about particular sentences in order to 'understand' them, I can create an input parser that takes any given English sentence and stores it as a semantically valid data construct.

Once stored, I will again turn to Cyc to classify sentence types, along with potential 'proper' responses for particular sentence classes.
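To make the tagging step concrete, here's a minimal sketch in Python. The lexicon and microtheory names below are toy placeholders I made up for illustration, not the real 210,000-word database or actual Cyc microtheories: the idea is just to look up each word's part of speech, build a structural signature for the sentence, and map known structures to tags.

```python
# Toy parts-of-speech lexicon. The real version would be loaded from the
# full word list rather than hardcoded.
POS_LEXICON = {
    "the": "det", "cat": "noun", "dog": "noun", "sat": "verb",
    "chased": "verb", "on": "prep", "mat": "noun",
}

# Known sentence structures mapped to microtheory tags. Both the structures
# and the tag names here are invented placeholders.
STRUCTURE_TO_MICROTHEORY = {
    ("det", "noun", "verb", "prep", "det", "noun"): "SpatialRelationMt",
    ("det", "noun", "verb", "det", "noun"): "AgentActionMt",
}

def pos_signature(sentence):
    """Return the tuple of POS tags for each word, or None on an unknown word."""
    tags = []
    for word in sentence.lower().rstrip(".!?").split():
        tag = POS_LEXICON.get(word)
        if tag is None:
            return None  # unknown word: this is where predictive spellchecking would kick in
        tags.append(tag)
    return tuple(tags)

def tag_sentence(sentence):
    """Map a sentence to a microtheory tag via its POS structure."""
    sig = pos_signature(sentence)
    if sig is None:
        return "UnknownWordMt"
    return STRUCTURE_TO_MICROTHEORY.get(sig, "UnclassifiedStructureMt")

print(tag_sentence("The cat sat on the mat."))   # SpatialRelationMt
print(tag_sentence("The dog chased the cat."))   # AgentActionMt
```

Obviously the real system hands the tagged construct off to Cyc for inference instead of stopping at a dictionary lookup, but the parse → signature → tag flow is the same.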
Statements, queries, imperatives, and so on will each have responses appropriate to their semantic data. I will program responses and link the response categories to each sentence class.

So I have a concept for a chatbot system which can take arbitrary (syntactically valid) English sentences, understand them by linking them to an ontological database and creating new concepts as necessary, and return a response based on the actual content of the sentence.

I've gotten a few Cyc microtheories drawn up, and I have a 210,000-word parts-of-speech database. What I need now are ideas for large quantities of text which I can parse to get as broad a range as possible of sentence, paragraph, page, chapter, and book structures. Does anyone know where I can find really large corpora?

A potential offshoot of this is classification of corpora, using each class as input to a neural net in order to train patterns specific to styles of writing and genre... so you could create a microtheory that described a story, and have the chatbot output a novel. Given a large enough corpus, you could train on a particular author's style of writing. I would of course market this software, make millions of dollars, and take over the world.

Anyway, what I'm looking for is ideas as to where to look for parsable data. I'd need structured content, like news articles, books, and so on. The only hardcoding I'm going to do is for things like predictive spellchecking for unknown words, and dealing with broad classes of inputs and responses. I'm hoping that such a system would be able to handle specific inputs and outputs dynamically, and easily pass the Turing test.

I'm also considering IRC logs, chatroom logs, and other "conversation" corpora, but those present problems such as slang, deliberate misspellings, horrible grammar, and extreme ambiguity. I think I should leap one hurdle at a time... so the first is a consistent, pre-edited, dry corpus.
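The response-linking step can be sketched the same way. This is a deliberately naive stand-in: the surface-feature rules and canned handlers below are assumptions for illustration, where the real version would classify via the Cyc microtheory attached to the sentence.

```python
# Naive sketch of linking sentence classes to response categories.
# The classifier uses crude surface features (question mark, leading verb);
# the real system would use the semantic tag assigned by Cyc instead.

IMPERATIVE_VERBS = {"tell", "show", "give", "list", "explain"}  # toy list

def classify_sentence(sentence):
    """Crudely sort a sentence into query, imperative, or statement."""
    stripped = sentence.strip()
    first_word = stripped.split()[0].lower()
    if stripped.endswith("?"):
        return "query"
    if first_word in IMPERATIVE_VERBS:
        return "imperative"
    return "statement"

# Each sentence class links to a response category; the handlers are
# placeholders for responses driven by the sentence's actual content.
RESPONSE_HANDLERS = {
    "query": lambda s: "Let me look that up: " + s,
    "imperative": lambda s: "Attempting to comply with: " + s,
    "statement": lambda s: "Noted. I've stored: " + s,
}

def respond(sentence):
    return RESPONSE_HANDLERS[classify_sentence(sentence)](sentence)

print(respond("What is tea?"))          # query branch
print(respond("Tell me about Assam."))  # imperative branch
print(respond("Tea grows in Assam."))   # statement branch
```

The point of the structure is that adding a new sentence class means adding one classifier rule and one handler, rather than another billion templates.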

Stealing leaves

By Biswajyoti Das
GUWAHATI, India (Reuters) - Thieves are breaking into tea gardens in India's northeast and plucking leaves, damaging tea bushes and hurting the industry, planters said.
The thieves are believed to be villagers in the tea-growing regions of Assam, famed for its strong malty brew, some of whom struggle to produce saleable tea in their small backyard tea gardens created as part of an employment scheme a decade ago.
"These thieves are now so desperate, they come with bows and arrows, and homemade firearms," said Rupesh Gowala, who leads a tea workers association.
"They clash with our workers whenever they are stopped from stealing. Two of our workers were also killed by them recently."
In Assam's Tinsukia district alone, police say around 500 tea garden burglaries have been reported this year. About 50 were reported in 2007.
A tea worker is trained to only pluck the top two leaves and a bud as the best way of ensuring a steady supply of fresh leaves. The thieves are not so restrained.
They grab leaves haphazardly, leaving swathes of tea bushes out of action for months on end, said Raj Barooah, a leading planter. The stolen leaves reach tea factories damaged and stale, garden owners say.
The increase in low-quality leaves on the market has damaged the

Friday, April 25, 2008

Veddy Intelesting

This has been such an eye-opening process. I am already into the Internal Medicine blogs of Case Western and UT Medical.

There is real potential here for sharing information throughout the world, across different languages.

Here are the addresses for those of you who are interested:

www.clicalcases.blogspot.com (for Case Western University School of Medicine)
and http://digutmb.blogspot.com