KO Final Project
For Greenberg’s class, I’ll be examining the production of assertoric knowledge in three fields: academia, economics, and law (or possibly medicine). I’ll examine each to see how humans make and justify statements, and how others then evaluate and react to those statements. From this work, I’ll aim to arrive at a Minimal Common Use Case (MCUC) suitable for development (i.e., following UML standards). The MCUC may be developed in time to use meaningfully in the NLP project.
NLP Final Project
For Haas’ class, I’d like to complete a research proposal to develop a Requirements Specification for a data model that will natively handle assertoric statements in a user-centric way.  The basic research question is, “How can we represent assertoric knowledge in a structured, reliable, machine-parseable way?”
The Need
Assertions are a large part of human language, but they are difficult to describe formally. An assertion is a statement that commits the speaker to the truth of some content. And beyond the complexity of assertions themselves, there are myriad ways humans respond to assertions they encounter:
  • Doubt the assertion
  • Hold it certain
  • Accept it provisionally
  • Distrust the asserter
  • Evaluate how the assertion might be tested
  • Inquire what else the asserter has asserted
  • Inquire what others have asserted about the same topic
  • and so on
These reactions themselves are often expressed as assertions.
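The reactions above can be sketched as a minimal data model. This is an illustrative Python sketch only; the class and field names (`Assertion`, `Reaction`, `react`) are hypothetical and not part of any existing framework.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Reaction(Enum):
    """Possible stances an agent may take toward an encountered assertion."""
    DOUBT = auto()
    HOLD_CERTAIN = auto()
    ACCEPT_PROVISIONALLY = auto()
    DISTRUST_ASSERTER = auto()

@dataclass
class Assertion:
    asserter: str   # who made the claim
    content: str    # the propositional content of the claim
    # Reactions are recorded as (agent, Reaction) pairs; note that a
    # reaction is itself an assertion-like act by the reacting agent.
    reactions: list = field(default_factory=list)

    def react(self, agent: str, reaction: Reaction) -> None:
        """Record one agent's reaction to this assertion."""
        self.reactions.append((agent, reaction))

claim = Assertion("analyst", "GDP will grow 1.2% this quarter")
claim.react("reader", Reaction.ACCEPT_PROVISIONALLY)
```

Even this toy version makes the key point concrete: the reaction is first-class data attached to the assertion, not a property of the assertion's content.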
For instance, let’s say a stock analyst predicts that GDP will grow 1.2% this quarter (an easy case, since verification is well-defined). We’d plausibly want to know what the consensus estimate was and what the asserter’s track record with this type of estimate is. We’d also want to make sure everyone agrees on what’s being measured, the level of statistical or human error involved, and so on. Most of these considerations are ignored for simplicity’s sake, but occasionally an analyst will insinuate, for instance, that jobs numbers were inflated for political gain.
A secondary need related to assertion parsing is a categorization of the uncertainties and validation methods attendant to an assertion’s components. For instance, the statements “It’s sunny outside today” and “It was sunny outside yesterday” have insidiously different attendant uncertainties and methods of validation.
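The sunny-today/sunny-yesterday contrast can be made concrete with a small sketch. All names here (`ValidationProfile`, its fields, and the example values) are hypothetical placeholders for the taxonomy the proposal calls for, not an existing scheme.

```python
from dataclasses import dataclass

@dataclass
class ValidationProfile:
    """Hypothetical categorization of how an assertion can be checked
    and where its dominant uncertainty lies."""
    method: str       # e.g. "direct observation", "testimony", "recorded data"
    uncertainty: str  # dominant source of uncertainty

# The same predicate ("sunny") carries different profiles depending on tense:
sunny_today = ValidationProfile(
    method="direct observation",
    uncertainty="vagueness of the predicate 'sunny'")
sunny_yesterday = ValidationProfile(
    method="memory or recorded weather data",
    uncertainty="reliability of memory/records, plus predicate vagueness")
```

The point of the sketch: two surface-similar statements end up with different validation profiles, which is exactly the distinction a taxonomy of uncertainties would need to capture.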
Current Systems
Current systems may support this type of functionality in a roundabout way. But as the verification criteria for assertions get vaguer (e.g., “George W. Bush’s childhood in Texas had a large impact upon his presidency…”), the native support for this linguistic act diminishes.
A functioning system would allow for enhanced machine processing of natural language by enabling a parsing of text that aligns more closely with the way language is actually used. It will, of course, offer manual markup capabilities well before that point. In this way, it can be viewed as an extension of markup techniques like part-of-speech tagging: initially, tagging will be manual, but the existence of the framework will enable machine learning and, eventually, automatic processing of assertions.
I’ve developed a prototype framework (attached) for assertoric content. This framework, combined with a taxonomy of uncertainties and methods of validation, will be central to an eventual model. Additionally, the model will need an inherently perspectival architecture; that is, the beliefs formed upon encountering an assertion will need to be relative to a specific agent. This will replicate the perspectival nature of human experience, allow the modeling of the assertions and beliefs described above, and allow machines to operate as assertoric agents.
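The perspectival requirement can be sketched as a belief store indexed by agent, so the same assertion may be accepted by one agent and doubted by another. This is a minimal illustration; `BeliefStore` and its methods are hypothetical names, not part of the attached prototype framework.

```python
from collections import defaultdict

class BeliefStore:
    """Perspectival belief storage: every stance is indexed by the agent
    who holds it, so there is no single 'global' truth value for a claim."""

    def __init__(self):
        # agent -> {assertion text: stance}
        self._beliefs = defaultdict(dict)

    def record(self, agent: str, assertion: str, stance: str) -> None:
        """Record the stance a given agent takes toward an assertion."""
        self._beliefs[agent][assertion] = stance

    def stance_of(self, agent: str, assertion: str) -> str:
        """Look up an agent's stance; agents with no record have no stance."""
        return self._beliefs[agent].get(assertion, "no stance")

store = BeliefStore()
store.record("alice", "GDP will grow 1.2%", "accepted provisionally")
store.record("bob", "GDP will grow 1.2%", "doubted")
```

The design choice worth noting is that truth assignments live on (agent, assertion) pairs rather than on assertions alone, which is what makes the architecture perspectival.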
Broad Impact
The ultimate realization of this technology, combined with advances in AI, would allow systems to believe, disbelieve, and conditionally accept assertions made by others, thanks to its support for perspectival representations of facts. Breazeal and others have discussed the benefits of social-emotional intelligence in robotic systems; endowing such systems with assertoric agency is another highly important step in the anthropomorphization of computational systems.
