Source: site.view
Function name: references
Arguments:
Description: References page for sataire.com
Page type: html
Render function:  
Module: sataire

Page source:

<webl>     WubCall("sataire.incHeader", []);      </webl>
<webl>     WubCall("sataire.incTop", []);         </webl>



<!-- content here 
============================================================ -->
				<div id="header">
					<!-- img src="/images/sataire.jpg" width="332" height="453" alt="Sataire" id="person" / -->
					<h1><img src="/images/logo.gif" width="259" height="107" alt="SATAIRE" /></h1>
					<div id="wide">		
						
					</div>
					
<webl>     WubCall("sataire.incNav", []);         </webl>

				<div id="tray">


<p id="heading">References</p>				
				

<a href="http://www.kurzweilai.net/can-a-robot-pass-the-university-of-tokyo-math-entrance-exam">Can a robot pass the university of tokyo math entrance exam?</a>
		
<table>
<tr><th>Project</th><th>Description</th><th>References</th></tr>
<tr><td><a href="http://www.research.ibm.com/deepqa/" target="_blank">IBM Watson (Deep QA)</a>
</td>
<td>IBM's Watson system takes on human Jeopardy! champions. It is a self-contained QA system (no access to the internet) that combines results from numerous probabilistic "experts", each of which looks for answers to a question in a different way (a small illustrative sketch of this kind of score combination appears below the table). </td>
<td>[1] <a href="http://domino.watson.ibm.com/library/CyberDig.nsf/1e4115aea78b6e7c85256b360066f0d4/d12791eaa13bb952852575a1004a055c?OpenDocument&Highlight=0,rc24789" target="_blank">Towards the Open Advancement of Question Answering Systems</a>. 2009.</td>
</tr>

<tr><td><a href="http://www.projecthalo.com/" target="_blank">HALO</a>
</td>
<td>Project Halo is a research effort funded by Paul Allen's Vulcan Inc. Pilot/Phase I: three companies built systems capable of answering Advanced Placement-level questions in sub-areas of chemistry; however, the effort cost $10,000 per page. Phase II: develop tools to lower that cost. </td>
<td>Papers </td>
</tr>

<tr><td><a href="" target="_blank">CALO</a>
</td>
<td>Cognitive Assistant that Learns and Organizes: a DARPA-funded project. Evaluation was an aptitude-like test in an administrative domain that measured "learning in the wild." Questions were encoded in a special logic language so the computer could interpret them.</td>
<td>Papers </td>
</tr>


<tr><td><a href="" target="_blank">ACQUAINT</a>
</td>
<td>Advanced Question Answering for Intelligence (AQUAINT): a U.S. government research program on advanced question answering, sponsored by ARDA (later IARPA).</td>
<td>Papers </td>
</tr>


<tr><td><a href="" target="_blank">TREC</a>
</td>
<td>The Text REtrieval Conference, organized by NIST; its question answering track evaluated open-domain QA systems against large document collections.</td>
<td>Papers </td>
</tr>


<tr><td><a href="" target="_blank">NTCIR</a>
</td>
<td>A series of evaluation workshops run by NII in Japan for information-access research, including question answering and cross-lingual QA tasks.</td>
<td>Papers </td>
</tr>


<tr><td><a href="http://googletranslate.blogspot.com/2010/10/poetic-machine-translation.html" target="_blank">Google Poetry</a>
</td>
<td>Google adapted its machine translation system to generate poetic forms (rhyme, meter).</td>
<td><a target="_blank" href="http://blogs.wsj.com/digits/2010/11/02/google-translate-takes-on-poetry/">WSJ</a> </td>
</tr>



<tr><td><a href="http://www.numenta.com/" target="_blank">Numenta</a>
</td>
<td>A neural architecture modeled on the neocortex. Could be relevant from an analogy-handling point of view.</td>
<td>Papers </td>
</tr>

<tr><td><a href="" target="_blank">Analogy</a>
</td>
<td>Hofstadter, D. (2001). Analogy as the Core of Cognition. In D. Gentner, K. Holyoak, and B. Kokinov (eds.), The Analogical Mind: Perspectives from Cognitive Science. Cambridge, MA: MIT Press/Bradford Books, pp. 499–538.</td>
<td>Papers </td>
</tr>
</table>
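
<p>The DeepQA entry above mentions combining results from numerous probabilistic "experts". The sketch below only illustrates that general idea, merging per-expert scores into one confidence with a weighted average; the expert functions, weights, and data are hypothetical placeholders, not IBM's actual answer-merging method (see the paper cited in the last column of the table for that).</p>

<pre>
from collections import defaultdict

def combine_expert_scores(candidates, experts, weights):
    """Merge per-expert scores into one confidence per candidate answer."""
    totals = defaultdict(float)
    for answer in candidates:
        for name, expert in experts.items():
            # Each expert scores the candidate in its own way
            # (e.g. passage support, answer-type match); scores assumed in [0, 1].
            totals[answer] += weights[name] * expert(answer)
    weight_sum = sum(weights.values())  # normalize so combined scores stay in [0, 1]
    return {answer: score / weight_sum for answer, score in totals.items()}

# Toy usage: two hypothetical experts scoring candidate answers to one question.
experts = {
    "passage_support": lambda a: {"Toronto": 0.2, "Chicago": 0.7}.get(a, 0.0),
    "type_match":      lambda a: {"Toronto": 0.9, "Chicago": 0.9}.get(a, 0.0),
}
weights = {"passage_support": 2.0, "type_match": 1.0}
print(combine_expert_scores(["Toronto", "Chicago"], experts, weights))
</pre>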
				
<p id="heading"><a href="http://sataire.com/site/referencesQA">QA References</a></p>	
				</div>

<!-- END CONTENT
============================================================ -->

<webl>     WubCall("sataire.incFooter", []);      </webl>