<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xml:base="http://www.softalkapple.com"  xmlns:dc="http://purl.org/dc/elements/1.1/">
<channel>
 <title>Jim Salmons&#039;s blog</title>
 <link>http://www.softalkapple.com/blogs/jim-salmons</link>
 <description></description>
 <language>en</language>
<item>
 <title>Help Send Jim to Museums and the Web 2014 Conference</title>
 <link>http://www.softalkapple.com/blogs/help-send-jim-museums-and-web-2014-conference</link>
 <description>&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot; property=&quot;content:encoded&quot;&gt;&lt;div class=&quot;image-right&quot;&gt;&lt;a target=&quot;_top&quot; style=&quot;border: 0 none;&quot; href=&quot;http://www.gofundme.com/sendsoftalk2mw2014?utm_medium=wdgt&quot; title=&quot;Visit this page now.&quot;&gt;&lt;img style=&quot;border: 0 none;&quot; src=&quot;http://funds.gofundme.com/css/3.0_donate/green/widget.png&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;p&gt;We&#039;re using the amazing &lt;strong&gt;GoFundMe.com&lt;/strong&gt; platform to solicit donations to support sending STAP Research Director Jim Salmons (that&#039;d be me) to the &lt;a href=&quot;http://mw2014.museumsandtheweb.com/&quot;&gt;Museums and the Web 2014&lt;/a&gt; conference (MW2014), next month in Baltimore. &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Museums and the Web&lt;/strong&gt; is the premier conference where hundreds of museum and archive professionals from around the world gather to exchange ideas, train newcomers to the field (that&#039;d be us), and network to establish collaborations that will empower their research and visitor agendas throughout the year.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;This conference is the BEST place to establish valuable collaborative relationships to support The Softalk Apple Project and the FactMiners social-game platform.&lt;/strong&gt;&lt;/p&gt;
&lt;!--break--&gt;&lt;p&gt;
My projected budget is based on a full week in Baltimore, travel, meals, and of course, the fee for the conference and an important workshop introduction to OpenData in museum applications. Here&#039;s how the $3,520 budget breaks down:&lt;/p&gt;
&lt;ul&gt;&lt;li&gt;MW2014 Full Conference registration: $700
  &lt;/li&gt;
&lt;li&gt;Intro to Museum OpenData workshop: $175
  &lt;/li&gt;
&lt;li&gt;Airfare Iowa-B&#039;more plus bag check: $343
  &lt;/li&gt;
&lt;li&gt;Hotel w/ taxes: $1,393
  &lt;/li&gt;
&lt;li&gt;Meals and incidentals: $455
  &lt;/li&gt;
&lt;li&gt;Softalk/FactMiners Meetup hosting: $200
  &lt;/li&gt;
&lt;li&gt;GoFundMe fee (approx): $155
  &lt;/li&gt;
&lt;li&gt;Payment processor fee (approx): $100
  &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;TOTAL: $3,520&lt;/strong&gt;
  &lt;/li&gt;
&lt;/ul&gt;&lt;p&gt;As a donor/backer of this campaign, you will be helping our grassroots project achieve its goal of honoring the unique impact that Softalk magazine had on the lives of its creators and readers. Your support will also help us &#039;pay it forward&#039; by spawning the FactMiners social-gaming community as a game-playing crowdsourcing resource available to all museums and archives as we race into the 21st century.&lt;/p&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Thu, 06 Mar 2014 00:24:46 +0000</pubDate>
 <dc:creator>Jim Salmons</dc:creator>
 <guid isPermaLink="false">105 at http://www.softalkapple.com</guid>
 <comments>http://www.softalkapple.com/blogs/help-send-jim-museums-and-web-2014-conference#comments</comments>
</item>
<item>
 <title>Softalk Magazine FactMiners Fact Cloud to be CIDOC-CRM Compliant</title>
 <link>http://www.softalkapple.com/blogs/softalk-magazine-factminers-fact-cloud-be-cidoc-crm-compliant</link>
 <description>&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot; property=&quot;content:encoded&quot;&gt;&lt;div class=&quot;image-right&quot;&gt;&lt;img src=&quot;/sites/default/files/images/CRM_header_L.gif&quot; width=&quot;178&quot; height=&quot;74&quot; alt=&quot;CRM_header_L.gif&quot; /&gt;&lt;/div&gt;
&lt;p&gt;The &lt;strong&gt;Softalk Apple Project&lt;/strong&gt; is pleased to announce adoption of the &lt;a href=&quot;http://www.cidoc-crm.org/comprehensive_intro.html&quot;&gt;Conceptual Reference Model (CRM)&lt;/a&gt; of the &lt;a href=&quot;http://network.icom.museum/cidoc/&quot;&gt;International Committee for Documentation (CIDOC)&lt;/a&gt; of the &lt;a href=&quot;http://icom.museum/&quot;&gt;International Council of Museums (ICOM)&lt;/a&gt;. The CIDOC-CRM is an &lt;em&gt;&#039;ontology&#039;&lt;/em&gt; for cultural heritage information; in other words, it describes in a &lt;strong&gt;formal language&lt;/strong&gt; the &lt;strong&gt;concepts&lt;/strong&gt; and &lt;strong&gt;relations&lt;/strong&gt; relevant to the &lt;strong&gt;documentation of cultural heritage&lt;/strong&gt;. This ISO standard will be used as the reference model for the development of the &lt;strong&gt;FactMiners Fact Cloud&lt;/strong&gt; metamodel that will logically organize the &lt;em&gt;&#039;facts&#039;&lt;/em&gt; to be &lt;em&gt;&#039;mined&#039;&lt;/em&gt; out of the digital archive of &lt;strong&gt;Softalk magazine&lt;/strong&gt;, respecting both the elements of its &lt;em&gt;editorial content&lt;/em&gt; and the &lt;em&gt;complex document structure&lt;/em&gt; of a magazine.&lt;/p&gt;
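To make "concepts and relations described in a formal language" concrete, here is a minimal sketch of CRM-style facts as subject-predicate-object triples. The E/P codes (E21 Person, E65 Creation, E73 Information Object, P14 carried out by, P94 has created) are real CIDOC-CRM identifiers; the Softalk-specific subject names and the tiny query helper are invented purely for illustration and are not the FactMiners implementation:

```python
# Illustrative sketch only (not the FactMiners implementation):
# CRM-style facts represented as (subject, predicate, object) triples.
# The E/P codes are real CIDOC-CRM identifiers; the subject names
# ("softalk_v1n1", "creation_1", "editor_1") are invented examples.
triples = [
    ("softalk_v1n1", "rdf:type",               "crm:E73_Information_Object"),
    ("creation_1",   "rdf:type",               "crm:E65_Creation"),
    ("creation_1",   "crm:P94_has_created",    "softalk_v1n1"),
    ("creation_1",   "crm:P14_carried_out_by", "editor_1"),
    ("editor_1",     "rdf:type",               "crm:E21_Person"),
]

def objects_of(subject, predicate):
    """Return every object o such that (subject, predicate, o) is a fact."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Who carried out the creation of this issue?
print(objects_of("creation_1", "crm:P14_carried_out_by"))  # ['editor_1']
```

Even this toy version shows the payoff: once facts share a formal vocabulary, generic queries work across any collection that uses the same reference model.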
&lt;p&gt;It is truly exciting to be able to &#039;stand on the shoulders of giants&#039; with respect to formalizing the candidate elements and overall logical organization of the facts that we will be capturing in the FactMiners Fact Cloud describing the content of Softalk magazine. This reference standard will dramatically accelerate our design and implementation of the FactMiners Fact Cloud Wizard component within the core FactMiners Open Source development platform.&lt;/p&gt;
&lt;p&gt;A relatively small group of dedicated data scientists and museum informatics professionals has spent nearly twenty years working out an essential set of elements and their logical organization. This means we can get right to work developing our FactMiners Fact Cloud companion to the Softalk archive as a domain-specific extension of the CIDOC-CRM reference model. By doing this, we ensure the Semantic Web and OpenData accessibility that is essential to our project mission.&lt;/p&gt;
&lt;p&gt;Our growing collaboration with the &lt;a href=&quot;http://www.Structr.org&quot;&gt;www.Structr.org&lt;/a&gt; team will benefit greatly from this reference model as we work together to envision and build the FactMiners platform on the &lt;a href=&quot;http://www.neo4j.org&quot;&gt;Neo4j&lt;/a&gt;-powered &lt;strong&gt;Structr CMS/web-services platform&lt;/strong&gt;. Our first efforts will focus on the FactMiners Fact Cloud Wizard described in &lt;a href=&quot;http://gist.neo4j.org/?7817558&quot;&gt;Part 2 of this Neo4j GraphGist&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;More information will be available soon on the &lt;a href=&quot;http://www.FactMiners.org&quot;&gt;www.FactMiners.org&lt;/a&gt; Open Source developers community website.&lt;/p&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Sun, 23 Feb 2014 01:32:47 +0000</pubDate>
 <dc:creator>Jim Salmons</dc:creator>
 <guid isPermaLink="false">101 at http://www.softalkapple.com</guid>
 <comments>http://www.softalkapple.com/blogs/softalk-magazine-factminers-fact-cloud-be-cidoc-crm-compliant#comments</comments>
</item>
<item>
 <title>FactMiners Milestone: Neo4j GraphGist Design Docs On-line</title>
 <link>http://www.softalkapple.com/blogs/factminers-neo4j-graphgist-design-docs-online</link>
 <description>&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot; property=&quot;content:encoded&quot;&gt;&lt;p&gt;I have been very quiet during the year-end holiday and throughout January. That is because I have been busy in the &#039;deep weeds&#039; of moving the FactMiners social-game ecosystem forward. Quick summary... The FactMiners game is the means we will use to create an incredible &#039;Fact Cloud&#039; of all the information in all 48 issues of Softalk magazine. It is an ambitious mission, but one that is guaranteed to be &quot;serious fun&quot; – especially when we create a crowdsource game technology and community (that&#039;s the ecosystem scope of this).&lt;/p&gt;
&lt;p&gt;I have posted two of four parts as an entry in the &lt;a href=&quot;https://github.com/neo4j-contrib/graphgist/wiki#graphgist-challenge-submissions&quot;&gt;Neo4j GraphGist Winter Challenge&lt;/a&gt;. This multi-part GraphGist presents the &lt;em&gt;&quot;embedded metamodel subgraph&quot; design pattern&lt;/em&gt; underlying the FactMiners ecosystem:&lt;/p&gt;
&lt;ul&gt;&lt;li&gt;&lt;a href=&quot;http://gist.neo4j.org/?8640853&quot;&gt;Part 1 explains the metamodel subgraph design pattern.&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://gist.neo4j.org/?7817558&quot;&gt;Part 2 demonstrates the pattern by starting to metamodel the Fact Cloud for the Softalk Magazine archive.&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;p&gt;What&#039;s up next? Getting the &lt;a href=&quot;http://www.FactMiners.org&quot;&gt;www.FactMiners.org&lt;/a&gt; Developers Community website up so I can move the &#039;deep weeds&#039; stuff about FactMiners to its intended and future home.&lt;/p&gt;
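A rough, hypothetical sketch of the 'embedded metamodel subgraph' idea: schema ('meta') nodes and instance nodes live in the same graph, and every instance points at the node that describes it, making the graph self-descriptive. The node names and relationship labels below are invented for illustration; the actual design lives in the GraphGists linked above.

```python
# Illustrative sketch (not the actual FactMiners/Neo4j implementation):
# an "embedded metamodel subgraph" keeps schema nodes and instance
# nodes in one graph, so the data carries its own description.

class Node:
    def __init__(self, label, **props):
        self.label = label
        self.props = props
        self.edges = []  # list of (relation, target) pairs

    def relate(self, relation, target):
        self.edges.append((relation, target))

# --- metamodel subgraph: what a magazine looks like ---
meta_issue = Node("MetaType", name="Issue")
meta_article = Node("MetaType", name="Article")
meta_issue.relate("HAS_PART", meta_article)

# --- instance subgraph: facts mined from the archive (example values) ---
issue = Node("Issue", date="1980-09")
article = Node("Article", title="(example article)")
issue.relate("HAS_PART", article)

# Instances point back at the metamodel, making the graph self-descriptive.
issue.relate("INSTANCE_OF", meta_issue)
article.relate("INSTANCE_OF", meta_article)

def meta_of(node):
    """Return the metamodel node that describes this instance, if any."""
    for relation, target in node.edges:
        if relation == "INSTANCE_OF":
            return target
    return None

print(meta_of(article).props["name"])  # prints "Article"
```

Because the schema is itself part of the graph, a game client can discover what kinds of facts a Fact Cloud holds by querying the graph, with no out-of-band schema file needed.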
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Sun, 02 Feb 2014 23:02:37 +0000</pubDate>
 <dc:creator>Jim Salmons</dc:creator>
 <guid isPermaLink="false">98 at http://www.softalkapple.com</guid>
 <comments>http://www.softalkapple.com/blogs/factminers-neo4j-graphgist-design-docs-online#comments</comments>
</item>
<item>
 <title>FactMiners: Scientists Say It&#039;s a Great Idea!</title>
 <link>http://www.softalkapple.com/blogs/factminers-scientists-say-its-great-idea</link>
 <description>&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot; property=&quot;content:encoded&quot;&gt;&lt;div class=&quot;image-right&quot;&gt;&lt;img src=&quot;/sites/default/files/images/zoran-popovic-011.png&quot; width=&quot;470&quot; height=&quot;286&quot; alt=&quot;Zoran Popovic, director of the Centre for Game Science at the University of Washington, is the co-creator of Foldit. Photograph: Michael Clinard&quot; /&gt;&lt;/div&gt;
&lt;p&gt;Okay, the headline&#039;s unnamed scientists did not specifically say that the FactMiners social-game ecosystem we&#039;re developing as part of The Softalk Apple Project is a great idea. What they are saying is that &lt;strong&gt;game-powered crowdsourcing methods are a tremendous resource for doing real and important science research&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;In today&#039;s world where pure science and many domains of research are financially challenged, getting gamers to have &quot;serious fun&quot; helping with underfunded research activity is a win-win for sure. But beyond creative financing, many scientists are also finding that social games with a &quot;serious fun&quot; side can be a great way to engage the public – a great way to have science be something &#039;we&#039; do rather than something &#039;scientists&#039; do &#039;over there&#039; (and without &#039;us&#039;). More win-win.&lt;/p&gt;
&lt;p&gt;You don&#039;t need me to fill you in further, simply check out this exciting article at &lt;strong&gt;The Guardian and Observer&lt;/strong&gt; website, &lt;em&gt;&lt;a href=&quot;http://www.theguardian.com/technology/2014/jan/25/online-gamers-solving-sciences-biggest-problems&quot;&gt;&#039;How online gamers are solving science&#039;s biggest problems&#039;&lt;/a&gt;&lt;/em&gt;. The column&#039;s author, Dara Mohammadi, has thoughtfully provided an excellent overview of this exciting gaming trend and then profiled ten examples with links to on-line games where you can help do serious scientific research by playing games.&lt;/p&gt;
&lt;div class=&quot;image-left&quot;&gt;&lt;img src=&quot;/sites/default/files/images/games-galaxy-001.png&quot; width=&quot;470&quot; height=&quot;315&quot; alt=&quot;games-galaxy-001.png&quot; /&gt;&lt;/div&gt;
&lt;p&gt;This article gives me the proverbial goosebumps. It affirms my personal belief about the potential for the FactMiners social-game ecosystem to be my &quot;pay it forward&quot; tribute in honor and recognition of the importance of Softalk Magazine. This article – and especially the games and associated projects to which it links – provides context for what we&#039;re doing here to create the first FactMiners Fact Cloud as a companion to the on-line digital archive of Softalk Magazine. It also provides good context for the excitement I feel about the ideas captured in the thread of blog posts looking at the potential to create FactMiners game plug-ins to build a Fact Cloud for the million-plus Public Domain Image Collection of the British Library.&lt;/p&gt;
&lt;p&gt;If FactMiners sounds like it might be an interesting idea to you, by all means check out this article. In the meantime, I have to get back to writing an entry for the Neo4j&#039;s January GraphGist Challenge. I am writing a piece to explore the embedded metamodel subgraph design pattern used for &quot;self-descriptive&quot; Fact Clouds that are part of the FactMiners social-game ecosystem.&lt;/p&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Sun, 26 Jan 2014 20:42:01 +0000</pubDate>
 <dc:creator>Jim Salmons</dc:creator>
 <guid isPermaLink="false">97 at http://www.softalkapple.com</guid>
 <comments>http://www.softalkapple.com/blogs/factminers-scientists-say-its-great-idea#comments</comments>
</item>
<item>
 <title>FactMiners: The Pursuit of Serious Fun with Images and Robots</title>
 <link>http://www.softalkapple.com/blogs/factminers-pursuit-serious-fun-images-and-robots</link>
 <description>&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot; property=&quot;content:encoded&quot;&gt;&lt;div class=&quot;inline_toc&quot;&gt;&lt;em&gt;Posts in this series...&lt;/em&gt;&lt;br /&gt;&lt;div class=&quot;view view-factminers-britlib view-id-factminers_britlib view-display-id-default view-dom-id-a0c73c4830f90c8e3882a476cd230d7a&quot;&gt;
      &lt;div class=&quot;view-content&quot;&gt;
      &lt;div class=&quot;item-list&quot;&gt;    &lt;ul&gt;          &lt;li class=&quot;&quot;&gt;  
  &lt;div class=&quot;views-field views-field-title&quot;&gt;        &lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;/blogs/factminers-fact-cloud-british-library-image-collection&quot;&gt;A FactMiners&amp;#039; Fact Cloud for the British Library Image Collection&lt;/a&gt;&lt;/span&gt;  &lt;/div&gt;&lt;/li&gt;
          &lt;li class=&quot;&quot;&gt;  
  &lt;div class=&quot;views-field views-field-title&quot;&gt;        &lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;/blogs/factminers-introducing-seeing-eye-child-robot-adoption-agency&quot;&gt;FactMiners: Introducing the &amp;#039;Seeing Eye Child&amp;#039; Robot Adoption Agency&lt;/a&gt;&lt;/span&gt;  &lt;/div&gt;&lt;/li&gt;
          &lt;li class=&quot;&quot;&gt;  
  &lt;div class=&quot;views-field views-field-title&quot;&gt;        &lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;/blogs/factminers-finding-cv-stem-british-library-image-collection&quot;&gt;FactMiners: Finding the &amp;#039;CV&amp;#039; in &amp;#039;STEM&amp;#039; at the British Library Image Collection&lt;/a&gt;&lt;/span&gt;  &lt;/div&gt;&lt;/li&gt;
          &lt;li class=&quot;&quot;&gt;  
  &lt;div class=&quot;views-field views-field-title&quot;&gt;        &lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;/blogs/factminers-quick-trip-stanford-vision-lab&quot;&gt;FactMiners: A Quick Trip to the Stanford Vision Lab&lt;/a&gt;&lt;/span&gt;  &lt;/div&gt;&lt;/li&gt;
          &lt;li class=&quot;&quot;&gt;  
  &lt;div class=&quot;views-field views-field-title&quot;&gt;        &lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;/blogs/factminers-pursuit-serious-fun-images-and-robots&quot;&gt;FactMiners: The Pursuit of Serious Fun with Images and Robots&lt;/a&gt;&lt;/span&gt;  &lt;/div&gt;&lt;/li&gt;
      &lt;/ul&gt;&lt;/div&gt;    &lt;/div&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;&lt;em&gt;Can kids and parents – students and tutors – have fun learning and teaching together AND create something that will contribute to advancing the state-of-the-art of computer vision and artificial intelligence research?&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;In 2009 when Dr. Li and kindred computer vision and artificial intelligence (CV/AI) researchers looked to the Internet for real-world data to test their machine-learning programs doing full scene image recognition, user-tagged collections of images on sites like Flickr were about as rich a learning resource as could be found. Dealing with &quot;dirty&quot;/irrelevant tags in such image collections is a non-trivial challenge for these CV researchers. And it was certainly reasonable for these researchers&#039; study designs to assume a scarcity of (presumably expensive) human resources for both materials prep and interactive tutoring/training of machine-learning programs.&lt;/p&gt;
&lt;p&gt;By 2015 the &lt;strong&gt;&#039;Seeing Eye Child&#039; Robot Adoption Agency&lt;/strong&gt; plans to have both:&lt;/p&gt;
&lt;ul&gt;&lt;li&gt;a &lt;strong&gt;semantically-rich Fact Cloud&lt;/strong&gt; for a non-trivial subset of the &lt;strong&gt;British Library Image Collection&lt;/strong&gt;, AND&lt;/li&gt;
&lt;li&gt;a &lt;strong&gt;game-energized, crowdsource-powered human-tutor resource&lt;/strong&gt; freely available as a &lt;strong&gt;CV/AI machine-learning program training resource&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;div class=&quot;image-right&quot;&gt;&lt;img src=&quot;/sites/default/files/images/FUTURAMA-Season-6B-Benderama_grand_opening.png&quot; width=&quot;520&quot; height=&quot;290&quot; alt=&quot;FUTURAMA-Season-6B-Benderama_grand_opening.png&quot; /&gt;&lt;/div&gt;
&lt;p&gt;Through initial counsel and collaboration with active CV/AI researchers, we will refine and extend our game design and community dynamics to transition the &#039;Seeing Eye Child&#039; Robot Adoption Agency into its mature and sustainable state. &lt;/p&gt;
&lt;p&gt;At sustainable maturity, the Robot Adoption Agency gaming community will attract programming learner-players who will use the game&#039;s &quot;sandbox&quot; resource and community to develop and extend their CV/AI skills and interests. Some proportion of those programmer-players will develop a deep interest in human/computer interaction and contribute to the Robot Adoption Agency&#039;s gaming community by creating such components as new Open Source training/tutor workflow plug-ins. Those with interests driven more by game design and development will likely contribute presentation/interaction plug-ins to add fun and engaging robot character generators for our programmer-players&#039; otherwise unseen running-in-memory agent programs. When we get to this level of community self-support, the game will be in its own good hands.&lt;/p&gt;
&lt;div class=&quot;image-left&quot;&gt;&lt;img src=&quot;/sites/default/files/images/FactMiners_ecosystem.png&quot; width=&quot;500&quot; height=&quot;423&quot; alt=&quot;FactMiners ecosystem&quot; /&gt;&lt;/div&gt;
&lt;p&gt;So, what&#039;s next? Will the FactMiners ecosystem ever be more than just an interesting idea that remains untried? I, for one, don&#039;t intend to let that happen. Step by step, we&#039;re moving forward. Through the &lt;a href=&quot;/blogs/factminers-more-or-less-folksonomy&quot;&gt;&lt;em&gt;&quot;FactMiners: More or Less Folksonomy?&quot;&lt;/em&gt;&lt;/a&gt; article, we have reached out and begun collaborations with museum informatics professionals, both to draw on their domain expertise and to find kindred spirits interested in hosting FactMiners Fact Cloud companions for their on-line digital collections. In this article, we&#039;ve described how the FactMiners ecosystem and its Fact Cloud architecture can accommodate image-based digital collections in addition to the print/text realm of complex magazine document structure that is our project focus at The Softalk Apple Project.&lt;/p&gt;
&lt;p&gt;In exploring this new use case within digital image collections for the FactMiners ecosystem, we have identified how our game design can &quot;play&quot; into the domains of computer vision (CV) and artificial intelligence (AI). So among our next steps along the path of bringing the FactMiners ecosystem to life will be to find some kindred spirits in the CV/AI domain interested in exploring just how fun (and useful) it would be to have a British Library Image Collection Fact Cloud companion and &#039;Seeing Eye Child&#039; robot-tutor web service.&lt;/p&gt;
&lt;p&gt;I believe if we can bring the active interest of a CV/AI collaborator to the table as we discuss this idea further with the good folks at the British Library Labs, we&#039;ll be a BIG step closer to opening the Internet&#039;s first &#039;Seeing Eye Child&#039; Robot Adoption Agency courtesy of the collective efforts of the FactMiners developer community, the British Library Labs, and some as-yet-unidentified CV/AI researchers. Stay tuned...&lt;/p&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Fri, 10 Jan 2014 22:24:39 +0000</pubDate>
 <dc:creator>Jim Salmons</dc:creator>
 <guid isPermaLink="false">95 at http://www.softalkapple.com</guid>
 <comments>http://www.softalkapple.com/blogs/factminers-pursuit-serious-fun-images-and-robots#comments</comments>
</item>
<item>
 <title>FactMiners: A Quick Trip to the Stanford Vision Lab</title>
 <link>http://www.softalkapple.com/blogs/factminers-quick-trip-stanford-vision-lab</link>
 <description>&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot; property=&quot;content:encoded&quot;&gt;&lt;div class=&quot;inline_toc&quot;&gt;&lt;em&gt;Posts in this series...&lt;/em&gt;&lt;br /&gt;&lt;div class=&quot;view view-factminers-britlib view-id-factminers_britlib view-display-id-default view-dom-id-cd0b4a3eeca030f99d9ace5ef6d4bb6a&quot;&gt;
      &lt;div class=&quot;view-content&quot;&gt;
      &lt;div class=&quot;item-list&quot;&gt;    &lt;ul&gt;          &lt;li class=&quot;&quot;&gt;  
  &lt;div class=&quot;views-field views-field-title&quot;&gt;        &lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;/blogs/factminers-fact-cloud-british-library-image-collection&quot;&gt;A FactMiners&amp;#039; Fact Cloud for the British Library Image Collection&lt;/a&gt;&lt;/span&gt;  &lt;/div&gt;&lt;/li&gt;
          &lt;li class=&quot;&quot;&gt;  
  &lt;div class=&quot;views-field views-field-title&quot;&gt;        &lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;/blogs/factminers-introducing-seeing-eye-child-robot-adoption-agency&quot;&gt;FactMiners: Introducing the &amp;#039;Seeing Eye Child&amp;#039; Robot Adoption Agency&lt;/a&gt;&lt;/span&gt;  &lt;/div&gt;&lt;/li&gt;
          &lt;li class=&quot;&quot;&gt;  
  &lt;div class=&quot;views-field views-field-title&quot;&gt;        &lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;/blogs/factminers-finding-cv-stem-british-library-image-collection&quot;&gt;FactMiners: Finding the &amp;#039;CV&amp;#039; in &amp;#039;STEM&amp;#039; at the British Library Image Collection&lt;/a&gt;&lt;/span&gt;  &lt;/div&gt;&lt;/li&gt;
          &lt;li class=&quot;&quot;&gt;  
  &lt;div class=&quot;views-field views-field-title&quot;&gt;        &lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;/blogs/factminers-quick-trip-stanford-vision-lab&quot;&gt;FactMiners: A Quick Trip to the Stanford Vision Lab&lt;/a&gt;&lt;/span&gt;  &lt;/div&gt;&lt;/li&gt;
          &lt;li class=&quot;&quot;&gt;  
  &lt;div class=&quot;views-field views-field-title&quot;&gt;        &lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;/blogs/factminers-pursuit-serious-fun-images-and-robots&quot;&gt;FactMiners: The Pursuit of Serious Fun with Images and Robots&lt;/a&gt;&lt;/span&gt;  &lt;/div&gt;&lt;/li&gt;
      &lt;/ul&gt;&lt;/div&gt;    &lt;/div&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;div class=&quot;image-left&quot;&gt;&lt;img src=&quot;/sites/default/files/images/fei-fei-li-visionlab_logo.png&quot; width=&quot;281&quot; height=&quot;192&quot; alt=&quot;fei-fei-li-visionlab_logo.png&quot; /&gt;&lt;/div&gt;
&lt;p&gt;&lt;a href=&quot;http://ai.stanford.edu/site/research/#li&quot;&gt;Fei-Fei Li&lt;/a&gt; is the director of the Computer and Human &lt;a href=&quot;http://vision.stanford.edu/&quot;&gt;Vision Lab&lt;/a&gt; within the legendary &lt;a href=&quot;http://ai.stanford.edu/&quot;&gt;Stanford Artificial Intelligence Laboratory&lt;/a&gt;. While her research interests and breakthrough contributions to the field are wide-ranging, I want to focus briefly on a 2009 study she and her colleagues did at Princeton, before Dr. Li&#039;s selection to head the prestigious Stanford Vision Lab. &lt;em&gt;&lt;a href=&quot;http://vision.stanford.edu/projects/totalscene/index.html&quot;&gt;&quot;Towards Total Scene Understanding: Classification, Annotation and Segmentation in an Automatic Framework&quot;&lt;/a&gt;&lt;/em&gt; is a remarkable project and representative of the kind of machine-learning image recognition capabilities envisioned at &quot;serious play&quot; in the &#039;Seeing Eye Child&#039; Robot Adoption Agency game. &lt;/p&gt;
&lt;div class=&quot;image-right&quot;&gt;&lt;img src=&quot;/sites/default/files/images/stanford_cvlab_totalscene_coherent_model.png&quot; width=&quot;400&quot; height=&quot;254&quot; alt=&quot;stanford_cvlab_totalscene_coherent_model.png&quot; /&gt;&lt;/div&gt;
&lt;p&gt;Researchers like Dr. Li are creating machine-learning programs that can effectively &#039;look&#039; at a previously unseen image and not just find objects within the image but interpret the full scene. This example graphic from the project&#039;s summary shows how a total scene is considered as a comprehensive model that incorporates both top-down context, e.g. a polo match, along with both visual and textual (tag) elements. For Dr. Li and associates&#039; study, the tags and images were drawn from Flickr – yes, the same popular image-sharing site where the British Library released its 1-million-plus public domain image collection – and the end-to-end automatic machine-learning/scene-recognition process is generally described as shown in the following 3-step workflow diagram from the study.&lt;/p&gt;
&lt;div class=&quot;image-left&quot;&gt;&lt;img src=&quot;/sites/default/files/images/stanford_cvlab_totalscene_automatic_framework_system_flow.png&quot; width=&quot;500&quot; height=&quot;241&quot; alt=&quot;stanford_cvlab_totalscene_automatic_framework_system_flow.png&quot; /&gt;&lt;/div&gt;
&lt;p&gt;The results of this machine-learning strategy are both remarkable and encouraging for our game design requirements. Given a &#039;seed&#039; set of curated images with text label &#039;hints&#039; and a representative collection of similar-scene images clustered by the eight targeted sports categories, the researchers&#039; automated process does a remarkable job of finding and labeling these elements in new unseen images, producing results such as this test image which has been correctly recognized as a polo scene with horse and rider, trees, and grass: &lt;/p&gt;
&lt;div class=&quot;image-center&quot;&gt;&lt;img src=&quot;/sites/default/files/images/stanford_cvlab_totalscene_tagged_image_polo.png&quot; width=&quot;544&quot; height=&quot;306&quot; alt=&quot;stanford_cvlab_totalscene_tagged_image_polo.png&quot; /&gt;&lt;/div&gt;
&lt;p&gt;While Dr. Li and associates&#039; strategy is particularly robust and comprehensive, their paper cites 22 additional studies in four broad areas of CV/AI research that tackle the full scene understanding challenge:&lt;/p&gt;
&lt;ul&gt;&lt;li&gt;Image understanding using contextual information&lt;/li&gt;
&lt;li&gt;Machine translation between words and images&lt;/li&gt;
&lt;li&gt;Simultaneous object recognition and segmentation&lt;/li&gt;
&lt;li&gt;Learning semantic visual models from Internet data&lt;/li&gt;
&lt;/ul&gt;&lt;p&gt;So it is safe to say that the domains of computer vision and artificial intelligence (CV/AI) are at a sufficient stage of capability and active research that the kind of &#039;robot&#039; vision required for our game design is both doable and getting better. If you have any doubts (or, better yet, an interest in knowing more), I encourage you to read &lt;a href=&quot;http://vision.stanford.edu/projects/totalscene/index.html&quot;&gt;Dr. Li&#039;s project overview&lt;/a&gt;, or, even better, the &lt;a href=&quot;http://vision.stanford.edu/documents/LiSocherFei-Fei_CVPR2009.pdf&quot;&gt;full study PDF&lt;/a&gt;, and follow links to cited related research.&lt;/p&gt;
&lt;p&gt;In the &lt;a href=&quot;/blogs/factminers-pursuit-serious-fun-images-and-robots&quot;&gt;concluding post of this series&lt;/a&gt; about the potential for FactMiners to contribute to the &quot;serious fun&quot; at the British Library Image Collection, I&#039;ll set some goals and chart a course forward to add the &lt;strong&gt;&#039;Seeing Eye Child&#039; Robot Adoption Agency&lt;/strong&gt; to the selection of social-learning games to be developed by, and available to, the FactMiners gaming community.&lt;/p&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Wed, 01 Jan 2014 22:11:31 +0000</pubDate>
 <dc:creator>Jim Salmons</dc:creator>
 <guid isPermaLink="false">94 at http://www.softalkapple.com</guid>
 <comments>http://www.softalkapple.com/blogs/factminers-quick-trip-stanford-vision-lab#comments</comments>
</item>
<item>
 <title>FactMiners: Finding the &#039;CV&#039; in &#039;STEM&#039; at the British Library Image Collection</title>
 <link>http://www.softalkapple.com/blogs/factminers-finding-cv-stem-british-library-image-collection</link>
 <description>&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot; property=&quot;content:encoded&quot;&gt;&lt;div class=&quot;inline_toc&quot;&gt;&lt;em&gt;Posts in this series...&lt;/em&gt;&lt;br /&gt;&lt;div class=&quot;view view-factminers-britlib view-id-factminers_britlib view-display-id-default view-dom-id-ab2feed29565d9dfc4b62fc079b370b9&quot;&gt;
      &lt;div class=&quot;view-content&quot;&gt;
      &lt;div class=&quot;item-list&quot;&gt;    &lt;ul&gt;          &lt;li class=&quot;&quot;&gt;  
  &lt;div class=&quot;views-field views-field-title&quot;&gt;        &lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;/blogs/factminers-fact-cloud-british-library-image-collection&quot;&gt;A FactMiners&amp;#039; Fact Cloud for the British Library Image Collection&lt;/a&gt;&lt;/span&gt;  &lt;/div&gt;&lt;/li&gt;
          &lt;li class=&quot;&quot;&gt;  
  &lt;div class=&quot;views-field views-field-title&quot;&gt;        &lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;/blogs/factminers-introducing-seeing-eye-child-robot-adoption-agency&quot;&gt;FactMiners: Introducing the &amp;#039;Seeing Eye Child&amp;#039; Robot Adoption Agency&lt;/a&gt;&lt;/span&gt;  &lt;/div&gt;&lt;/li&gt;
          &lt;li class=&quot;&quot;&gt;  
  &lt;div class=&quot;views-field views-field-title&quot;&gt;        &lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;/blogs/factminers-finding-cv-stem-british-library-image-collection&quot;&gt;FactMiners: Finding the &amp;#039;CV&amp;#039; in &amp;#039;STEM&amp;#039; at the British Library Image Collection&lt;/a&gt;&lt;/span&gt;  &lt;/div&gt;&lt;/li&gt;
          &lt;li class=&quot;&quot;&gt;  
  &lt;div class=&quot;views-field views-field-title&quot;&gt;        &lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;/blogs/factminers-quick-trip-stanford-vision-lab&quot;&gt;FactMiners: A Quick Trip to the Stanford Vision Lab&lt;/a&gt;&lt;/span&gt;  &lt;/div&gt;&lt;/li&gt;
          &lt;li class=&quot;&quot;&gt;  
  &lt;div class=&quot;views-field views-field-title&quot;&gt;        &lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;/blogs/factminers-pursuit-serious-fun-images-and-robots&quot;&gt;FactMiners: The Pursuit of Serious Fun with Images and Robots&lt;/a&gt;&lt;/span&gt;  &lt;/div&gt;&lt;/li&gt;
      &lt;/ul&gt;&lt;/div&gt;    &lt;/div&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Computer Vision&lt;/strong&gt;, known by its popular acronym &#039;&lt;strong&gt;CV&lt;/strong&gt;&#039;, is a field of scientific knowledge and practice within Artificial Intelligence, itself part of the broader domain of Computer Science. CV is a particularly challenging field with strong connections to each of the Science, Technology, Engineering, and Math &#039;branches&#039; of the &lt;a href=&quot;http://en.wikipedia.org/wiki/STEM_fields&quot;&gt;STEM fields&lt;/a&gt; of education.&lt;/p&gt;
&lt;div class=&quot;image-left&quot;&gt;&lt;img src=&quot;/sites/default/files/images/auto-plant-robots.png&quot; width=&quot;415&quot; height=&quot;307&quot; alt=&quot;auto-plant-robots.png&quot; /&gt;&lt;/div&gt;
&lt;p&gt;You only have to glance at the image here of a modern auto assembly line and compare it to one of our not-so-distant mid-20th-century lines, or watch the workerbot demo video, to know this: &lt;em&gt;What is the exciting bleeding edge of CV and AI research and industrial practice today will be a mainstream, vital skill area for both entrepreneurial and employment opportunities in the near and foreseeable future.&lt;/em&gt;&lt;/p&gt;
&lt;div class=&quot;image-right&quot;&gt;
&lt;iframe width=&quot;400&quot; height=&quot;225&quot; src=&quot;//www.youtube.com/embed/UJMHO29FRbA?rel=0&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;And how will this growing cohort of budding job creators and job fillers gain their skills in Computer Vision programming?&lt;/strong&gt; If they get to play the &lt;strong&gt;&#039;Seeing Eye Child&#039; Robot Adoption Agency&lt;/strong&gt; game during their formative years, a growing number of them could develop strong CV and AI programming skills and interests by writing FactMiners &#039;Robot players&#039; to be, in effect, &#039;put up&#039; for adoption in the game&#039;s Adoption Agency. Role-wise, these &#039;robot&#039; player/programs are the &#039;real&#039; player&#039;s &lt;a href=&quot;http://en.wikipedia.org/wiki/Agent-based_model&quot;&gt;agent-actor programs&lt;/a&gt;. To borrow the &lt;a href=&quot;http://sohodojo.com/newsletters/rnr_newsletter_05.html#topic3&quot;&gt;anthropomorphic imagery&lt;/a&gt; of &lt;a href=&quot;http://www.imdb.com/title/tt0084827/&quot;&gt;Disney&#039;s 1982 sci-fi classic, TRON&lt;/a&gt;, our young programming &quot;Flynn&quot; user/players will send their &quot;Clu&quot; agent/programs into the Robot Adoption Agency to begin a cyber-learning journey into the British Library Digital Image Collection.&lt;/p&gt;
&lt;div class=&quot;image-left&quot;&gt;&lt;a href=&quot;http://www.imdb.com/title/tt0084827/&quot;&gt;&lt;img src=&quot;/sites/default/files/images/Tron_1982.jpg&quot; width=&quot;214&quot; height=&quot;314&quot; alt=&quot;Tron_1982.jpg&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;p&gt;From an &#039;ends-testing&#039; perspective our concern for the game&#039;s Adoption Agency robot supply can more creatively be seen, not as a &lt;em&gt;robot&lt;/em&gt; (i.e. agent-program) supply &lt;strong&gt;problem&lt;/strong&gt;, but rather as a &lt;em&gt;robot-programmer&lt;/em&gt; supply &lt;strong&gt;opportunity&lt;/strong&gt;. Our game&#039;s desirable side-effect is to act as part of its own positive feedback loop whereby the demand for more and better CV/AI programs in the emerging &lt;a href=&quot;http://en.wikipedia.org/wiki/Internet_of_things&quot;&gt;Internet of Things&lt;/a&gt; will generate demand for more and better robot programmers.&lt;/p&gt;
&lt;p&gt;Having both means- and ends-tested the motivation and justification for this game design idea, a fundamental question remains... &lt;strong&gt;Is machine-learning image recognition of the kind assumed in the proposed Robot Adoption Agency game even possible?&lt;/strong&gt; And if it is, &lt;strong&gt;is there a place for human-mediated vision training (the &#039;Seeing Eye Child&#039; player&#039;s role) as envisioned in the game?&lt;/strong&gt; To consider these important questions, let&#039;s take a &lt;a href=&quot;/blogs/factminers-quick-trip-stanford-vision-lab&quot;&gt;quick trip to the legendary Stanford Vision Lab&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Sun, 29 Dec 2013 01:33:12 +0000</pubDate>
 <dc:creator>Jim Salmons</dc:creator>
 <guid isPermaLink="false">92 at http://www.softalkapple.com</guid>
 <comments>http://www.softalkapple.com/blogs/factminers-finding-cv-stem-british-library-image-collection#comments</comments>
</item>
<item>
 <title>FactMiners: Introducing the &#039;Seeing Eye Child&#039; Robot Adoption Agency</title>
 <link>http://www.softalkapple.com/blogs/factminers-introducing-seeing-eye-child-robot-adoption-agency</link>
 <description>&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot; property=&quot;content:encoded&quot;&gt;&lt;div class=&quot;inline_toc&quot;&gt;&lt;em&gt;Posts in this series...&lt;/em&gt;&lt;br /&gt;&lt;div class=&quot;view view-factminers-britlib view-id-factminers_britlib view-display-id-default view-dom-id-aabca69d3a9333c99a44d80fbc4bac32&quot;&gt;
      &lt;div class=&quot;view-content&quot;&gt;
      &lt;div class=&quot;item-list&quot;&gt;    &lt;ul&gt;          &lt;li class=&quot;&quot;&gt;  
  &lt;div class=&quot;views-field views-field-title&quot;&gt;        &lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;/blogs/factminers-fact-cloud-british-library-image-collection&quot;&gt;A FactMiners&amp;#039; Fact Cloud for the British Library Image Collection&lt;/a&gt;&lt;/span&gt;  &lt;/div&gt;&lt;/li&gt;
          &lt;li class=&quot;&quot;&gt;  
  &lt;div class=&quot;views-field views-field-title&quot;&gt;        &lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;/blogs/factminers-introducing-seeing-eye-child-robot-adoption-agency&quot;&gt;FactMiners: Introducing the &amp;#039;Seeing Eye Child&amp;#039; Robot Adoption Agency&lt;/a&gt;&lt;/span&gt;  &lt;/div&gt;&lt;/li&gt;
          &lt;li class=&quot;&quot;&gt;  
  &lt;div class=&quot;views-field views-field-title&quot;&gt;        &lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;/blogs/factminers-finding-cv-stem-british-library-image-collection&quot;&gt;FactMiners: Finding the &amp;#039;CV&amp;#039; in &amp;#039;STEM&amp;#039; at the British Library Image Collection&lt;/a&gt;&lt;/span&gt;  &lt;/div&gt;&lt;/li&gt;
          &lt;li class=&quot;&quot;&gt;  
  &lt;div class=&quot;views-field views-field-title&quot;&gt;        &lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;/blogs/factminers-quick-trip-stanford-vision-lab&quot;&gt;FactMiners: A Quick Trip to the Stanford Vision Lab&lt;/a&gt;&lt;/span&gt;  &lt;/div&gt;&lt;/li&gt;
          &lt;li class=&quot;&quot;&gt;  
  &lt;div class=&quot;views-field views-field-title&quot;&gt;        &lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;/blogs/factminers-pursuit-serious-fun-images-and-robots&quot;&gt;FactMiners: The Pursuit of Serious Fun with Images and Robots&lt;/a&gt;&lt;/span&gt;  &lt;/div&gt;&lt;/li&gt;
      &lt;/ul&gt;&lt;/div&gt;    &lt;/div&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;In the &lt;a href=&quot;/blogs/factminers-fact-cloud-british-library-image-collection&quot;&gt;first part of this informal proposal to creatively tap the newly-published British Library Image Collection&lt;/a&gt;, I imagined a plug-in game to be developed as part of the &lt;a href=&quot;/blogs/factminers-more-or-less-folksonomy&quot;&gt;&lt;strong&gt;FactMiners&lt;/strong&gt; social-game ecosystem&lt;/a&gt;. In this adult/child-interactive early-learning app, gameplayers collectively contribute to building a &lt;strong&gt;Fact Cloud&lt;/strong&gt; of &lt;em&gt;&quot;What&#039;s in this picture&quot;&lt;/em&gt; facts (elementary sentence-like assertions stored in a graph database) for the more than one million images recently uploaded to the &lt;a href=&quot;http://www.flickr.com/photos/britishlibrary&quot;&gt;Flickr Commons&lt;/a&gt;. Parents playing this new word/picture FactMiners plug-in game with their kids create the Fact Cloud that becomes a vital resource for a second new FactMiners game: the &lt;strong&gt;&#039;Seeing Eye Child&#039; Robot Adoption Agency&lt;/strong&gt;.&lt;/p&gt;
&lt;div class=&quot;image-left&quot;&gt;&lt;img src=&quot;/sites/default/files/images/FUTURAMA-Season-6B-Benderama_SECRAAremix.png&quot; width=&quot;520&quot; height=&quot;300&quot; alt=&quot;FUTURAMA-Season-6B-Benderama_SECRAAremix.png&quot; /&gt;&lt;/div&gt;
&lt;p&gt;The Robot Adoption Agency is similar to the &lt;a href=&quot;http://en.wikipedia.org/wiki/Tamagotchi&quot;&gt;&#039;Tamagotchi&#039; or &#039;digital pet&#039;&lt;/a&gt; gaming phenomenon that hit in the mid-1990s and is still going strong. The difference here is that we harness our little cognitive learning machines – AKA FactMiners game players – to &#039;adopt&#039; a robot (AKA a machine-learning program with some form of vision – image intake and analysis – capability) and help it learn to see and understand its world. As an adoptive &#039;Seeing Eye Child&#039;, players take on the &lt;em&gt;roles&lt;/em&gt; of &lt;strong&gt;coach&lt;/strong&gt; and &lt;strong&gt;referee&lt;/strong&gt; for training sessions where adopted robots learn to see what&#039;s in the British Library images.&lt;/p&gt;
&lt;p&gt;The nimble-thinking among you have likely spotted the weak link in this proposed game design... &lt;strong&gt;robot supply&lt;/strong&gt;. &lt;/p&gt;
&lt;div class=&quot;image-right&quot;&gt;&lt;img src=&quot;/sites/default/files/images/auto-plant-robots.png&quot; width=&quot;554&quot; height=&quot;409&quot; alt=&quot;auto-plant-robots.png&quot; /&gt;&lt;/div&gt;
&lt;p&gt;Are we honestly to believe that the latest industrial robots ready to be brought on-line to a Ford or Toyota auto assembly line are in need of vision-training sessions with young kids mentoring their ability to recognize scenes from 17th-19th century book illustrations? Well, that is most certainly not the case. &lt;/p&gt;
&lt;p&gt;So how can the Fact Cloud creators – those playing the word/picture FactMiners game that creates the Fact Cloud descriptive companion to the British Library Image Collection – be motivated to create that Fact Cloud if its imagined great use in robot vision training turns out not to be a need at all? What kid is going to wait around a Robot Adoption Agency match-making server&#039;s &#039;waiting room&#039; for an adoptable robot that may never show up?&lt;/p&gt;
&lt;p&gt;Fortunately, we can consider both the means and the ends of the Fact Cloud creation effort to answer such important questions. From a &#039;means value&#039; perspective, the image-describing FactMiners gameplay that creates the Fact Cloud is a fun, social, interactive learning activity. There is an immediate and personal motivation and value for parents, siblings, tutors, and teachers to help little learners build the British Library Image Collection Fact Cloud. So even if the robot vision training need were to turn out to be an elusive future-imagining, the &#039;serious fun&#039; of building the Fact Cloud is time and energy well spent on direct, interactive childhood development and educational activity.&lt;/p&gt;
&lt;p&gt;Having &#039;means-tested&#039; the effort to create the British Library Image Collection Fact Cloud, in my next post I will turn our attention to the &#039;ends&#039; test – &lt;a href=&quot;/blogs/factminers-finding-cv-stem-british-library-image-collection&quot;&gt;Will we really have a robot supply problem at the &#039;Seeing Eye Child&#039; Robot Adoption Agency?&lt;/a&gt;...&lt;/p&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Wed, 25 Dec 2013 22:03:17 +0000</pubDate>
 <dc:creator>Jim Salmons</dc:creator>
 <guid isPermaLink="false">93 at http://www.softalkapple.com</guid>
 <comments>http://www.softalkapple.com/blogs/factminers-introducing-seeing-eye-child-robot-adoption-agency#comments</comments>
</item>
<item>
 <title>A FactMiners&#039; Fact Cloud for the British Library Image Collection</title>
 <link>http://www.softalkapple.com/blogs/factminers-fact-cloud-british-library-image-collection</link>
 <description>&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot; property=&quot;content:encoded&quot;&gt;&lt;div class=&quot;inline_toc&quot;&gt;&lt;em&gt;Posts in this series...&lt;/em&gt;&lt;br /&gt;&lt;div class=&quot;view view-factminers-britlib view-id-factminers_britlib view-display-id-default view-dom-id-d071a7e3f10414c60fe4e0a42cf9dc92&quot;&gt;
      &lt;div class=&quot;view-content&quot;&gt;
      &lt;div class=&quot;item-list&quot;&gt;    &lt;ul&gt;          &lt;li class=&quot;&quot;&gt;  
  &lt;div class=&quot;views-field views-field-title&quot;&gt;        &lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;/blogs/factminers-fact-cloud-british-library-image-collection&quot;&gt;A FactMiners&amp;#039; Fact Cloud for the British Library Image Collection&lt;/a&gt;&lt;/span&gt;  &lt;/div&gt;&lt;/li&gt;
          &lt;li class=&quot;&quot;&gt;  
  &lt;div class=&quot;views-field views-field-title&quot;&gt;        &lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;/blogs/factminers-introducing-seeing-eye-child-robot-adoption-agency&quot;&gt;FactMiners: Introducing the &amp;#039;Seeing Eye Child&amp;#039; Robot Adoption Agency&lt;/a&gt;&lt;/span&gt;  &lt;/div&gt;&lt;/li&gt;
          &lt;li class=&quot;&quot;&gt;  
  &lt;div class=&quot;views-field views-field-title&quot;&gt;        &lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;/blogs/factminers-finding-cv-stem-british-library-image-collection&quot;&gt;FactMiners: Finding the &amp;#039;CV&amp;#039; in &amp;#039;STEM&amp;#039; at the British Library Image Collection&lt;/a&gt;&lt;/span&gt;  &lt;/div&gt;&lt;/li&gt;
          &lt;li class=&quot;&quot;&gt;  
  &lt;div class=&quot;views-field views-field-title&quot;&gt;        &lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;/blogs/factminers-quick-trip-stanford-vision-lab&quot;&gt;FactMiners: A Quick Trip to the Stanford Vision Lab&lt;/a&gt;&lt;/span&gt;  &lt;/div&gt;&lt;/li&gt;
          &lt;li class=&quot;&quot;&gt;  
  &lt;div class=&quot;views-field views-field-title&quot;&gt;        &lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;/blogs/factminers-pursuit-serious-fun-images-and-robots&quot;&gt;FactMiners: The Pursuit of Serious Fun with Images and Robots&lt;/a&gt;&lt;/span&gt;  &lt;/div&gt;&lt;/li&gt;
      &lt;/ul&gt;&lt;/div&gt;    &lt;/div&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;div class=&quot;image-left&quot;&gt;&lt;img src=&quot;/sites/default/files/images/BritishLibrary_Flickr_images.png&quot; width=&quot;292&quot; height=&quot;233&quot; alt=&quot;BritishLibrary_Flickr_images.png&quot; /&gt;&lt;/div&gt;
&lt;p&gt;I was thrilled to read the announcement this week in the &lt;a href=&quot;http://britishlibrary.typepad.co.uk/digital-scholarship/index.html&quot;&gt;British Library Digital Scholarship blog&lt;/a&gt; about the &lt;a href=&quot;http://www.flickr.com/photos/britishlibrary&quot;&gt;Library&#039;s uploading to the Flickr Commons of over 1 million Public Domain images&lt;/a&gt; scanned from 17th, 18th, and 19th century books in the Library&#039;s physical collections. The Flickr image collection makes the individual images easily available for public use. Currently, the metadata about each image includes only the most basic source information and nothing about the image itself. In the words of project tech lead Ben O&#039;Steen:&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;
	We may know which book, volume and page an image was drawn from, but we know nothing about a given image. Consider the image below. The title of the work may suggest the thematic subject matter of any illustrations in the book, but it doesn&#039;t suggest how colourful and arresting these images are.&lt;/p&gt;
&lt;div class=&quot;image-right&quot;&gt;&lt;a href=&quot;http://www.flickr.com/photos/britishlibrary/11075039705/&quot;&gt;&lt;img src=&quot;http://britishlibrary.typepad.co.uk/.a/6a00d8341c464853ef019b029b054d970b-800wi&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;p&gt;	&lt;a href=&quot;http://www.flickr.com/photos/britishlibrary/tags/imagesfrombook001012871/&quot;&gt;See more from this book&lt;/a&gt;: &quot;Historia de las Indias de Nueva-España y islas de Tierra Firme...&quot; (1867)&lt;/p&gt;
&lt;p&gt;	We plan to launch a crowdsourcing application at the beginning of next year, to help describe what the images portray. Our intention is to use this data to train automated classifiers that will run against the whole of the content. The data from this will be as openly licensed as is sensible (given the nature of crowdsourcing) and the code, as always, will be under an open license.
&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;Ben went on to explain, &quot;Which brings me to the point of this release. &lt;strong&gt;We are looking for new, inventive ways to navigate, find and display these &#039;unseen illustrations&#039;&lt;/strong&gt;.&quot;&lt;/p&gt;
&lt;p&gt;Well, Ben&#039;s challenge got me thinking... &lt;strong&gt;What would be the value of creating a FactMiners&#039; Fact Cloud Companion to the British Library Public Domain Image Collection?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;And that&#039;s when I had my latest &quot;Eureka Moment&quot; about why the &lt;a href=&quot;http://www.softalkapple.com/blogs/factminers-more-or-less-folksonomy&quot;&gt;FactMiners social-game ecosystem&lt;/a&gt; is such a compelling idea (at least to me and a few others at this point :-) ). First, let me briefly describe what a Fact Cloud Companion would look like for the British Library Image Collection before exploring why this is such an exciting and potentially important idea.&lt;/p&gt;
&lt;h2&gt;A FactMiners Fact Cloud for Images: What?&lt;/h2&gt;
&lt;p&gt;When Ben laments that the Library&#039;s image collection does not know anything about the content of its individual images, I believe he &#039;undersold&#039; the point by alluding only to the metadata not telling us how colorful or arresting an image is. There is a much more significant truth underlying his statement.&lt;/p&gt;
&lt;p&gt;Images are incredible &quot;compressed storage&quot; of all the &quot;facts&quot; (verbal assertions) that we instantly understand when we humans look at an image. The image Ben referenced above of the man in ceremonial South American tribal regalia is chock-full of &quot;facts&quot; like:&lt;/p&gt;
&lt;ul&gt;&lt;li&gt;The man is wearing a mask.&lt;/li&gt;
&lt;li&gt;The man is wearing a blue tunic.&lt;/li&gt;
&lt;li&gt;The man is holding a long, pointed, wavy stick.&lt;/li&gt;
&lt;li&gt;The man has a feathered shield in his left hand.&lt;/li&gt;
&lt;li&gt;The man is standing on a fringed rug.&lt;/li&gt;
&lt;li&gt;The man has a beaded bracelet on his right arm.&lt;/li&gt;
&lt;/ul&gt;&lt;p&gt;I&#039;ve written briefly about &lt;a href=&quot;http://www.softalkapple.com/about_graphs_and_factmining&quot;&gt;how an Open Source graph database, like Neo4j, is an ideal technology for capturing FactMiners&#039; Fact Clouds&lt;/a&gt;. So I won&#039;t belabor the point by drilling down here on these example &#039;image facts&#039; to the level of graph data insertions or related queries. Suffice it to say that the means are readily available to design and capture a reasonable and useful graph database of facts/assertions about what is &quot;seen&quot; in the &quot;unseen illustrations&quot; of the British Library image collection.&lt;/p&gt;
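&lt;p&gt;To make the idea concrete, here is a minimal sketch, in plain Python rather than the Neo4j graph database we favor, of how such &quot;What&#039;s in this picture&quot; facts could be stored as simple subject-predicate-object assertions and queried. All the identifiers below are illustrative assumptions, not the project&#039;s actual schema.&lt;/p&gt;

```python
# A minimal sketch of a "Fact Cloud" as subject-predicate-object
# assertions about one image. Every name here is an illustrative
# assumption, not the project's actual Neo4j schema.

facts = [
    ("image:11075039705", "depicts", "man"),
    ("man", "is_wearing", "mask"),
    ("man", "is_wearing", "blue tunic"),
    ("man", "is_holding", "long wavy stick"),
    ("man", "is_standing_on", "fringed rug"),
]

def query(subject, predicate):
    """Return every object asserted for the given subject and predicate."""
    return [obj for s, p, obj in facts if s == subject and p == predicate]

print(query("man", "is_wearing"))  # prints ['mask', 'blue tunic']
```

&lt;p&gt;In a real Fact Cloud these assertions would live as nodes and relationships in the graph database, but the triple-shaped idea is the same.&lt;/p&gt;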
&lt;p&gt;Rather, I want to move on quickly to the &quot;A-ha Moment&quot; I had about why creating a Fact Cloud Companion to the British Library Image Collection could be a Very Good Thing.&lt;/p&gt;
&lt;h2&gt;A FactMiners Fact Cloud for Images: Why?&lt;/h2&gt;
&lt;p&gt;Every time we look at an image, our brains decompress it into an &quot;explosion of facts.&quot; By bringing image collections into the FactMiners&#039; &quot;serious play arena&quot; we are, in effect, capturing that &quot;human image decompression&quot; process as a sharable artifact rather than a transient individual cognitive event. In other words, every child goes through the learning process of &quot;seeing&quot; what&#039;s in a picture. When these &quot;little learning machines&quot; do a portion of that natural childhood learning activity by playing FactMiners at the British Library Image Collection, we get a truly interesting &#039;by-product&#039; in the Fact Cloud Companion.&lt;/p&gt;
&lt;div class=&quot;image-left&quot;&gt;&lt;img src=&quot;/sites/default/files/danger_will_robinson.jpg&quot; width=&quot;286&quot; height=&quot;362&quot; alt=&quot;danger_will_robinson.jpg&quot; /&gt;&lt;/div&gt;
&lt;p&gt;Beyond the obvious use of a Fact Cloud in folksonomy-class applications supporting public and researcher access to the source collection, a FactMiners Fact Cloud Companion to the British Library Public Domain Image Collection would be an invaluable resource for that &lt;em&gt;new emerging museum and archive visitor base...&lt;/em&gt; &lt;strong&gt;robots.&lt;/strong&gt; Well, not the fully anthropomorphized walking/talking robots, at least not just yet. I&#039;m thinking here more of machine-learning programs, specifically those with any form of &#039;image vision&#039; capability – whether by crude file/data &#039;input&#039; or real-time vision sensors.&lt;/p&gt;
&lt;p&gt;Upon entering the British Library Image Collection, our robot/machine-learning-program visitors would find a rich &#039;playground&#039; in which to hone their vision capabilities. All those Fact Cloud &#039;facts&#039; about what is &#039;seen&#039; in the collection&#039;s previously &#039;unseen images&#039; would be available at machine-thinking/learning speed to answer the litany of questions – &quot;What&#039;s that?&quot;, &quot;Is that a snake?&quot;, &quot;Is that boy under the table?&quot; – questions that a machine-learning program might use to refine its vision capabilities.&lt;/p&gt;
&lt;p&gt;So while the primary intent of the project is making these images available for Open Culture sharing and use, there may be some equally valuable side effects of this project. The British Library Image Collection and its Fact Cloud Companion could become a &quot;go-to&quot; stop for any vision-capable robot or machine-learning program that aspires to better understand the world it sees.&lt;/p&gt;
&lt;h2&gt;A FactMiners Fact Cloud for Images: How?&lt;/h2&gt;
&lt;p&gt;As the good folks at the British Library well know, just getting a good folksonomy social-tagging resource developed for such a huge collection is itself no small task. This is why museums and archives, like the British Library and those collaborating in &lt;a href=&quot;http://www.steve.museum/&quot;&gt;the steve project&lt;/a&gt;, are turning to crowdsourcing methods to get the &#039;heavy-lifting&#039; of these tasks done. Crowdsourcing goes hand-in-hand with gamification in this regard. If we can&#039;t pay you to help us out, at least we can make the work fun, right?&lt;/p&gt;
&lt;div class=&quot;image-right&quot;&gt;&lt;img src=&quot;/sites/default/files/images/FactMiners_kid_playing_app.png&quot; width=&quot;460&quot; height=&quot;316&quot; alt=&quot;FactMiners_kid_playing_app.png&quot; /&gt;&lt;/div&gt;
&lt;p&gt;Well, you don&#039;t have to think too hard to realize that if creating a folksonomy is a big chore, then creating a useful Fact Cloud representing at least a good chunk of the &#039;seen&#039; in the previously &#039;unseen illustrations&#039; of the British Library Image Collection is a Way Too Big Chore. And this might be true. But I think that there is some uniquely wonderful &#039;harness-able labor&#039; to be tapped in this regard. &lt;/p&gt;
&lt;p&gt;I know we can make a really fun app where parents and older folks help kids learn by playing, building fact by fact a valuable resource at the British Library. A learning child is a torrent of cognitive processing. Let a stream of that raw learning energy run through the FactMiners game at the British Library Image Collection and you&#039;d have critical mass in a Fact Cloud faster than you can say, &quot;Danger, Will Robinson!&quot;&lt;/p&gt;
&lt;p&gt;And where might this lead? Well, where this all might lead Big Picture wise is beyond the scope of this post. But I can see it leading to a new, previously unimagined game to add to the mix of social games available to FactMiners players... and it&#039;s a bit of a doozy. :-)&lt;/p&gt;
&lt;p&gt;If the British Library creates a FactMiners Fact Cloud Companion to its Image Collection, and if that Fact Cloud becomes useful to robots (machine-learning programs) as a vision-learning resource, I can see where we would want to add a &lt;strong&gt;&#039;Seeing Eye Child&#039; Robot Adoption Agency Game&lt;/strong&gt; to the FactMiners game plug-ins. What would that game be like?&lt;/p&gt;
&lt;div class=&quot;image-left&quot;&gt;&lt;img src=&quot;/sites/default/files/images/FactMiners_robot_training_kids.png&quot; width=&quot;541&quot; height=&quot;409&quot; alt=&quot;FactMiners_robot_training_kids.png&quot; /&gt;&lt;/div&gt;
&lt;p&gt;Well, as good as an Image Collection Fact Cloud might be to learn from, and as smart as a machine-learning program might be as a learner, a robot&#039;s learning to see isn&#039;t likely to be a fully automated process. So we create a game where one or more kids &#039;adopt&#039; a robot/machine-learning program to help it learn. In this case, the FactMiners player would gain experience points, badges, etc. by being available for &#039;vision training&#039; sessions with the adopted robot. The FactMiners player is, in effect, the referee and coach to the robot as it learns to see. &lt;/p&gt;
&lt;p&gt;It doesn&#039;t take much imagination to see how this could lead to schools fielding teams in contests to take a &#039;stock&#039; robot/machine-learning-program and train it to enter various vision recognition challenges. And when I let my imagination run with these ideas, it gets very interesting real fast. But any run, even of one&#039;s imagination, starts with a first step.&lt;/p&gt;
&lt;p&gt;Will we get a chance to make a Fact Cloud Companion to the British Library Image Collection? I don&#039;t know. This week the British Library took &lt;a href=&quot;http://britishlibrary.typepad.co.uk/digital-scholarship/2013/12/a-million-first-steps.html&quot;&gt;a million first steps&lt;/a&gt; toward making their vast digital image collection available to all for free. Perhaps the first step of posting this article will lead us on a path where we will have some serious fun working with the Library to help kids who help robots learn to see and understand our world.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;--Jim Salmons--&lt;br /&gt;
Cedar Rapids, Iowa USA&lt;/em&gt;&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;Update:&lt;/strong&gt; An encouraging reply of exploratory interest from the good folks at the British Library Labs has juiced my motivation to further &lt;a href=&quot;/blogs/factminers-introducing-seeing-eye-child-robot-adoption-agency&quot;&gt;explore the potential for the &#039;Seeing Eye Child&#039; Robot Adoption Agency&lt;/a&gt; as a FactMiners plug-in game.&lt;/p&gt;&lt;/blockquote&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Sun, 15 Dec 2013 21:21:20 +0000</pubDate>
 <dc:creator>Jim Salmons</dc:creator>
 <guid isPermaLink="false">91 at http://www.softalkapple.com</guid>
 <comments>http://www.softalkapple.com/blogs/factminers-fact-cloud-british-library-image-collection#comments</comments>
</item>
<item>
 <title>Softalkers in Northern California Meetup Opportunity</title>
 <link>http://www.softalkapple.com/blogs/softalkers-northern-california-meetup-opportunity</link>
 <description>&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot; property=&quot;content:encoded&quot;&gt;&lt;p&gt;To ex-staffers and any &#039;Friend of Softalk&#039;, I want to remind folks that I will be in northern California (San Francisco) area, November 9th through 16th. I would value any opportunity to meet with folks who have fond memories of Softalk and stories to tell and, especially, anyone who shares an interest in The Softalk Apple Project.&lt;/p&gt;
&lt;p&gt;So far, my agenda includes a visit on Tuesday, November 12th, where I will have the pleasure to spend the better part of the day and dinner out with Margot and Al Tommervik. This will be the first time I have seen them in over thirty years! While we&#039;ll have plenty to talk about in terms of what has happened in our lives since our Softalk days together, my real excitement will be the chance to explore our potential future collaboration through this project. I trust we&#039;ll have more to say about this following our meeting.&lt;/p&gt;
&lt;p&gt;I&#039;m also making part of my trip a &quot;pilgrimage&quot; to attend the &lt;a href=&quot;http://www.kickstarter.com/projects/jpf/homebrew-computer-club-reunion&quot;&gt;Homebrew Computer Club Reunion event on November 11th&lt;/a&gt;. I got to Kickstarter.com too late to contribute at a level that included an admission ticket to the event, so I may only make it to the &#039;rope line&#039; outside. But I am sure I will have an opportunity to commune with the moment -- and likely meet folks who have a soft spot in their hearts for Softalk magazine. My goals are simply to be there/near, and to reflect on the experience here on The Softalk Apple Project website.&lt;/p&gt;
&lt;p&gt;I am also planning a visit to &lt;a href=&quot;http://www.neotechnology.com/&quot;&gt;Neo Technology&lt;/a&gt;, the San Mateo-based technology vendor of the &lt;a href=&quot;http://www.neo4j.org/&quot;&gt;Neo4j&lt;/a&gt; Open Source graph database that we have selected for building the &lt;a href=&quot;/about_graphs_and_factmining&quot;&gt;FactMiners social-game app&lt;/a&gt; that we&#039;ll develop to create the Softalk archive &#039;Fact Cloud&#039; repository as an educational and research resource. And there are a couple of other things I&#039;d like to accomplish during my visit, but these are still in the works.&lt;/p&gt;
&lt;p&gt;Regardless of whatever else may shape up, &lt;strong&gt;I still have time for you if you love Softalk and have stories to tell&lt;/strong&gt;. If you are in the area and want to explore the opportunity to get together, please contact me either through this website, or better yet, via reply post to this announcement on the &lt;a href=&quot;https://www.facebook.com/groups/20506815695&quot;&gt;Softalk Forever!!! Facebook group&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I look forward to hearing from former colleagues and other friends of Softalk.&lt;/p&gt;
&lt;p&gt;--Jim--&lt;br /&gt;
Cedar Rapids, Iowa&lt;/p&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Mon, 04 Nov 2013 17:50:16 +0000</pubDate>
 <dc:creator>Jim Salmons</dc:creator>
 <guid isPermaLink="false">89 at http://www.softalkapple.com</guid>
 <comments>http://www.softalkapple.com/blogs/softalkers-northern-california-meetup-opportunity#comments</comments>
</item>
</channel>
</rss>
